Tags: python, pytorch, classification, mlp

Can't classify the inputs of a formula by its output (Celsius-Fahrenheit)


I have been teaching myself deep learning for a while. Now I am trying to build a classification model in PyTorch. The inputs and outputs are taken from the Celsius-Fahrenheit conversion formula.

C = (F-32)/1.8

The inputs are Fahrenheit values, and the outputs are class labels: 1 if the corresponding Celsius value is positive, 0 otherwise.

Input   Output
 ...      1
  34      1
  33      1
  32      0
  31      0
 ...      0
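
For example, F = 33 gives C = (33 - 32)/1.8 ≈ 0.56 > 0, so it is labeled 1, while F = 32 gives C = 0, which the table labels 0.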

I tried the following pipeline, but I can't configure the model to predict the test samples with 100% accuracy. How can I reach that accuracy?

import torch
import numpy as np

x = np.arange(-100, 100)
y = np.where((x - 32) / 1.8 > 0, 1., 0.)  # label 1 where Celsius is positive

x = torch.from_numpy(x).to(torch.float32).unsqueeze(1)
y = torch.from_numpy(y).to(torch.float32).unsqueeze(1)

class BasicModel(torch.nn.Module):
  def __init__(self, in_features: int, out_features: int):
    super().__init__()
    self.in_features = in_features
    self.out_features = out_features

    self.linear = torch.nn.Linear(in_features=self.in_features, out_features=self.out_features)
    self.sigmoid = torch.nn.Sigmoid()

  def forward(self, input):
    out = self.linear(input)
    out = self.sigmoid(out)

    return out

model = BasicModel(1, 1)
loss_func = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epochs = 1000

model.train()
for epoch in range(epochs):
  losses = []
  for value, target in zip(x, y):  # one SGD update per sample
    optimizer.zero_grad()
    prediction = model(value)
    loss = loss_func(prediction, target)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())  # .item() detaches; appending the tensor keeps every graph alive

  if epoch % 100 == 0:
      for name, param in model.named_parameters():
          print(name, param.data, end=" ")
      print(f"Epoch:{epoch} loss:{sum(losses)/len(losses)}")

x_test = torch.tensor([[33.],
                      [32.]]) # 2 sample test input

model.eval()
with torch.no_grad():  # no gradients needed at evaluation time
  print(f"Test:{model(x_test)}")  # should be [[>0.5], [<0.5]]

I expect the weight and bias to converge to 0.5555 and -17.7777 respectively, since C = (F - 32)/1.8 = (1/1.8)*F - 32/1.8 ≈ 0.5555*F - 17.7777, but both keep increasing. With these expected values, shouldn't the output of the sigmoid give 1 or 0? How can I solve this problem?


Solution

  • Sigmoid can never output exactly 1 or 0: it gets closer and closer to 1 for large inputs, which is probably why your weights keep increasing, but it never reaches 1. To solve this issue, remove the sigmoid layer, or add a linear layer after the sigmoid layer.
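
    As a quick numeric check of this point (a minimal sketch; the "ideal" weights are taken from the question, and the x100 scaling is my own illustration):

      import torch

      w, b = 0.5555, -17.7777  # the weights the question expects
      x_test = torch.tensor([[33.], [32.]])

      print(torch.sigmoid(w * x_test + b))          # ~[[0.635], [0.500]], not [[1.], [0.]]
      # Scaling the logits pushes the first output toward 1, but F = 32 sits
      # essentially on the decision boundary, so its output stays near 0.5 and
      # BCELoss keeps pushing the weights to grow:
      print(torch.sigmoid(100 * (w * x_test + b)))  # ~[[1.000], [0.458]]

    Note that with a 0.5 threshold these predictions are already correct; it is only the exact targets 1 and 0 that are unreachable. In PyTorch, removing the sigmoid usually means training on raw logits with torch.nn.BCEWithLogitsLoss and thresholding the logits at 0 for prediction.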

    For statically building your classifier, you might actually want to use the Heaviside step function as the activation function: https://en.wikipedia.org/wiki/Heaviside_step_function. However, as the gradient of this function is zero everywhere it is defined, this will not allow you to train the classifier.
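
    A minimal sketch of such a static (untrainable) classifier; hard-coding the weights from the known formula is my own illustration, not something trained:

      import torch

      x_test = torch.tensor([[33.], [32.]])
      # C = (F - 32)/1.8 is positive exactly when F - 32 > 0, so only the sign
      # of F - 32 matters. The second argument of torch.heaviside is the value
      # returned at exactly 0; the question's table labels F = 32 (C = 0) as 0.
      print(torch.heaviside(x_test - 32., torch.tensor(0.)))  # tensor([[1.], [0.]])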