Tags: python, machine-learning, neural-network, activation-function, stochastic-gradient

How can I get my neural net to correctly do linear regression?


I used the code for the first neural network from the book Neural Networks and Deep Learning by Michael Nielsen, which was originally written for recognising handwritten digits. It uses stochastic gradient descent with mini-batches and the sigmoid activation function. I gave it one input neuron, two hidden neurons and one output neuron. I then give it data that represents a straight line: a number of points between 0 and 1 where the output is the same as the input. No matter how I tweak the learning rate and the number of epochs, the network is never able to fit the line. Is that due to the fact that I am using the sigmoid activation function? If so, what other function can I use?
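The training data is essentially of this form (a simplified sketch, not my exact code; the shapes follow the (x, y) tuple format that the Network class below expects):

import numpy as np

# points on the line y = x between 0 and 1, stored as (input, target) tuples
# of (1, 1) column vectors, which is the format Network.SGD works with
xs = np.arange(0.0, 1.0, 0.01)
training_data = [(np.array([[x]]), np.array([[x]])) for x in xs]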

[Plot: the prediction of the network based on new input]

The blue line represents the prediction of the network, while the green line is the training data; the inputs for the network predictions were numbers between 0 and 3, in steps of 0.01.
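Roughly how the prediction curve was generated (again a sketch, not the exact plotting code; it assumes the training_data from the sketch above, that the module below is saved as network.py, and illustrative hyperparameters):

import numpy as np
import matplotlib.pyplot as plt
from network import Network  # the module pasted below

net = Network([1, 2, 1])  # one input, two hidden, one output neuron
net.SGD(training_data, epochs=30, mini_batch_size=10, eta=3.0)

# evaluate the trained network on new inputs between 0 and 3
test_xs = np.arange(0.0, 3.0, 0.01)
predictions = [net.feedforward(np.array([[x]]))[0][0] for x in test_xs]

plt.plot(test_xs, predictions, label="network prediction")  # blue line
plt.plot(xs, xs, label="training data")                     # green line
plt.legend()
plt.show()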

Here's the code:

"""
network.py
~~~~~~~~~~
A module to implement the stochastic gradient descent learning
algorithm for a feedforward neural network.  Gradients are calculated
using backpropagation.  Note that I have focused on making the code
simple, easily readable, and easily modifiable.  It is not optimized,
and omits many desirable features.
"""

#### Libraries
# Standard library
import random

# Third-party libraries
import numpy as np

from sklearn.datasets import make_regression
import matplotlib.pyplot as plt

class Network(object):

    def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network.  For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron.  The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1.  Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs from later layers."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        '''Creates a list of arrays of random numbers with mean 0 and variance 1.
        These arrays represent the biases: one random number per neuron, and each
        array holds the biases of one layer (the input layer gets no biases).
        '''
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
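        # creates the weight matrices: entry (j, k) of a layer's matrix is the
        # weight from neuron k in the previous layer to neuron j in that layer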
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    # self always refers to the current instance of the class
    def feedforward(self, a):
        # a is the vector of activations fed into the network
        """Return the output of the network if ``a`` is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
            
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent.  The ``training_data`` is a list of tuples
        ``(x, y)`` representing the training inputs and the desired
        outputs.  The other non-optional parameters are
        self-explanatory.  If ``test_data`` is provided then the
        network will be evaluated against the test data after each
        epoch, and partial progress printed out.  This is useful for
        tracking progress, but slows things down substantially."""
        if test_data: n_test = len(test_data)
        n = len(training_data)
        #the outer loop runs once per epoch, i.e. the network sees the whole training set once per iteration
        for j in range(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in range(0, n, mini_batch_size)]
            #data is made into appropriately sized mini-batches
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
                for x,y in mini_batch:
                    print("Loss: ", (self.feedforward(x) - y)**2)
            if test_data:
                print ("Epoch {0}: {1} / {2}".format(
                    j, self.evaluate(test_data), n_test))
            else:
                print ("Epoch {0} complete".format(j))

    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
        is the learning rate."""
        #nabla_b and nabla_w have the same shapes as self.biases and self.weights but are
        #filled with zeros; they accumulate the gradient and are reset for every mini-batch.
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        #updates the weights and biases by subtracting eta times the average (over the
        #mini-batch) of the derivatives of the cost function w.r.t. the weights/biases.
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

    def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x.  ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``."""
        """Makes two lists filled with zeros in the same shape as biases and weights"""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x]
        zs = [] # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            #multiplies w matrix for each layer by activation vector and adds bias
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        #this calculates the output error
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        #this is the derivative of the cost function wrt the biases in the last layer
        nabla_b[-1] = delta
        #this is the derivative of the cost function wrt the weights in the last layer
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in range(2, self.num_layers):  # l counts layers from the back of the network
            z = zs[-l]
            sp = sigmoid_prime(z)
            #This is the vector of errors of the layer -l
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            #fills the matrices nabla_b and nabla_w with the derivatives of the 
            #cost function with respect to the biases and weights in layers -l
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)

    def evaluate(self, test_data):
        """Return the number of test inputs for which the neural
        network outputs the correct result. Note that the neural
        network's output is assumed to be the index of whichever
        neuron in the final layer has the highest activation."""
        test_results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in test_data]
        #returns the number of inputs that were predicted correctly.
        return sum(int(x == y) for (x, y) in test_results)

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations-y)

#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function.""" 
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))

Solution

  • The sigmoid activation function is meant for classification tasks, such as recognising handwritten digits: it squashes every output into the range (0, 1), so the network can never follow the line y = x once the input grows beyond 1. Linear regression is a regression task, where the output should be continuous and unbounded. If you want the output layer to act as a regressor, use a linear (identity) activation on the output layer, which is the default for Keras Dense layers; see the sketch below.
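A minimal sketch of that in Keras (assuming TensorFlow 2.x is installed; the layer sizes mirror the 1-2-1 network from the question and the hyperparameters are illustrative):

import numpy as np
from tensorflow import keras

# training data: points on the line y = x between 0 and 1
x_train = np.arange(0.0, 1.0, 0.01).reshape(-1, 1)
y_train = x_train.copy()

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(2, activation="sigmoid"),  # hidden layer
    keras.layers.Dense(1),  # output layer: linear (identity) activation is the default
])
model.compile(optimizer="sgd", loss="mse")
model.fit(x_train, y_train, epochs=500, batch_size=10, verbose=0)

print(model.predict(np.array([[0.25], [0.75]])))  # should be close to the inputs

Alternatively, you can keep the code from the question and only make the output layer linear: apply the identity instead of sigmoid in the last step of feedforward, and drop the sigmoid_prime(zs[-1]) factor from the output-layer delta in backprop.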