Tags: machine-learning, neural-network, perceptron, multi-layer, activation-function

Making training examples for a multilayer perceptron


I'm trying to construct training examples in order to learn a set of weights and biases for a particular network that uses a hard threshold activation function.

  • Four inputs x_1, ..., x_4, where each x_i is a real number. The network must output y = 1 if x_1 < x_2 < x_3 < x_4 (strictly sorted order), and 0 otherwise.

  • A hard threshold activation function:

f(z) = 1 if z >= 0, and f(z) = 0 if z < 0

h1 = x1w11 + x2w12 + x3w13 + x4w14 + b11

h2 = x1w21 + x2w22 + x3w23 + x4w24 + b21

h3 = x1w31 + x2w32 + x3w33 + x4w34 + b31

y = h1w1 + h2w2 + h3w3 + b (where h1, h2, h3 actually mean f(h1), f(h2), f(h3), because of the activation function)

The final output is f(y).
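As an aside, this architecture can represent the target function exactly. Here is a minimal sketch of the forward pass with hand-picked (not learned) weights of my own choosing; the small margin eps is my assumption for handling the strict inequality, since f fires on z >= 0:

```python
import numpy as np

def f(z):
    # hard threshold: 1 where z >= 0, else 0
    return (np.asarray(z) >= 0).astype(int)

# Hypothetical hand-picked weights (an illustration, not a trained result):
# each hidden unit checks one pairwise comparison, with a small margin eps
# so that ties like (0, 0, 0, 1) do not count as strictly sorted.
eps = 1e-6
W1 = np.array([[-1.0,  1.0,  0.0,  0.0],   # h1 fires when x2 - x1 - eps >= 0
               [ 0.0, -1.0,  1.0,  0.0],   # h2 fires when x3 - x2 - eps >= 0
               [ 0.0,  0.0, -1.0,  1.0]])  # h3 fires when x4 - x3 - eps >= 0
b1 = np.array([-eps, -eps, -eps])
w  = np.array([1.0, 1.0, 1.0])
b  = -3.0                                   # output fires only if all three hidden units fire

def forward(x):
    h = f(W1 @ x + b1)        # f(h1), f(h2), f(h3)
    return int(f(w @ h + b))  # final output f(y)

print(forward(np.array([-2.0, -1.0, 0.0, 1.0])))  # 1 (strictly sorted)
print(forward(np.array([0.0, 0.0, 0.0, 1.0])))    # 0 (ties are not strictly sorted)
```

The output unit with bias -3 implements a three-input AND over the hidden units, so y = 1 exactly when all three pairwise comparisons hold.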

I guess the training examples should look like:

(-2,-1,0,1) -> output 1, (0,0,0,0) -> output 0, (0,0,0,1) -> output 0, (1,2,3,4) -> output 1.

... and so on. But the domain of the inputs is too broad to build specific examples for the multilayer perceptron algorithm.

Can you help me find proper examples for applying the algorithm?


Solution

  • No, it's not too broad: you can just concentrate on the [0, 1] range for each x_i, since you need normalized data to train a neural network anyway.

    So basically you can generate uniformly distributed random numbers in the [0, 1] range, check whether they are sorted, and produce the label accordingly. Repeat this, say, 10K or 100K times, and you have a dataset to train an MLP. You could also discretize the [0, 1] range with a chosen step to generate the numbers.
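    The recipe above can be sketched as follows (the function name, seed, and sample count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, an arbitrary choice

def make_dataset(n):
    # n uniformly distributed random points in [0, 1]^4
    X = rng.random((n, 4))
    # label is 1 iff x1 < x2 < x3 < x4 (strictly sorted)
    y = np.all(X[:, :-1] < X[:, 1:], axis=1).astype(int)
    return X, y

X, y = make_dataset(10_000)
print(X.shape, y.shape)  # (10000, 4) (10000,)
# Only 1 of the 4! = 24 orderings is sorted, so positives are rare (~4%);
# you may want to oversample them or generate some sorted examples directly.
print(y.mean())
```

Note the class imbalance: a uniform sample is sorted with probability 1/24, so balancing the dataset (or weighting the loss) is worth considering before training.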