I've implemented an MLP that detects handwritten digits. So far the network can identify the digits 0 and 1, but when I add a new class, i.e. 2, it is unable to learn it. At first I thought I had made a mistake in the implementation of the new class, so I decided to swap the new class with one that already worked: if class0 was 0 and the new class was 2, now class0 is 2 and the new class is 0. Surprisingly, the new class was then detected with almost no error, while class0 had a huge error, which suggests the new class is properly implemented.
The MLP has two hidden layers with 20 units each; both are nonlinear with a sigmoid activation.
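For reference, here is a minimal NumPy sketch of the architecture described above (two hidden layers of 20 sigmoid units each). The input size of 784 (flattened 28x28 digit images), the one-output-per-class layout, and the weight initialisation are assumptions, since the original code isn't shown.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes = 784, 20, 3   # assumed: 28x28 inputs, classes {0, 1, 2}

# Two hidden layers of 20 sigmoid units, plus an output layer with one unit per class.
W1 = rng.normal(0, 0.1, (n_hidden, n_in));     b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_hidden)); b2 = np.zeros(n_hidden)
W3 = rng.normal(0, 0.1, (n_classes, n_hidden)); b3 = np.zeros(n_classes)

def forward(x):
    h1 = sigmoid(W1 @ x + b1)    # first hidden layer
    h2 = sigmoid(W2 @ h1 + b2)   # second hidden layer
    return sigmoid(W3 @ h2 + b3) # one score per class

x = rng.random(n_in)             # dummy flattened digit image
print(forward(x))                # three scores, one per class
```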
If I understand your question correctly: when you add a new class to a model such as the neural network you trained here, the final layer has to change, i.e. the number of neurons in the output layer must increase to match the new number of classes.
This can be one of the reasons the new class is not being detected.
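To make that concrete, here is a small NumPy sketch (the shapes are assumptions, not your actual code) of what adding a class means for the output layer: the final weight matrix gains a row, and that new row starts out untrained even if the old rows are kept.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 20

# Output layer trained on two classes {0, 1}: shape (2, n_hidden).
W_out_2cls = rng.normal(0, 0.1, (2, n_hidden))
b_out_2cls = np.zeros(2)

# Adding digit 2 requires a (3, n_hidden) output layer: keep the learned rows
# and append a freshly initialised row for the new class, which still needs training.
W_out_3cls = np.vstack([W_out_2cls, rng.normal(0, 0.1, (1, n_hidden))])
b_out_3cls = np.append(b_out_2cls, 0.0)

print(W_out_2cls.shape, W_out_3cls.shape)   # (2, 20) (3, 20)
```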