machine-learning, statistics, neural-network, regression, spss-modeler

Why represent neural network quality as 1 minus the ratio of the mean absolute error in prediction to the range of the predicted values?


The documentation for IBM's SPSS Modeler defines neural network quality as:

For a continuous target, this is 1 minus the ratio of the mean absolute error in prediction (the average of the absolute values of the predicted values minus the observed values) to the range of predicted values (the maximum predicted value minus the minimum predicted value).
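In symbols, as I read that definition, the measure is:

    quality = 1 - mean(|predicted - observed|) / (max(predicted) - min(predicted))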

Is this calculation standard?

I'm having trouble understanding how quality is derived from this.


Solution

  • The main point here is to make the network quality measure independent of the range of the output values. The proposed measure is 1 - relative_error. This means that for a perfect network you get the maximum quality of 1, and because the mean absolute error is never negative, the quality can never exceed 1.

    Example:

    If you predict values in the range 0 to 1, an absolute error of 0.2 corresponds to 20% of that range. When predicting values in the range 0 to 100, a much larger absolute error of 20 corresponds to the same relative error of 20%.

    When you plug these numbers into the formula you describe, you get the same quality in both cases (a short code sketch follows the example):

    1 - 0.2 / (1 - 0) = 0.8
    
    1 - 20 / (100 - 0) = 0.8
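
    For reference, here is a minimal Python sketch of this quality measure (the function name modeler_quality is my own, not part of SPSS Modeler; it assumes NumPy arrays or lists of predicted and observed values):

        import numpy as np

        def modeler_quality(predicted, observed):
            """1 minus the ratio of mean absolute error to the range of predicted values."""
            predicted = np.asarray(predicted, dtype=float)
            observed = np.asarray(observed, dtype=float)
            mae = np.mean(np.abs(predicted - observed))       # mean absolute error
            pred_range = predicted.max() - predicted.min()    # range of predicted values
            return 1.0 - mae / pred_range

        # Both examples above: an error of 20% of the range gives the same quality of 0.8
        print(modeler_quality([0.0, 1.0], [0.2, 0.8]))        # MAE 0.2, range 1   -> 0.8
        print(modeler_quality([0.0, 100.0], [20.0, 80.0]))    # MAE 20,  range 100 -> 0.8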