TMultiLayerPerceptron normalized output out of range

I am using TMultiLayerPerceptron with 6 inputs, 6 hidden neurons, and one output. I use the @ symbol on the output neuron to normalize it. The training data for the output neuron is either 0 or 1. However, when I run the NN, the output varies from about -2 to 1. I have tried all of the different training methods, and a wide range of hidden-neuron counts and numbers of epochs, all with similar results. The actual NN performance is very good; it is just not within the normalized range from 0 to 1. Unfortunately my data and code are far too large to post here. I hope that someone can suggest possible causes of this type of behavior.
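
Roughly, the network is defined like this (a simplified sketch with placeholder branch names; the real code is much larger):

// 6 inputs, 6 hidden neurons, one output "type" that is normalized via "@".
TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
   "in1,in2,in3,in4,in5,in6:6:@type",  // "@" requests normalization of the output
   tree,                               // TTree holding the training data
   "Entry$%2==0",                      // training sample
   "Entry$%2==1");                     // test sample
mlp->Train(500, "text,update=50");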

ROOT version 4.04/02

Hi,

Normalizing the output means that it is rescaled so that its mean becomes 0 and its sigma becomes 1, which is why your values fall outside [0, 1]. For the type of application you are interested in (a 0/1 target), this is not what you want.
Since your output is already 0 or 1, you should simply remove the “@” symbol from the output neuron in the layout string.
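
For example, something like this should work (an untested sketch with placeholder branch names; adapt it to your own tree):

// Same layout as before, but the output branch "type" is listed without "@",
// so the 0/1 target is used as-is instead of being rescaled.
TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
   "in1,in2,in3,in4,in5,in6:6:type",  // 6 inputs, 6 hidden neurons, raw output
   tree,                              // your TTree with the input and "type" branches
   "Entry$%2==0",                     // training sample
   "Entry$%2==1");                    // test sample
mlp->Train(500, "text,update=50");

// The network output should then stay close to the 0-1 range of the target:
Double_t params[6] = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6};  // example input values
Double_t out = mlp->Evaluate(0, params);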

Cheers,
Christophe.