TMultilayerPerceptron always gives the same result

Dear Rooters,

I have a TMultilayerPerceptron. Training goes well, but on another sample it always returns the same output when evaluated with eANNPair->Evaluate(0,params);
For example:
Params: 0.000709847 11.4472 80.6485 -650 0.0976215 1 - Input: 1
value=eANNPair->Evaluate(0,params) = 0.675225

Params: 0.000515387 23.7956 31.8739 -650 0.0623857 0 - Input: 1
value=eANNPair->Evaluate(0,params) = 0.675225

What are the main problems that could cause this effect?
The NN layout used for evaluation is the same as the one used for training.

Thank you for your suggestions.