ANN weights deterministic, same result?

Hello,
I recently saw this behaviour in TMultiLayerPerceptron.
Is it possible that the same input data can produce different ANN weights?
For example, I run the training and get the weights; I run it again and get different weights; and a third run gives yet another set of weights. Shouldn't the same training be reproducible? Or is there some randomization buried deep inside the code?
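
For concreteness, a minimal sketch of the kind of run I mean (the tree, branch names and network layout are placeholders, not my real setup):

// Train once on a tree and dump the resulting weights to a file.
#include "TTree.h"
#include "TMultiLayerPerceptron.h"

void run_once(TTree *tree, const char *outfile)
{
   // inputs x,y -> one hidden layer of 5 neurons -> output "type"
   TMultiLayerPerceptron mlp("x,y:5:type", tree,
                             "Entry$%2==0",   // training events
                             "Entry$%2==1");  // test events
   mlp.Train(100, "text");
   mlp.DumpWeights(outfile);
}

Calling run_once(tree, "run1.txt") and run_once(tree, "run2.txt") on exactly the same tree gives me two different weight files.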

Best regards.

Hi,

I am no great expert on NNs, and I am also interested in this point, so I just want to stimulate other replies.

There is randomization in the training: if a genetic algorithm is used, a random initial weight configuration is set.
The training should then converge and give you the same result.
My guess is that, if the global minimum is not found before the last epoch of the training, the best local minimum
is kept and those weights are saved. That local minimum can depend on the initial weight configuration.
By increasing the maximum number of epochs, one should be able to find the global minimum and get a stable result.
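
One way to test this (a sketch, untested): fix the initial weight configuration by hand using DumpWeights()/LoadWeights() and the "+" training option, which according to the class documentation skips the re-randomisation of the weights when Train() starts:

#include "TTree.h"
#include "TMultiLayerPerceptron.h"

void train_from_fixed_start(TTree *tree)
{
   TMultiLayerPerceptron mlp("x,y:5:type", tree,
                             "Entry$%2==0", "Entry$%2==1");

   // Done once, in a separate run: pick a random starting point and save it.
   //   mlp.Randomize();
   //   mlp.DumpWeights("start_weights.txt");

   // Every run: reload the same starting weights, then train with "+"
   // so Train() does not randomize them again.
   mlp.LoadWeights("start_weights.txt");
   mlp.Train(500, "text,update=50,+");

   mlp.DumpWeights("final_weights.txt");
}

If the final weights are then identical from run to run, the non-reproducibility indeed comes only from the random initialisation.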

Please, NN experts, comment on this!

cheers,
delo