Since the initial values of the weights between neurons affect the training results, I use the following code to obtain the initial weights (saved in “weighti.txt”) so that I get the same result every time.
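(The code itself was not preserved in this post; a minimal sketch of such a procedure, assuming the network layout from the mlpHiggs.C tutorial and the `TMultiLayerPerceptron::Randomize`/`DumpWeights`/`LoadWeights` interface, could look like this — it must be run inside ROOT:)

```cpp
// Sketch only: assumes ROOT and a TTree *simu with the mlpHiggs.C branches.
void fixweights(TTree *simu) {
   TMultiLayerPerceptron *mlp =
      new TMultiLayerPerceptron("@msumf,@ptsumf,@acolin:5:3:type", simu,
                                "Entry$%2", "(Entry$+1)%2");
   mlp->Randomize();                 // choose random initial weights once...
   mlp->DumpWeights("weighti.txt");  // ...and save them to a file
   // Every later run restores the same starting point before training:
   mlp->LoadWeights("weighti.txt");
   mlp->Train(1000, "text,update=10");
   mlp->DumpWeights("weightf.txt");  // final weights, for comparison between runs
}
```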
I’m surprised, since I would expect your procedure to work fine.
Can you tell me which version of ROOT you are using?
Which training method are you using? If you use the simple stochastic method, the result may differ because the events in the TTree are shuffled first.
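For reference, the learning method can be selected explicitly; a hedged fragment assuming an existing `TMultiLayerPerceptron *mlp` (requires ROOT):

```cpp
// Stochastic minimization visits the training events in shuffled order,
// so two runs can diverge even when they start from identical weights:
mlp->SetLearningMethod(TMultiLayerPerceptron::kStochastic);
// BFGS (the default) is deterministic for a fixed starting point:
mlp->SetLearningMethod(TMultiLayerPerceptron::kBFGS);
```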
Could you also run the following tests?
- try with ntrain=0
- try dumping the weights without training the network (randomize, dump, load, dump again, and compare with the first dump)
I’ll try to reproduce the problem on my side with the tutorial, and let you know when I have results (but I’m rather busy this week).
The ROOT version I am using is 4.00.04, with the default training method (kBFGS?).
I did what you suggested: with ntrain=0, the two files with the dumped final weights are exactly the same. With ntrain=10 they are slightly different, and with ntrain=1000 the differences grow, as shown in the attachment.
I also ran the same procedure using mlpHiggs.C from the tutorials and found the same problem.
I cannot reproduce the problem. I attached the modified mlpHiggs.C that I used, as well as the output files. As you can see, the weights are strictly the same.
I’m using the HEAD version of ROOT, so I can only suggest that you switch to a newer version (even though I’m not aware of any modification that may have influenced this).