Questions about TMLP

I have a list of questions below; could anyone give me an answer? Thanks.
Question 1. I train on Sample A and test on Sample B.
After the training on Sample A finished, I got a file C recording the neuron weights and the synapse weights.
During the testing of Sample B, I know that it is necessary to reload file C. But why do I have to reload Sample A at the same time?
I made a small test: I randomly chose 100 events from Sample A (which has nearly 7500 events) to make Sample A1.
Case 1: I reloaded Sample A1 and file C, and at the beginning I got
ptrk -> 0.0474875 +/- 0.0421531
pt -> 0.0474875 +/- 0.0421531
normPH -> 0.0474875 +/- 0.0421531
goodHits -> 0.0474875 +/- 0.0421531
tof1m2 -> 0.0474875 +/- 0.0421531
zhit1 -> 0.0474875 +/- 0.0421531
ph1 -> 0.0474875 +/- 0.0421531
tof2m2 -> 0.0474875 +/- 0.0421531
zhit2 -> 0.0474875 +/- 0.0421531
ph2 -> 0.0474875 +/- 0.0421531
Case 2: I reloaded Sample A and file C, and at the beginning I got
ptrk -> 0.0271719 +/- 0.0359351
pt -> 0.0271719 +/- 0.0359351
normPH -> 0.0271719 +/- 0.0359351
goodHits -> 0.0271719 +/- 0.0359351
tof1m2 -> 0.0271719 +/- 0.0359351
zhit1 -> 0.0271719 +/- 0.0359351
ph1 -> 0.0271719 +/- 0.0359351
tof2m2 -> 0.0271719 +/- 0.0359351
zhit2 -> 0.0271719 +/- 0.0359351
ph2 -> 0.0271719 +/- 0.0359351

The results are a bit different in the two cases. And when Sample A1 contains only 1 event, the distribution is very bad compared with 100 events and 7500 events.

My question is: what leads to this difference?

Question 2:
In the error-vs-epoch graph, what does the error mean?

Question 3:
I use the following code to train a network:

void epoch()
{
   TFile fin("…/neural.root", "read");
   TTree *t_in = (TTree*)fin.Get("tree_dEdx");   // training tree
   TMultiLayerPerceptron *mlp =
      new TMultiLayerPerceptron("ptrk,pt,normPH,goodHits:8:3:type", t_in);
   TMLPAnalyzer *mlpa = new TMLPAnalyzer(mlp);
   mlpa->GatherInformations();
   mlpa->CheckNetwork();
   mlp->LoadWeights("…/dEdx.txt");     // start from previously dumped weights
   mlp->Train(100, "text,update=10");  // 100 epochs, text output every 10
   mlp->DumpWeights("…/test.txt");     // save the trained weights
   mlp->Export("NN", "c++");
   mlp->Export("NN", "Python");
}
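For testing, I then evaluate the trained network roughly like this (a PyROOT sketch; the file names stand in for the real paths above and the input values are hypothetical):

import ROOT
from array import array

fin = ROOT.TFile("neural.root", "read")          # stands in for the elided path
t_test = fin.Get("tree_dEdx")                    # tree with the same branch names
mlp = ROOT.TMultiLayerPerceptron("ptrk,pt,normPH,goodHits:8:3:type", t_test)
mlp.LoadWeights("test.txt")                      # weights dumped after training
params = array('d', [0.40, 0.01, 1.1, 0.68])     # hypothetical ptrk, pt, normPH, goodHits
print(mlp.Evaluate(0, params))                   # response of output neuron 0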

In the test.txt file, I find 59 synapse weights and 16 neuron weights:

#neurons weights
0.354302
0.292847
0.175663
-0.346992
0.124682
-1.13705
0.634699
-1.75328
4.47485
0.62524
-1.87484
-5.58502
-1.63711
-0.703122
0.264243
0.924091
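
(For reference, these counts match the structure string "ptrk,pt,normPH,goodHits:8:3:type", i.e. 4 input, 8 and 3 hidden, and 1 output neurons: synapses 4×8 + 8×3 + 3×1 = 59, neurons 4 + 8 + 3 + 1 = 16.)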

But in the NN.py file, there are altogether 63 numbers; some of them are shown below:

class NN:
    def value(self,index,in0,in1,in2,in3):
        self.input0 = (in0 - 0.389542)/0.127415
        self.input1 = (in1 - 0.0140051)/0.362617
        self.input2 = (in2 - 1.11147)/1.19032
        self.input3 = (in3 - 0.677148)/0.0913964
        if index==0: return self.neuron0xa27a380()
        return 0.
    def neuron0xa279db8(self):
        return self.input0
    def neuron0xa279f20(self):
        return self.input1
    def neuron0xa27a088(self):
        return self.input2
    def neuron0xa27a1f0(self):
        return self.input3
    def neuron0xa27a478(self):
        input = 0.124682
        input = input + self.synapse0xa27a5c8()
        input = input + self.synapse0xa27a5f0()
        input = input + self.synapse0xa27a618()
        input = input + self.synapse0xa27a640()
        return ((1/(1+exp(-input)))*1)+0
    def neuron0xa27a668(self):
        input = -1.13705
        input = input + self.synapse0xa27a7d8()
        input = input + self.synapse0xa27a800()
        input = input + self.synapse0xa27a828()
        input = input + self.synapse0xa27a850()
        return ((1/(1+exp(-input)))*1)+0
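
The exported class is then used standalone, e.g. (a sketch; the input values are hypothetical):

from NN import NN   # the file produced by mlp->Export("NN","Python")

nn = NN()
# index 0 selects the single output neuron; the four arguments are the raw
# (un-normalized) values of ptrk, pt, normPH and goodHits
response = nn.value(0, 0.40, 0.01, 1.1, 0.68)
print(response)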

Here the input neurons are treated differently, and the four shift/scale pairs used to adjust the amplitude of the input neurons are not recorded in the test.txt file. So how could a user reuse these four normalization parameters next time?
Thanks.

You are probably using an old ROOT release.
In new versions (v5-04-00 and following), DumpWeights stores not only the neuron weights and synapse weights, but also the input and output normalization.

All the symptoms you mention stem from this. In old releases, the normalization was computed from the training sample; a different sample implies a different normalization, especially if the sample is small or not representative.
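
To illustrate (plain Python, not the ROOT internals): the normalization constants are just the mean and RMS of each input variable over the loaded sample, so a 100-event subsample gives different constants than the full 7500 events, and the inputs are shifted and scaled differently.

import math

def norm_constants(values):
    # mean and RMS of one input variable over the sample
    mean = sum(values) / len(values)
    rms = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return mean, rms

# each input then enters the network as (x - mean)/rms, so different
# samples imply different normalizations, hence different outputs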

Concerning your second question, the error is defined as the sum in quadrature, divided by two, of the error on each individual output neuron. This is described in the documentation.
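In formula form, for one event with network outputs o_i and desired outputs d_i, that is E = ½ Σ_i (o_i − d_i)².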

Thanks for your prompt reply.
About the error: I know it is defined as the sum in quadrature, divided by two, of the error on each individual output neuron. But is this error a total or an average? Has it been divided by the number of training samples?
Thanks.

I used version 5.14, and I find that the normalization parameters are recorded in the output file, listed below, as you said:
#input normalization
-0.375509 0.120853
-0.000236403 0.433001
-0.917722 0.140722
-2.64029 -8.34179
#output normalization
-3.69039 -4.59429
#neurons weights
-2.37508
0.715118
-1.32538
4.14307

Comparing with the NN.py file, I find the neuron weights of the input layer are not used. Why are they produced and recorded but not used?
Thanks for your kind help.

Hi,

First, concerning the error, it is indeed divided by the number of entries in the dataset. If you use datasets of different sizes, the error should remain similar.
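
In pseudo-code (a sketch of that definition; outputs() and targets() are hypothetical accessors, not the ROOT API):

def epoch_error(events):
    # mean over events of: half the squared error summed over the output neurons
    total = 0.0
    for ev in events:
        total += 0.5 * sum((o - d) ** 2 for o, d in zip(outputs(ev), targets(ev)))
    return total / len(events)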

Now, about the input weights: they are used in the python file; you will find them at the very beginning of the code. For example, dumping the weights in the tutorial gives

#input normalization
0.0509152 0.459987
0.0656804 0.188581
16.5033 134.719
#output normalization
1 0
#neurons weights
-0.1934
0.336673
...

while the python file starts with

class test:
    def value(self,index,in0,in1,in2):
        self.input0 = (in0 - 0.459987)/0.0509152
        self.input1 = (in1 - 0.188581)/0.0656804
        self.input2 = (in2 - 134.719)/16.5033
        if index==0: return ((self.neuron0x95ea8f8()*1)+0);
        return 0.
    def neuron0x95e8e98(self):
        return self.input0
    def neuron0x95e9028(self):
        return self.input1
    def neuron0x95e9200(self):
        return self.input2
(...)
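
Note how each "#input normalization" pair maps into the python code: the pair 0.0509152 0.459987 becomes (in0 - 0.459987)/0.0509152, i.e. the first number of each pair is the scale (sigma) and the second the offset (mean).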