I have a list of questions below; could anyone give me an answer? Thanks.
Question 1. I train on Sample A and test on Sample B.
After the training on Sample A finished, I got a file C recording the neuron weights and the synapse weights.
During the testing of Sample B, I know that it is necessary to reload file C. But why do I have to reload Sample A at the same time?
I made a small test: I randomly chose 100 events from Sample A to make Sample A1 (Sample A has nearly 7500 events).
Case 1: I reload Sample A1 and file C, and at the beginning I get
ptrk -> 0.0474875 +/- 0.0421531
pt -> 0.0474875 +/- 0.0421531
normPH -> 0.0474875 +/- 0.0421531
goodHits -> 0.0474875 +/- 0.0421531
tof1m2 -> 0.0474875 +/- 0.0421531
zhit1 -> 0.0474875 +/- 0.0421531
ph1 -> 0.0474875 +/- 0.0421531
tof2m2 -> 0.0474875 +/- 0.0421531
zhit2 -> 0.0474875 +/- 0.0421531
ph2 -> 0.0474875 +/- 0.0421531
Case 2: I reload Sample A and file C, and at the beginning I get
ptrk -> 0.0271719 +/- 0.0359351
pt -> 0.0271719 +/- 0.0359351
normPH -> 0.0271719 +/- 0.0359351
goodHits -> 0.0271719 +/- 0.0359351
tof1m2 -> 0.0271719 +/- 0.0359351
zhit1 -> 0.0271719 +/- 0.0359351
ph1 -> 0.0271719 +/- 0.0359351
tof2m2 -> 0.0271719 +/- 0.0359351
zhit2 -> 0.0271719 +/- 0.0359351
ph2 -> 0.0271719 +/- 0.0359351
The results differ a bit between the two cases, and when I make A1 contain only 1 event, the distribution is very bad compared with 100 events and 7500 events.
My question is: what leads to this difference?
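My own guess, which I cannot confirm, is that the network normalizes each input using the mean and sigma of the currently loaded sample, which would explain why reloading A1 instead of A changes the output. A minimal standalone Python sketch (synthetic data, not my real tree) of how such sample statistics drift with sample size:

```python
import random
random.seed(0)

def mean_sigma(xs):
    """Sample mean and (population) standard deviation."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

# A stand-in "Sample A" of 7500 values and a 100-event subsample "A1".
sample_a = [random.gauss(0.39, 0.13) for _ in range(7500)]
sample_a1 = random.sample(sample_a, 100)

m_a, s_a = mean_sigma(sample_a)
m_a1, s_a1 = mean_sigma(sample_a1)

# The normalized value (x - mean) / sigma therefore differs between the
# two cases, even for the very same event x.
x = 0.4
print((x - m_a) / s_a, (x - m_a1) / s_a1)
```

With only 1 event the subsample sigma collapses toward zero, which would also explain the very bad distribution in that case.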
Question 2:
In the error-vs-epoch graph, what does the error mean?
Question 3:
I use the following code to train a network:
void epoch()
{
   TFile fin("…/neural.root", "read");
   TTree *t_in = (TTree*)fin.Get("tree_dEdx");
   TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron("ptrk,pt,normPH,goodHits:8:3:type", t_in);
   TMLPAnalyzer *mlpa = new TMLPAnalyzer(mlp);
   mlpa->GatherInformations();
   mlpa->CheckNetwork();
   mlp->LoadWeights("…/dEdx.txt");
   mlp->Train(100, "text,update=10");
   mlp->DumpWeights("…/test.txt");
   mlp->Export("NN", "c++");
   mlp->Export("NN", "Python");
}
In the test.txt file, I find 59 synapse weights and 16 neuron weights:
#neurons weights
0.354302
0.292847
0.175663
-0.346992
0.124682
-1.13705
0.634699
-1.75328
4.47485
0.62524
-1.87484
-5.58502
-1.63711
-0.703122
0.264243
0.924091
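These counts seem to match the network structure "ptrk,pt,normPH,goodHits:8:3:type" if I assume one bias entry per neuron (including the input neurons) and full connections between consecutive layers; a quick arithmetic check in Python:

```python
# Layer sizes from the structure string "ptrk,pt,normPH,goodHits:8:3:type":
# 4 inputs, two hidden layers of 8 and 3, one output ("type").
layers = [4, 8, 3, 1]

# Assumption: one weight entry per neuron, including the input neurons.
neurons = sum(layers)

# Fully connected consecutive layers: 4*8 + 8*3 + 3*1 synapses.
synapses = sum(a * b for a, b in zip(layers, layers[1:]))

print(neurons, synapses)  # 16 neurons, 59 synapses
```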
But in NN.py there are altogether 63 numbers; some of them are shown below:
class NN:
    def value(self,index,in0,in1,in2,in3):
        self.input0 = (in0 - 0.389542)/0.127415
        self.input1 = (in1 - 0.0140051)/0.362617
        self.input2 = (in2 - 1.11147)/1.19032
        self.input3 = (in3 - 0.677148)/0.0913964
        if index==0: return self.neuron0xa27a380()
        return 0.
    def neuron0xa279db8(self):
        return self.input0
    def neuron0xa279f20(self):
        return self.input1
    def neuron0xa27a088(self):
        return self.input2
    def neuron0xa27a1f0(self):
        return self.input3
    def neuron0xa27a478(self):
        input = 0.124682
        input = input + self.synapse0xa27a5c8()
        input = input + self.synapse0xa27a5f0()
        input = input + self.synapse0xa27a618()
        input = input + self.synapse0xa27a640()
        return ((1/(1+exp(-input)))*1)+0
    def neuron0xa27a668(self):
        input = -1.13705
        input = input + self.synapse0xa27a7d8()
        input = input + self.synapse0xa27a800()
        input = input + self.synapse0xa27a828()
        input = input + self.synapse0xa27a850()
        return ((1/(1+exp(-input)))*1)+0
Here, the input neuron weight is different, and the four (offset, scale) pairs used to rescale the input neurons are not recorded in the test.txt file, so next time how can a user obtain these four parameters?
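For concreteness, here is a minimal standalone sketch of the input-scaling step that the exported NN.py applies, using the four (offset, scale) pairs copied from the excerpt above; I assume (but have not verified) that they are the per-input mean and sigma of the training sample, which is why they would not appear among the weights in test.txt:

```python
from math import exp

# (offset, scale) pairs copied from the exported NN.py above; the mapping
# to variable names follows the structure string's input order.
NORM = [
    (0.389542, 0.127415),    # ptrk
    (0.0140051, 0.362617),   # pt
    (1.11147, 1.19032),      # normPH
    (0.677148, 0.0913964),   # goodHits
]

def normalize(raw):
    """Apply NN.py's input scaling: (in_i - offset_i) / scale_i."""
    return [(x - off) / sc for x, (off, sc) in zip(raw, NORM)]

def sigmoid(x):
    # Activation used by the hidden neurons in NN.py.
    return 1.0 / (1.0 + exp(-x))

# Example with made-up raw input values.
inputs = normalize([0.4, 0.0, 1.0, 0.7])
print(inputs)
```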
Thanks.