Hello, I have a few questions about the ROOT MLP package.
Maybe somebody who understands it better can answer them.
-
What is the meaning of the weights in a call to train the NN?
For example, after modeling my program on the tutorial
mlpHiggs.C, I define the NN as:

TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
   "int1,int2,int3,int4,int5,npeaks:5:3:type",
   "weight", simu, "Entry$%2", "(Entry$+1)%2");

Now, when I set weight=1 for all events, everything is OK.
I tried to set the weights so that an event that is
more likely to be signal gets a larger weight, but then
the learning failed. Why? I guess I misunderstand the
meaning of the "weight" variable?
-
Can the weight be omitted? E.g. can I have an NN definition
like

TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
   "int1,int2,int3,int4,int5,npeaks:5:3:type",
   simu, "Entry$%2", "(Entry$+1)%2");
-
What is the meaning of normalization? What is normalized
to what? Will the definitions

TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
   "int1,int2,int3,int4,int5,npeaks:5:3:type",
   "weight", simu, "Entry$%2", "(Entry$+1)%2");

and

TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
   "@int1,@int2,@int3,@int4,@int5,@npeaks:5:3:type",
   "weight", simu, "Entry$%2", "(Entry$+1)%2");

produce different results? I can't see any significant difference
in my example ...
-
In the call

mlp->Evaluate(0, params)

what is the meaning of the first argument (index)? I use
the value 0, as in the ROOT tutorial example mlpHiggs.C,
but I don't really understand why ...
-
OK, what is the smartest way to define the NN? How many
hidden layers, and how many neurons per hidden layer,
will give the best results? Can somebody suggest a
getting-started, simple and elementary introduction
(book/article/paper/URL) to designing the NN that best
fits the problem at hand? Or is everything just trial and error? (I doubt that ...)
Cheers, Emil