MLP output class value function

Hi rooters,

I have been using MLP for some time now, and I find it very frustrating not to be able to pass the parameters to my exported NN class in an efficient way. I use many different NNs with different input variables and shapes, and I often want to train several NNs with a different number of variables. However, calling the value(…) function of the exported class becomes painful when the number of input nodes changes, because the number of arguments of the function depends on the number of nodes. It would be much more efficient to be able to pass them as an array, which would work for any number of variables.
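For illustration, a minimal sketch of the difference (nn and the numeric values are just placeholders for an instance of the exported class and its inputs):

[code]// Current generated signature: one argument per input node
double out = nn->value(0, 2.21, 4.24, 145.56, 3241.12);

// Proposed array-based variant: the same call works for any number of inputs
double params[4] = {2.21, 4.24, 145.56, 3241.12};
double out2 = nn->Value(0, params);[/code]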

Until now I have been concatenating a string to contain my call to the NN class:

[code]TString def;
char buffer[500];
Float_t *fparams;
Int_t varnum,type, proc, parnum(0);

fparams = new Float_t[varnum]; // varnum is the number of NN input variables

// Loading the class, which is named by the number of epochs trained and by run number

gROOT->LoadMacro(Form("nnf/nnf_%d_%d.cxx+",nepochs,run));
gROOT->ProcessLine(Form("nnf_%d_%d *nn%d",nepochs,run,run));
gROOT->ProcessLine(Form("nn%d = new nnf_%d_%d",run,nepochs,run));

// Close your eyes, the following is horrible, but it is needed: otherwise my code either won't compile or can't get the value back from the ProcessLine command (see below).

Float_t *local_nnout = new Float_t;
gROOT->ProcessLine(Form("Float_t *nnout = 0x%x",local_nnout));

// Read the variable names from def and append the parameter values stored in fparams (filled via SetBranchAddress pointing to my fparams array)

TString str;
str = Form("*nnout = (Float_t) nn%d->value(0,",run);
for(Int_t l = 1 ; l <= 30 ; l++) {
   if(def.Contains(Form("Var%d,",l)) || def.Contains(Form("Var%d:",l))) {
      sprintf(buffer,"%.8f,",fparams[parnum++]);
      str += buffer;
   }
}
str.Remove(TString::kTrailing,',');
str += ");";

gROOT->ProcessLine(str.Data());

// str is like "*nnout = nn100->value(0,2.2131432,4.2455432,145.565234,3241.12341)"[/code]

You will agree with me that this is horrible, not least because it mixes interpreted and compiled code. However, I see no other choice (unless there is another C++ construct that I can use for this purpose). Can someone help? (I actually have the same problem when loading the class, since its name depends on the number of epochs and the run number.)

Therefore I suggest adding a new overload of the value(…) function. This would imply changing/adding the following:
[This is actually supported in TMultiLayerPerceptron::Evaluate(…)]

headerfile << " double value(int index"; sourcefile << "double " << classname << "::value(int index"; for (i = 0; i < fFirstLayer.GetEntriesFast(); i++) { headerfile << ",double in" << i; sourcefile << ",double in" << i; } headerfile << ");" << endl; sourcefile << ") {" << endl; for (i = 0; i < fFirstLayer.GetEntriesFast(); i++) sourcefile << " input" << i << " = (in" << i << " - " << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[1] << ")/" << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[0] << ";" << endl; sourcefile << " switch(index) {" << endl; TNeuron *neuron; TObjArrayIter *it = (TObjArrayIter *) fLastLayer.MakeIterator(); Int_t idx = 0; while ((neuron = (TNeuron *) it->Next())) sourcefile << " case " << idx++ << ":" << endl << " return neuron" << neuron << "();" << endl; sourcefile << " default:" << endl << " return 0.;" << endl << " }" << endl; sourcefile << "}" << endl << endl;

would be changed into (or, better, the following would be added alongside):

[By the way, I think it should be Value(…), as suggested in the ROOT guide, since it is a member function of a class.]

headerfile << " double Value(int index, double *in);" << endl; sourcefile << "double " << classname << "::Value(int index, double in[]){" << endl; for (i = 0; i < fFirstLayer.GetEntriesFast(); i++) sourcefile << " input" << i << " = (in[" << i << "] - " << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[1] << ")/" << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[0] << ";" << endl; sourcefile << " switch(index) {" << endl; TNeuron *neuron; TObjArrayIter *it = (TObjArrayIter *) fLastLayer.MakeIterator(); Int_t idx = 0; while ((neuron = (TNeuron *) it->Next())) sourcefile << " case " << idx++ << ":" << endl << " return neuron" << neuron << "();" << endl; sourcefile << " default:" << endl << " return 0.;" << endl << " }" << endl; sourcefile << "}" << endl << endl;

So that the generated output would look like:

[code]double nnf_150_100::Value(int index, double in[]) {
   input0 = (in[0] - 0)/1;
   input1 = (in[1] - 0)/1;
   input2 = (in[2] - 0)/1;
   input3 = (in[3] - 0)/1;
   switch(index) {
     case 0:
         return neuron0x3e374e0();
     default:
         return 0.;
   }
}[/code]

instead of

[code]double nnf_150_100::value(int index,double in0,double in1,double in2,double in3) {
   input0 = (in0 - 0)/1;
   input1 = (in1 - 0)/1;
   input2 = (in2 - 0)/1;
   input3 = (in3 - 0)/1;
   switch(index) {
     case 0:
         return neuron0x3e374e0();
     default:
         return 0.;
   }
}[/code]

This should have no implications (besides changing the v to V, which I understand can be tricky), since the old per-variable signature and the new array-based one can coexist in the generated class.
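To illustrate, a minimal sketch of what the generated header could then declare (the class name is a placeholder); since the two functions differ in name and signature, existing code calling value(…) keeps working unchanged:

[code]class nnf_150_100 {
public:
   // existing generated signature: one argument per input node
   double value(int index, double in0, double in1, double in2, double in3);
   // proposed array-based variant
   double Value(int index, double *in);
   // ... generated neuron and synapse helpers ...
};[/code]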

For the time being I was thinking of modifying the generated source myself before compilation, but it would be better if this were implemented in the Export() function.

I was also wondering why this hasn't come up before. Is everyone saving the MLP output as an object stored in a TBranch? I have to admit that I still need to look up what might be a very interesting technique.

I hope everything is clear. If not, feel free to ask additional information.

Thanks for the great job done so far.

Karolos Potamianos

Thanks for the suggestion. I will implement it when I find time.

To avoid mixing compiled and interpreted code, you have two solutions. As you suggest, the simplest is to save the network in a ROOT file and to load it from there when needed. It allows you to change your network without recompiling anything when you train a new one.
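For reference, a minimal sketch of that first approach (file and object names are placeholders); TMultiLayerPerceptron::Evaluate(index, params) already takes the inputs as an array:

[code]// After training: write the network to a ROOT file
TFile f("mlp.root", "RECREATE");
mlp->Write("mlp");   // mlp is the trained TMultiLayerPerceptron*
f.Close();

// Later, in another job: read it back and evaluate with an array of inputs
TFile g("mlp.root");
TMultiLayerPerceptron *net = (TMultiLayerPerceptron *) g.Get("mlp");
Double_t params[4] = {2.21, 4.24, 145.56, 3241.12};
Double_t out = net->Evaluate(0, params);[/code]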
Another solution is to dump the weights and to load them back when needed. The drawback is that you must instantiate a network with the right structure yourself… that can be a hassle to maintain.
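And a minimal sketch of the second approach (the layout string, tree and file names are placeholders); the network must be rebuilt with the same structure before the weights are loaded:

[code]// After training: dump the weights to a text file
mlp->DumpWeights("weights.txt");

// Later: rebuild a network with the same layout, then load the weights back
TMultiLayerPerceptron *net =
   new TMultiLayerPerceptron("Var1,Var2,Var3,Var4:8:type", tree);
net->LoadWeights("weights.txt");
Double_t params[4] = {2.21, 4.24, 145.56, 3241.12};
Double_t out = net->Evaluate(0, params);[/code]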

Thank you.

I will probably implement it this way. ROOT has this nice quality that you keep discovering new features every day :wink:

Karolos