Can TMultiLayerPerceptron be ctor'ed with an output TTree?

I have a weird problem with TMLP. The net was trained with a pre-existing
N-tuple file, and I have the weights file (from DumpWeights()) to recreate the
TMLP for analysis. I want to be able to evaluate its output for N-tuple rows as they’re being produced, so I tried to instantiate a new TMLP specifying the same TTree as is being written to disk.

When I do this and reload the weights recorded previously, the TMLP always gives NaN as output. I have verified this by processing the identical event (a) on readback from my final N-tuple file, and (b) in a job that writes out that “final” N-tuple file. Is this familiar to anyone? I have used what little introspection TMLP and TTree provide to verify that the actual data content is identical in both cases.

Hi Michael,

Could you be a bit more explicit about the way you write/read the MLP
from your Tree? The class TMultiLayerPerceptron can (in principle) be
made persistent, although I do not have much experience with it.
It could be that, when reading, some internal transient pointers are
not set for what you want to do.

Rene

[quote=“brun”]Could you be a bit more explicit about the way you write/read the MLP
from your Tree? The class TMultiLayerPerceptron can (in principle) be
made persistent, although I do not have much experience with it.
It could be that, when reading, some internal transient pointers are
not set for what you want to do.[/quote]

Hi, Rene! Thanks for the reply. I do not actually make the TMLP “persistent” in a ROOT-file sense. Instead, we went through a training cycle with sample data, and at the end of that cycle we called TMLP::DumpWeights("weights.mlp") to write the trained connection weight values to a text file.
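For concreteness, the training job looked roughly like the sketch below; the layout string, branch names, and training options here are placeholders, not the real ones:

     TFile* trainfile = TFile::Open("training.root");                  // pre-existing N-tuple file
     TTree* traintree = (TTree*)trainfile->Get("myDataTree");
     const char* layout = "var1,var2,var3:5:type";                     // placeholder layout string
     TMultiLayerPerceptron* mlp = new TMultiLayerPerceptron(layout, traintree);
     mlp->Train(100, "text,update=10");                                // train for some number of epochs
     mlp->DumpWeights("weights.mlp");                                  // write weights to a text file for later reuse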

When I want to make use of the trained TMLP, I do

     const char* layout = ".......";     // The same string used in training above
     _mlp = new TMultiLayerPerceptron(layout, datatree);
     _mlp->LoadWeights("weights.mlp");

The problem I have is with the “datatree” second argument to the TMLP ctor. I have two kinds of jobs I want to run. In one kind, I open an N-tuple file and datatree is read back from that file (using 'TTree* datatree = (TTree*)_myfile->Get("myDataTree");'). With this job, the TMLP created above works perfectly, exactly as I would expect. I copy some of the tree data into a double par[] array and call '_mlp->Evaluate(0,par);'.
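In sketch form, that read-back job does something like the following (the branch names and the size of the par[] array are illustrative, not the real ones):

     TFile* _myfile = TFile::Open("final.root", "READ");
     TTree* datatree = (TTree*)_myfile->Get("myDataTree");

     const char* layout = ".......";                     // same layout string used in training
     TMultiLayerPerceptron* _mlp = new TMultiLayerPerceptron(layout, datatree);
     _mlp->LoadWeights("weights.mlp");

     Double_t var1, var2, var3;                          // illustrative branch variables
     datatree->SetBranchAddress("var1", &var1);
     datatree->SetBranchAddress("var2", &var2);
     datatree->SetBranchAddress("var3", &var3);

     datatree->GetEntry(0);                              // read back one event
     Double_t par[3] = { var1, var2, var3 };             // copy tree data into the input array
     Double_t nnout = _mlp->Evaluate(0, par);            // gives the expected output here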

The other kind of job involves the production of the N-tuple. There, I open an output file for writing, and use 'TTree* datatree = new TTree("myDataTree", "my data tree");' followed by all of the necessary ::Branch() calls to make the tree work. After I fill all of the branches for one event, I want to call '_mlp->Evaluate(0,par);' exactly as above. In this job, that call always returns NaN, even when the data being written out is exactly the same event that was read back by the first kind of job.
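The production job that fails looks roughly like this (again, the branch names and the tree title are just illustrative):

     TFile* outfile = new TFile("final.root", "RECREATE");
     TTree* datatree = new TTree("myDataTree", "my data tree");

     Double_t var1, var2, var3;                          // illustrative branch variables
     datatree->Branch("var1", &var1, "var1/D");
     datatree->Branch("var2", &var2, "var2/D");
     datatree->Branch("var3", &var3, "var3/D");

     const char* layout = ".......";                     // same layout string used in training
     TMultiLayerPerceptron* _mlp = new TMultiLayerPerceptron(layout, datatree);
     _mlp->LoadWeights("weights.mlp");

     // ... per event: set var1..var3 from the reconstruction, then ...
     datatree->Fill();
     Double_t par[3] = { var1, var2, var3 };             // same data as in the read-back job
     Double_t nnout = _mlp->Evaluate(0, par);            // always returns NaN in this job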