ROOT Version: v6-28-02-5
Platform: Ubuntu 22.04.2 LTS
Compiler: (GCC) 12.2.0
Dear experts! I’ve been using ROOT for particle physics research and have run into an interesting pattern that I don’t know how to resolve. I got ROOT as part of an AliPhysics installation. I use the PyKeras interface to train a neural network via PyROOT, following the standard approach: preparing the trees, calling factory.BookMethod(dataloader, ROOT.TMVA.Types.kPyKeras, "PyKeras", string_of_options), and then calling factory.TrainAllMethods(). From this I get a weights.xml file. The trained model is very basic:
model = keras.Sequential(
    [
        keras.Input(shape=(20,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(2, activation="softmax"),
    ],
    name="sequential_model",
)
Since my main task is written in C++, I call the trained network from C++ to obtain its inference results. The excerpt where this is done is posted below; no other code in my task interacts with this excerpt in any way. I have replaced the fMVAvariable values with dummy values for simplicity, but the behaviour of the code does not change with real values. The fNNCandidateVariablesNames values are also unimportant in this context.
When I run the code with this excerpt, the RAM usage steadily increases and after a couple of minutes it takes all available RAM. If I instead train and use other methods for inference, such as BDT or the TMVA::kDL neural network implementation, the RAM usage does not increase. Also, if I comment out the single line with EvaluateMVA("KerasNN"), the memory usage is fine and no leak occurs. Importantly, when the EvaluateMVA("KerasNN") line is present, the inference results are produced (I get the number that is the network's prediction), so in that sense the code works, just with ever-increasing RAM usage. I would really appreciate any comment on my situation.
const Int_t NumberOfVariables = 20;
TString fNNCandidateVariablesNames[NumberOfVariables] = {
    "massCand", "ptLb", "pt_Prong0", "pt_Prong1", "d0_Prong0",
    "cosThetaStar", "Ct", "Prodd0", "cosp", "cospXY",
    "NormDL", "ImpPar", "dca", "ptLc", "d0_Prong0Lc",
    "d0_Prong1Lc", "pt_Prong0Lc", "pt_Prong1Lc",
    "cosv0pointinganglexyLc", "normalizedv0decaylengthxyLc"};
for (Int_t icand = 0; icand < 100000; icand++) {
    TMVA::Reader *MyReader = new TMVA::Reader();
    Float_t *fMVAvariable = new Float_t[NumberOfVariables];
    for (Int_t ivar = 0; ivar < NumberOfVariables; ivar++)
        MyReader->AddVariable(fNNCandidateVariablesNames[ivar], &fMVAvariable[ivar]);
    MyReader->BookMVA("KerasNN", TString("./dataset/weights/Keras_model_PyKeras.weights.xml"));
    /* MyReader->BookMVA("BDT", TString("./dataset/weights/Keras_model_BDT.weights.xml")); */
    for (Int_t ivar = 0; ivar < NumberOfVariables; ivar++)
        fMVAvariable[ivar] = 1; // dummy values
    Double_t fMVAResponse_Sig = MyReader->EvaluateMVA("KerasNN");
    /* Double_t fMVAResponse_Sig = MyReader->EvaluateMVA("BDT"); */
    delete MyReader;
    delete[] fMVAvariable;
    fMVAvariable = nullptr;
}
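For completeness, the only restructuring I can think of is constructing and booking the Reader once, outside the candidate loop, and only updating the input values and evaluating inside it. Below is just a sketch of that pattern (same variable names and weights file as above; whether this actually avoids the RAM growth with PyKeras is part of my question):

```cpp
#include "TMVA/Reader.h"
#include "TString.h"

void EvaluateKerasNN()
{
    const Int_t nVars = 20;
    // Same input names as in the excerpt above.
    TString names[nVars] = {
        "massCand", "ptLb", "pt_Prong0", "pt_Prong1", "d0_Prong0",
        "cosThetaStar", "Ct", "Prodd0", "cosp", "cospXY",
        "NormDL", "ImpPar", "dca", "ptLc", "d0_Prong0Lc",
        "d0_Prong1Lc", "pt_Prong0Lc", "pt_Prong1Lc",
        "cosv0pointinganglexyLc", "normalizedv0decaylengthxyLc"};
    Float_t vars[nVars] = {};

    // Construct and book the Reader once, before the candidate loop.
    TMVA::Reader reader("!Color:Silent");
    for (Int_t ivar = 0; ivar < nVars; ivar++)
        reader.AddVariable(names[ivar], &vars[ivar]);
    reader.BookMVA("KerasNN",
                   "./dataset/weights/Keras_model_PyKeras.weights.xml");

    for (Int_t icand = 0; icand < 100000; icand++) {
        for (Int_t ivar = 0; ivar < nVars; ivar++)
            vars[ivar] = 1; // dummy values, as in the excerpt
        Double_t response = reader.EvaluateMVA("KerasNN");
        (void)response;
    }
}
```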