# Difficulties on obtaining DNN predictions on PyROOT

Hello everyone,
I would appreciate it if someone could help me.
I have trained a binary classification DNN model and want to obtain its confusion matrix in Python. So I tried to get the predictions on the test data with TMVA::Reader::EvaluateMVA and compute the confusion matrix manually. If I understood that function correctly (following the tutorial file TMVAClassificationApplication.C), it should return the prediction for the current event, i.e. a score in [0, 1]. But no matter how I changed the event variables, the value returned by TMVA::Reader::EvaluateMVA never changed.
Here is the code:

```python
import ROOT as rt
from ctypes import c_float

# Read 4 TTrees: sig_train, sig_test, bkg_train, bkg_test (only sig_test and bkg_test are used)
for i in ['sig_train', 'sig_test', 'bkg_train', 'bkg_test']:
    exec("%s_f = rt.TFile('./datas/%s.root')" % (i, i))
    exec("%s = %s_f.Get('%s')" % (i, i, i))

# Let the editor recognize these variables
sig_train = sig_train
sig_test = sig_test
bkg_train = bkg_train
bkg_test = bkg_test

# Obtain the variable names
varNames = []
for branch in sig_train.GetListOfBranches():
    varNames.append(branch.GetName())

varFloats = []
for var in varNames:
    varFloats.append(c_float())

# Book method
reader = rt.TMVA.Reader()
for j in range(len(varNames)):
    reader.AddVariable(varNames[j], varFloats[j])
    sig_train.SetBranchAddress(varNames[j], varFloats[j])
reader.BookMVA('DNN_GPU',
               "./dataset/weights/TMVAClassification_DNN_GPU.weights.xml")

# Print evaluations
for i in range(10):
    sig_train.GetEntry(i)
    print(reader.EvaluateMVA('DNN_GPU'))
```



Output:
```
                         : Booking "DNN_GPU" method of type "DL" from ./dataset/weights/TMVAClassification_DNN_GPU.weights.xml.
DataSetInfo              : [Default] : Added class "Signal"
DataSetInfo              : [Default] : Added class "Background"
                         : Booked classifier "DNN_GPU" of type: "DL"
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
0.7310585975646973
```


I wonder if there is any convenient way to obtain the confusion matrix, or whether there is a mistake in my approach to computing the predictions?
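For the manual route, the matrix itself needs no ROOT machinery once the per-event scores have been collected into plain Python lists. A minimal sketch (the function name, the score lists, and the 0.5 threshold are all illustrative, not from the original code):

```python
def confusion_matrix(sig_scores, bkg_scores, threshold=0.5):
    """2x2 confusion matrix from classifier scores.

    Rows: true class (signal, background); columns: predicted class.
    """
    tp = sum(s >= threshold for s in sig_scores)   # signal predicted as signal
    fn = len(sig_scores) - tp                      # signal predicted as background
    fp = sum(s >= threshold for s in bkg_scores)   # background predicted as signal
    tn = len(bkg_scores) - fp                      # background predicted as background
    return [[tp, fn], [fp, tn]]

# Example with made-up scores:
m = confusion_matrix([0.9, 0.8, 0.4], [0.2, 0.6])
# m == [[2, 1], [1, 1]]
```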
Thank you very much again!

ROOT Version: 6.24.06
Platform: Ubuntu 20.04

I think @moneta may have an answer.

Hi,

It is probably an issue with passing the C++ pointer to the branch address to the Reader. Maybe @etejedor knows more about how to do this in PyROOT.
As a workaround you could use the other Reader interface, which lets you pass a vector of the event variables; see ROOT: TMVA::Reader Class Reference.
The drawback is that this will be much slower.

I would then recommend using the new experimental RReader class, which allows you to directly pass an RTensor containing all events.

Best regards

Lorenzo

Instead of using c_float, you can try storing those floats in an array.array or a NumPy array of size 1, and use that when setting the branch address. See an example in the PyROOT section of the TTree docs:

https://root.cern.ch/doc/master/classTTree.html
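A minimal sketch of that suggestion. The ROOT calls are commented out so the snippet stands alone; `tree`, `reader`, and the branch name `'var0'` are illustrative. The point is that a one-element `array.array` exposes a stable writable buffer that C++ can fill in place:

```python
from array import array

# One-element single-precision array: its buffer has a stable address
# that ROOT can write into, unlike a bare Python float.
var0 = array('f', [0.0])

# With PyROOT one would then register the same buffer on both sides,
# roughly like this (requires a ROOT session, hence commented out):
# tree.SetBranchAddress('var0', var0)   # tree writes var0[0] on GetEntry(i)
# reader.AddVariable('var0', var0)      # reader reads var0[0] in EvaluateMVA

# The key property: updates are visible in place through the same object.
buf = var0
var0[0] = 3.5
# buf[0] is now 3.5 as well, since buf and var0 share one buffer
```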

I'm sorry for the stupid question I asked. I tried several more times and finally realized that it's because sigmoid(1) ≈ 0.7310585975646973 and sigmoid(0) = 0.5 (sigmoid was used as my output activation). So the code did work, and the model performed well (most of the signal events were classified as signal).
Sorry for the bother again, and thank you very much for the help!
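That value can be checked directly. In double precision sigmoid(1) is 0.73105857863…; rounding it to single precision (presumably what the GPU network evaluates in) and widening back to double gives the exact number seen in the output above:

```python
import math
import struct

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

s0 = sigmoid(0.0)   # exactly 0.5
s1 = sigmoid(1.0)   # ~0.7310585786300049 in double precision

# Round-trip through float32 (pack as 'f', unpack back to a Python float):
s1_f32 = struct.unpack('f', struct.pack('f', s1))[0]
# s1_f32 ~ 0.7310585975646973, matching the printed scores
```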

Hi,
Why does TMVA give me sigmoid(0) for background events and sigmoid(1) for signal events, instead of 0 for background and 1 for signal? It seems that the values before sigmoid() are already in [0, 1] and already correspond to background or signal.

Here are my training settings:

```python
layoutString = "Layout=RELU|512,RELU|512,RELU|512,RELU|256,SIGMOID"

trainingStrategy = "TrainingStrategy=" + training0 + "|" + training1 + "|" + training2

dnnOptions = "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:Architecture=GPU"
dnnOptions += ":" + layoutString + ":" + trainingStrategy
```


Or, should I open a new topic for this question?
Thanks!

Hi,

The DNN in TMVA automatically adds a sigmoid to the last layer when using the CrossEntropy loss function, so you don't need to add it in your model. You need to use LINEAR as the last activation function. See for example the network configuration (layout string) used in this tutorial: https://root.cern.ch/doc/master/TMVA__Higgs__Classification_8C.html.
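Applied to the settings above, only the final activation in the layout string changes (the other options are carried over from the original post as-is):

```python
# Last activation LINEAR: with ErrorStrategy=CROSSENTROPY, TMVA appends
# the sigmoid itself, so EvaluateMVA still returns a score in [0, 1].
layoutString = "Layout=RELU|512,RELU|512,RELU|512,RELU|256,LINEAR"

dnnOptions = "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:Architecture=GPU"
dnnOptions += ":" + layoutString
```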

Cheers

Lorenzo