ROOT and Deep Neural Networks

You can always use the “manual” approach: read the ROOT tree -> convert the relevant columns to CSV -> read the CSV with TensorFlow (or with Keras or similar frameworks).

Hi,

You can also use Keras (with its TensorFlow backend) directly from TMVA.
You provide as input a model description file, which you build and save with Keras.
See http://nbviewer.jupyter.org/github/iml-wg/tmvatutorials/blob/master/TMVA_PyMVA.ipynb
for a simple Keras example.
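
For illustration, a booking from a C++ macro could look roughly like the sketch below (the file name model.h5 and the option values are assumptions; the notebook above shows the full workflow, including building and saving the model with Keras):

// Sketch: book the PyKeras method in a TMVA classification macro.
// Assumes a Factory ("factory") and a DataLoader ("dataloader") are already set up,
// and that a Keras model was saved beforehand in Python with model.save("model.h5").
TMVA::PyMethodBase::PyInitialize();   // initialise the Python/Keras interface
factory->BookMethod(dataloader, TMVA::Types::kPyKeras, "PyKeras",
                    "H:!V:VarTransform=N:FilenameModel=model.h5:"
                    "NumEpochs=20:BatchSize=32");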

Best Regards

Lorenzo

Dear Moneta,
is there an example of how to use the DNNClassification of TensorFlow? From your link it is not clear to me how I can do that.
Regards

Dear behrenhoff,
is there existing code that converts a ROOT file into CSV files?
Regards

Dear experts,
any ideas?
Regards

Hi,

You can find code examples in the TMVA tutorials directory:
https://root.cern.ch/doc/master/group__tutorial__tmva.html

For example, TMVAClassification.C shows how to use all TMVA methods (including the DNN):

https://root.cern.ch/doc/master/TMVAClassification_8C.html
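
Condensed from that macro, the DNN booking looks roughly like this (a sketch only; the layout and training-strategy values are illustrative, the full option strings are in TMVAClassification.C):

// Sketch: book the TMVA deep neural network as in TMVAClassification.C.
// "factory" and "dataloader" are assumed to be set up as in the tutorial.
TString layout   = "Layout=TANH|128,TANH|128,TANH|128,LINEAR";
TString training = "TrainingStrategy=LearningRate=1e-1,Momentum=0.9,"
                   "ConvergenceSteps=20,BatchSize=256,TestRepetitions=10,"
                   "Regularization=L2,WeightDecay=1e-4";
TString dnnOpts  = "!H:!V:ErrorStrategy=CROSSENTROPY:VarTransform=N:"
                   "WeightInitialization=XAVIERUNIFORM:" + layout + ":" + training;
// The back-end is chosen with the Architecture option (CPU or GPU need a ROOT
// build with the corresponding support, see further down in this thread).
factory->BookMethod(dataloader, TMVA::Types::kDNN, "DNN_CPU",
                    dnnOpts + ":Architecture=CPU");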

and then in the keras directory (unfortunately not linked from the previous web page) you can find examples of using the Keras interface.
See them on GitHub:

Lorenzo

Hello,

converting to .csv depends on the structure of your TTree. Basically, you loop over the entries, read the branches, and write them to a text file, one event per row; a sketch follows below.
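
A minimal sketch of such a conversion (the tree name, branch names and types, and file names are assumptions; adapt them to your TTree):

// Sketch: dump a flat TTree to CSV, one event per row.
// Assumes a tree "tree" with float branches "var1" and "var2" in input.root.
#include <fstream>
#include "TFile.h"
#include "TTree.h"

void tree2csv() {
   TFile f("input.root");
   TTree *t = (TTree*)f.Get("tree");
   float var1, var2;
   t->SetBranchAddress("var1", &var1);
   t->SetBranchAddress("var2", &var2);

   std::ofstream csv("output.csv");
   csv << "var1,var2\n";                      // header row
   for (Long64_t i = 0; i < t->GetEntries(); ++i) {
      t->GetEntry(i);                         // read one event
      csv << var1 << "," << var2 << "\n";     // write it as one CSV row
   }
}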

Cheers,

Dear Moneta,
the TMVA DNN is quite new, right? If I look here (1), the current release is “TMVA version 4.2.0”, made in 2013, and I wonder whether it includes the DNN. Is that the right place to get the latest TMVA? Or should I work with a special ROOT/TMVA build?
Regards

(1)
http://tmva.sourceforge.net/

TMVA is included in ROOT; the SourceForge page is outdated. You can find the TMVA Users’ Guide here: https://root.cern.ch/guides/tmva-manual
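
A quick way to check which ROOT release (and therefore which TMVA) you are running, from an interactive ROOT session:

// Prints the ROOT release; the TMVA in use is the one bundled with that release.
printf("ROOT version: %s\n", gROOT->GetVersion());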

Dear Behrenhoff,

Thank you.
I ran (1) with root -l ./TMVAClassification.C. In that macro I set only Use["DNN_GPU"] = 1, so all the other methods are set to 0. When I run it, I get the error message (2). I have ROOT 6.10/04. Do I need to install something else on my side?
Regards

(1)
https://root.cern.ch/doc/master/TMVAClassification_8C_source.html

(2)
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
: CUDA backend not enabled. Please make sure you have CUDA installed and it was successfully detected by CMAKE.
: CUDA backend not enabled. Please make sure you have CUDA installed and it was successfully detected by CMAKE.
***> abort program execution
terminate called after throwing an instance of 'std::runtime_error'
what(): FATAL error

When I use Use["DNN_CPU"] = 1, I get the error message (1).
Regards

(1)
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
: Multi-core CPU backend not enabled. Please make sure you have a BLAS implementation and it was successfully detected by CMake as well that the imt CMake flag is set.
: Multi-core CPU backend not enabled. Please make sure you have a BLAS implementation and it was successfully detected by CMake as well that the imt CMake flag is set.
***> abort program execution
terminate called after throwing an instance of 'std::runtime_error'
what(): FATAL error
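
As a quick check, whether a ROOT build has the needed features compiled in can be printed from the prompt (a sketch; the CPU back-end additionally needs a BLAS implementation found at build time, the GPU back-end needs CUDA):

// Lists the features this ROOT binary was built with;
// look for "imt" (needed for DNN_CPU); a CUDA-enabled build should list "cuda" similarly.
printf("%s\n", gROOT->GetConfigFeatures());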

I can see from the manual that the “standard backend” can be used on any platform, but when I use Architecture=STANDARD as an option, it says that it is not available (1)!
Regards

(1)
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
: The STANDARD architecture has been deprecated. Please use Architecture=CPU or Architecture=CPU. See the TMVA Users' Guide for instructions if you encounter problems.
: The STANDARD architecture has been deprecated. Please use Architecture=CPU or Architecture=CPU. See the TMVA Users' Guide for instructions if you encounter problems.
***> abort program execution
terminate called after throwing an instance of 'std::runtime_error'
what(): FATAL error

Dear experts,
is there a way to access the “standard backend”? Right now I can’t use the other DNN options.
Regards

Hi,

I am sorry, this is not possible. :confused:

Cheers,
Kim

Dear experts,
when I use “DNN_GPU”, it trains but takes a long time (1). What puzzles me is that I do not see the progress bar despite requesting it (2). Is it implemented for the DNN? If yes, is something wrong?
Regards

(1)
TFHandler_DNN_GPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: Mll01: -0.85850 0.11802 [ -1.0000 1.0000 ]
: DRll01: -0.29341 0.35139 [ -1.0000 1.0000 ]
: Ptll01: -0.80687 0.12644 [ -1.0000 1.0000 ]
: SumPtJet: -0.77872 0.15662 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
: Start of neural network training on GPU.
:
: Training phase 1 of 1:
: Epoch | Train Err. Test Err. GFLOP/s Conv. Steps
: --------------------------------------------------------------

Here I keep waiting, but I do not know whether something is wrong, since I do not see the progress bar.

(2)
TMVA::Factory *factory = new TMVA::Factory( "TMVAClassification", outputFile,
    "!V:!Silent:Color:DrawProgressBar:Transformations=I;D;P;G,D:AnalysisType=Classification" );

Hi,

Thanks for reporting this. On my machine it takes quite some time for the network to be constructed and the data to be loaded. It then outputs erroneous results. I will make a bug report.

Two questions: which OS and CUDA versions are you running? If you are not running CUDA 7.5, would you be able to test with that CUDA version? (So far I have only been able to test with CUDA 8.0 and 9.0.)

Cheers,
Kim

Dear Kialbert,
it works fine for me, even if, yes, it takes a long time… But my question is why I do not see the “progress bar”.
Regards

That was weird… Running it again, the output now seems fine.

To your question then: currently the DNN does not use the DrawProgressBar option of the factory, because the training epoch reports should already give you an idea of the progress. If you want quicker feedback, you can lower the TestRepetitions option of the DNN; the validation set will then be evaluated more often, with a corresponding line of textual output (see the sketch below).
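
For instance, in the TrainingStrategy part of the DNN option string (illustrative values, only TestRepetitions matters here):

// Sketch: evaluate the validation set every 2 epochs instead of every 10,
// so the "Test Err." column in the epoch table is updated more frequently.
TString training = "TrainingStrategy=LearningRate=1e-1,Momentum=0.9,"
                   "ConvergenceSteps=20,BatchSize=256,TestRepetitions=2";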

Also, a side note: Please create a new topic for each separate question you have. This will make it easier for others looking for answers to similar questions.

Cheers,
Kim

Dear Kialbert,
Ok, thank you for your answer.
Regards