root [0] 
Processing TMVA_CNN_Classification.C...
Running with nthreads = 24
Filling ROOT tree 
Generating image event ... 0
Generating image event ... 1000
Generating image event ... 2000
Generating image event ... 3000
Generating image event ... 4000
Info in <TMVA_CNN_Classification>: Signal and background tree with images data written to the file images_data_16x16.root
******************************************************************************
*Tree    :sig_tree  : signal_tree                                            *
*Entries :     5000 : Total  =        5208596 bytes  File  Size =    4659410 *
*        :          : Tree compression factor =   1.12                       *
******************************************************************************
*Br    0 :vars      : vector<float>                                          *
*Entries :     5000 : Total  Size=    5208178 bytes  File Size  =    4657565 *
*Baskets :      167 : Basket Size=      32000 bytes  Compression=   1.12     *
*............................................................................*
******************************************************************************
*Tree    :bkg_tree  : background_tree                                        *
*Entries :     5000 : Total  =        5208604 bytes  File  Size =    4658494 *
*        :          : Tree compression factor =   1.12                       *
******************************************************************************
*Br    0 :vars      : vector<float>                                          *
*Entries :     5000 : Total  Size=    5208178 bytes  File Size  =    4656623 *
*Baskets :      167 : Basket Size=      32000 bytes  Compression=   1.12     *
*............................................................................*
DataSetInfo              : [dataset] : Added class "Signal"
                         : Add Tree sig_tree of type Signal with 5000 events
DataSetInfo              : [dataset] : Added class "Background"
                         : Add Tree bkg_tree of type Background with 5000 events
Factory                  : Booking method: BDT
                         : 
                         : Rebuilding Dataset dataset
                         : Building event vectors for type 2 Signal
                         : Dataset[dataset] : create input formulas for tree sig_tree
                         : Using variable vars[0] from array expression vars of size 256
                         : Building event vectors for type 2 Background
                         : Dataset[dataset] : create input formulas for tree bkg_tree
                         : Using variable vars[0] from array expression vars of size 256
DataSetFactory           : [dataset] : Number of events in input trees
                         : 
                         : 
                         : Number of training and testing events
                         : ---------------------------------------------------------------------------
                         : Signal     -- training events            : 4000
                         : Signal     -- testing events             : 1000
                         : Signal     -- training and testing events: 5000
                         : Background -- training events            : 4000
                         : Background -- testing events             : 1000
                         : Background -- training and testing events: 5000
                         : 
Factory                  : Booking method: TMVA_DNN_GPU
                         : 
                         : Parsing option string:
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.:Architecture=GPU"
                         : The following options are set:
                         : - By User:
                         : 
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string:
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.:Architecture=GPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     Layout: "DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIER" [Weight initialization strategy]
                         :     Architecture: "GPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     InputLayout: "0|0|0" [The Layout of the input]
                         :     BatchLayout: "0|0|0" [The Layout of the batch]
                         :     RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
                         :     ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
                         : Will now use the GPU architecture !
Factory                  : Booking method: TMVA_CNN_GPU
                         : 
                         : Parsing option string:
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0:Architecture=GPU"
                         : The following options are set:
                         : - By User:
                         : 
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string:
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0:Architecture=GPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     InputLayout: "1|16|16" [The Layout of the input]
                         :     Layout: "CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIER" [Weight initialization strategy]
                         :     Architecture: "GPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0" [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     BatchLayout: "0|0|0" [The Layout of the batch]
                         :     RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
                         :     ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
                         : Will now use the GPU architecture !
Info in <TMVA_CNN_Classification>: Building convolutional keras model
sh: 1: python: not found
Warning in <TMVA_CNN_Classification>: Error creating Keras model file - skip using Keras
Info in <TMVA_CNN_Classification>: Using Convolutional PyTorch Model
sh: 1: python: not found
Warning in <TMVA_CNN_Classification>: PyTorch is not installed or model building file is not existing - skip using PyTorch
Factory                  : Train all methods
Factory                  : Train method: BDT for Classification
                         : 
BDT                      : #events: (reweighted) sig: 4000 bkg: 4000
                         : #events: (unweighted) sig: 4000 bkg: 4000
                         : Training 400 Decision Trees ... patience please
                         : Elapsed time for training with 8000 events: 2.31 sec
BDT                      : [dataset] : Evaluation of BDT on training sample (8000 events)
                         : Elapsed time for evaluation of 8000 events: 0.0601 sec
                         : Creating xml weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_CNN_Classification_BDT.class.C
                         : TMVA_CNN_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory                  : Training finished
                         : 
Factory                  : Train method: TMVA_DNN_GPU for Classification
                         : 
                         : Start of deep neural network training on GPU.
                         : 
TCudaTensor::create cudnn handle ! 
output bnorm shape : { 100 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 100 } Layout : RowMajor
output bnorm shape : { 100 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 100 } Layout : RowMajor
output bnorm shape : { 100 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 100 } Layout : RowMajor
                         : ***** Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 8  Input = ( 1, 1, 256 )  Batch size = 100  Loss function = C
   Layer 0  DENSE Layer:       ( Input =  256 , Width = 100 )  Output = ( 1 , 100 , 100 )  Activation Function = Relu
   Layer 1  BATCH NORM Layer:  Input/Output = ( 100 , 100 , 1 )  Norm dim = 100  axis = -1
   Layer 2  DENSE Layer:       ( Input =  100 , Width = 100 )  Output = ( 1 , 100 , 100 )  Activation Function = Relu
   Layer 3  BATCH NORM Layer:  Input/Output = ( 100 , 100 , 1 )  Norm dim = 100  axis = -1
   Layer 4  DENSE Layer:       ( Input =  100 , Width = 100 )  Output = ( 1 , 100 , 100 )  Activation Function = Relu
   Layer 5  BATCH NORM Layer:  Input/Output = ( 100 , 100 , 1 )  Norm dim = 100  axis = -1
   Layer 6  DENSE Layer:       ( Input =  100 , Width = 100 )  Output = ( 1 , 100 , 100 )  Activation Function = Relu
   Layer 7  DENSE Layer:       ( Input =  100 , Width =   1 )  Output = ( 1 , 100 ,   1 )  Activation Function = Identity
                         : Using 6400 events for training and 1600 for testing
                         : Compute initial loss on the validation data
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 3.26901
                         : --------------------------------------------------------------
                         :      Epoch |  Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s  Conv. Steps
                         : --------------------------------------------------------------
                         : Start epoch iteration ...
                         : 1 Minimum Test error found - save the configuration 
                         :      1 |     0.6559    0.645503    0.083462  0.00380388     80343.4          0
                         : 2 Minimum Test error found - save the configuration 
                         :      2 |   0.485082    0.514456   0.0303061  0.00545064      257488          0
                         :      3 |   0.393822    0.670152   0.0290015  0.00446053      260789          1
                         :      4 |   0.363282    0.807334   0.0283016  0.00298498      252798          2
                         :      5 |   0.332055    0.526533    0.029594  0.00294232      240135          3
                         : 6 Minimum Test error found - save the configuration 
                         :      6 |   0.326272    0.396278   0.0298535  0.00353909      243212          0
                         :      7 |   0.317478    0.707726   0.0286433  0.00354919      255040          1
                         :      8 |   0.300685    0.588516   0.0285488  0.00293658      249881          2
                         :      9 |   0.279701    0.433559   0.0303355  0.00313811      235316          3
                         :     10 |   0.284127    0.703957   0.0292724  0.00294197      243065          4
                         :     11 |    0.29122    0.429142   0.0287937  0.00380634      256130          5
                         :     12 |   0.262938    0.427948   0.0282065  0.00405442      264987          6
                         : 
                         : Elapsed time for training with 8000 events: 1.27 sec
                         : Evaluate deep neural network on GPU using batches with size = 100
                         : 
output bnorm shape : { 100 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 100 } Layout : RowMajor
output bnorm shape : { 100 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 100 } Layout : RowMajor
output bnorm shape : { 100 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 100 } Layout : RowMajor
TMVA_DNN_GPU             : [dataset] : Evaluation of TMVA_DNN_GPU on training sample (8000 events)
                         : Elapsed time for evaluation of 8000 events: 0.0763 sec
                         : Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_GPU.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_GPU.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: TMVA_CNN_GPU for Classification
                         : 
                         : Start of deep neural network training on GPU.
                         : 
                         : 
CONV FWD Algo used for convolution of input shape { 100 , 1 , 16 , 16 } is 0
CONV BWD Data Algo used is 1
CONV BWD Filter Algo used is 0
output shape : { 100 , 10 , 256 } Layout : ColMajor
tmp shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
output2 shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
output bnorm shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
reshaped data shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
CONV FWD Algo used for convolution of input shape { 100 , 10 , 16 , 16 } is 0
CONV BWD Data Algo used is 4
CONV BWD Filter Algo used is 0
                         : ***** Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 7  Input = ( 1, 16, 16 )  Batch size = 100  Loss function = C
   Layer 0  CONV LAYER:     ( W = 16 , H = 16 , D = 10 )  Filter ( W = 3 , H = 3 )  Output = ( 100 , 10 , 16 , 16 )  Activation Function = Relu
   Layer 1  BATCH NORM Layer:  Input/Output = ( 100 , 10 , 16 , 16 )  Norm dim = 10  axis = 1
   Layer 2  CONV LAYER:     ( W = 16 , H = 16 , D = 10 )  Filter ( W = 3 , H = 3 )  Output = ( 100 , 10 , 16 , 16 )  Activation Function = Relu
   Layer 3  POOL Layer:     ( W = 15 , H = 15 , D = 10 )  Filter ( W = 2 , H = 2 )  Output = ( 100 , 10 , 15 , 15 )
   Layer 4  RESHAPE Layer   Input = ( 10 , 15 , 15 )  Output = ( 1 , 100 , 2250 )
   Layer 5  DENSE Layer:    ( Input = 2250 , Width = 100 )  Output = ( 1 , 100 , 100 )  Activation Function = Relu
   Layer 6  DENSE Layer:    ( Input =  100 , Width =   1 )  Output = ( 1 , 100 ,   1 )  Activation Function = Identity
                         : Using 6400 events for training and 1600 for testing
                         : Compute initial loss on the validation data
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 4.84612
                         : --------------------------------------------------------------
                         :      Epoch |  Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s  Conv. Steps
                         : --------------------------------------------------------------
                         : Start epoch iteration ...
                         : 1 Minimum Test error found - save the configuration 
                         :      1 |   0.732093     0.68767   0.0653515   0.0068065      109318          0
                         : 2 Minimum Test error found - save the configuration 
                         :      2 |   0.683785    0.682559   0.0682406  0.00676098      104099          0
                         : 3 Minimum Test error found - save the configuration 
                         :      3 |   0.664277    0.643345   0.0627568  0.00612343      113008          0
                         : 4 Minimum Test error found - save the configuration 
                         :      4 |   0.612306    0.585245   0.0638318  0.00621164      111072          0
                         : 5 Minimum Test error found - save the configuration 
                         :      5 |   0.511985    0.487486    0.064733  0.00613805      109224          0
                         :      6 |     0.4437     0.49346    0.062495  0.00575824      112802          1
                         :      7 |   0.403745    0.520643    0.061497  0.00515613      113594          2
                         : 8 Minimum Test error found - save the configuration 
                         :      8 |   0.405981    0.421225   0.0668043  0.00633418      105837          0
                         : 9 Minimum Test error found - save the configuration 
                         :      9 |   0.369501    0.392145   0.0632022  0.00616529      112208          0
                         : 10 Minimum Test error found - save the configuration 
                         :     10 |   0.364771    0.379478   0.0640006  0.00613913      110609          0
                         :     11 |   0.347021    0.395213     0.06301  0.00461902      109606          1
                         :     12 |   0.342402    0.395853   0.0627147  0.00587432      112596          2
                         :     13 |   0.337662    0.417005   0.0617089  0.00461034      112087          3
                         : 14 Minimum Test error found - save the configuration 
                         :     14 |   0.340655    0.372926   0.0641502  0.00613443      110315          0
                         :     15 |   0.322447    0.380955   0.0626203  0.00460993      110325          1
                         :     16 |   0.313871    0.428894   0.0625551  0.00595729      113078          2
                         :     17 |   0.306304    0.373046   0.0613397  0.00461758      112831          3
                         :     18 |   0.301677    0.386632   0.0638296  0.00465381      108152          4
                         :     19 |   0.295637    0.404738   0.0636031  0.00519614      109576          5
                         :     20 |   0.290941     0.40311   0.0632944  0.00561481      110958          6
                         : 
                         : Elapsed time for training with 8000 events: 2.41 sec
                         : Evaluate deep neural network on GPU using batches with size = 100
                         : 
CONV FWD Algo used for convolution of input shape { 100 , 1 , 16 , 16 } is 0
CONV BWD Data Algo used is 1
CONV BWD Filter Algo used is 0
output shape : { 100 , 10 , 256 } Layout : ColMajor
tmp shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
output2 shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
output bnorm shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
reshaped data shape : { 100 , 10 , 16 , 16 } Layout : RowMajor
CONV FWD Algo used for convolution of input shape { 100 , 10 , 16 , 16 } is 6
CONV BWD Data Algo used is 4
CONV BWD Filter Algo used is 0
TMVA_CNN_GPU             : [dataset] : Evaluation of TMVA_CNN_GPU on training sample (8000 events)
                         : Elapsed time for evaluation of 8000 events: 0.0846 sec
                         : Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_GPU.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_GPU.class.C
Factory                  : Training finished
                         : 
                         : Ranking input variables (method specific)...
BDT                      : Ranking result (top variable is best ranked)
                         : --------------------------------------
                         : Rank : Variable : Variable Importance
                         : --------------------------------------
                         :    1 : vars : 1.066e-02
                         :    2 : vars : 9.834e-03
                         :    3 : vars : 9.800e-03
                         :    4 : vars : 9.580e-03
                         :    5 : vars : 9.550e-03
                         :    6 : vars : 9.531e-03
                         :    7 : vars : 9.300e-03
                         :    8 : vars : 9.134e-03
                         :    9 : vars : 9.092e-03
                         :   10 : vars : 9.009e-03
                         :   11 : vars : 9.000e-03
                         :   12 : vars : 8.981e-03
                         :   13 : vars : 8.838e-03
                         :   14 : vars : 8.788e-03
                         :   15 : vars : 8.766e-03
                         :   16 : vars : 8.605e-03
                         :   17 : vars : 8.465e-03
                         :   18 : vars : 8.431e-03
                         :   19 : vars : 8.368e-03
                         :   20 : vars : 8.337e-03
                         :   21 : vars : 8.248e-03
                         :   22 : vars : 8.123e-03
                         :   23 : vars : 8.082e-03
                         :   24 : vars : 8.071e-03
                         :   25 : vars : 7.996e-03
                         :   26 : vars : 7.899e-03
                         :   27 : vars : 7.894e-03
                         :   28 : vars : 7.854e-03
                         :   29 : vars : 7.781e-03
                         :   30 : vars : 7.754e-03
                         :   31 : vars : 7.726e-03
                         :   32 : vars : 7.550e-03
                         :   33 : vars : 7.536e-03
                         :   34 : vars : 7.499e-03
                         :   35 : vars : 7.464e-03
                         :   36 : vars : 7.337e-03
                         :   37 : vars : 7.278e-03
                         :   38 : vars : 7.255e-03
                         :   39 : vars : 7.235e-03
                         :   40 : vars : 7.219e-03
                         :   41 : vars : 7.117e-03
                         :   42 : vars : 7.099e-03
                         :   43 : vars : 7.024e-03
                         :   44 : vars : 6.938e-03
                         :   45 : vars : 6.847e-03
                         :   46 : vars : 6.578e-03
                         :   47 : vars : 6.535e-03
                         :   48 : vars : 6.485e-03
                         :   49 : vars : 6.478e-03
                         :   50 : vars : 6.423e-03
                         :   51 : vars : 6.400e-03
                         :   52 : vars : 6.389e-03
                         :   53 : vars : 6.346e-03
                         :   54 : vars : 6.304e-03
                         :   55 : vars : 6.293e-03
                         :   56 : vars : 6.250e-03
                         :   57 : vars : 6.205e-03
                         :   58 : vars : 6.198e-03
                         :   59 : vars : 6.121e-03
                         :   60 : vars : 6.082e-03
                         :   61 : vars : 6.047e-03
                         :   62 : vars : 5.996e-03
                         :   63 : vars : 5.996e-03
                         :   64 : vars : 5.786e-03
                         :   65 : vars : 5.678e-03
                         :   66 : vars : 5.642e-03
                         :   67 : vars : 5.566e-03
                         :   68 : vars : 5.497e-03
                         :   69 : vars : 5.474e-03
                         :   70 : vars : 5.459e-03
                         :   71 : vars : 5.449e-03
                         :   72 : vars : 5.446e-03
                         :   73 : vars : 5.414e-03
                         :   74 : vars : 5.403e-03
                         :   75 : vars : 5.377e-03
                         :   76 : vars : 5.335e-03
                         :   77 : vars : 5.271e-03
                         :   78 : vars : 5.238e-03
                         :   79 : vars : 5.214e-03
                         :   80 : vars : 5.184e-03
                         :   81 : vars : 5.159e-03
                         :   82 : vars : 5.124e-03
                         :   83 : vars : 5.106e-03
                         :   84 : vars : 5.017e-03
                         :   85 : vars : 5.015e-03
                         :   86 : vars : 4.914e-03
                         :   87 : vars : 4.893e-03
                         :   88 : vars : 4.827e-03
                         :   89 : vars : 4.806e-03
                         :   90 : vars : 4.765e-03
                         :   91 : vars : 4.748e-03
                         :   92 : vars : 4.722e-03
                         :   93 : vars : 4.721e-03
                         :   94 : vars : 4.690e-03
                         :   95 : vars : 4.689e-03
                         :   96 : vars : 4.655e-03
                         :   97 : vars : 4.617e-03
                         :   98 : vars : 4.566e-03
                         :   99 : vars : 4.551e-03
                         :  100 : vars : 4.508e-03
                         :  101 : vars : 4.489e-03
                         :  102 : vars : 4.470e-03
                         :  103 : vars : 4.450e-03
                         :  104 : vars : 4.401e-03
                         :  105 : vars : 4.277e-03
                         :  106 : vars : 4.269e-03
                         :  107 : vars : 4.231e-03
                         :  108 : vars : 4.228e-03
                         :  109 : vars : 4.116e-03
                         :  110 : vars : 4.112e-03
                         :  111 : vars : 4.070e-03
                         :  112 : vars : 4.046e-03
                         :  113 : vars : 3.990e-03
                         :  114 : vars : 3.961e-03
                         :  115 : vars : 3.957e-03
                         :  116 : vars : 3.939e-03
                         :  117 : vars : 3.911e-03
                         :  118 : vars : 3.894e-03
                         :  119 : vars : 3.881e-03
                         :  120 : vars : 3.880e-03
                         :  121 : vars : 3.879e-03
                         :  122 : vars : 3.873e-03
                         :  123 : vars : 3.856e-03
                         :  124 : vars : 3.819e-03
                         :  125 : vars : 3.775e-03
                         :  126 : vars : 3.767e-03
                         :  127 : vars : 3.711e-03
                         :  128 : vars : 3.667e-03
                         :  129 : vars : 3.666e-03
                         :  130 : vars : 3.660e-03
                         :  131 : vars : 3.515e-03
                         :  132 : vars : 3.494e-03
                         :  133 : vars : 3.489e-03
                         :  134 : vars : 3.414e-03
                         :  135 : vars : 3.408e-03
                         :  136 : vars : 3.380e-03
                         :  137 : vars : 3.378e-03
                         :  138 : vars : 3.366e-03
                         :  139 : vars : 3.322e-03
                         :  140 : vars : 3.321e-03
                         :  141 : vars : 3.246e-03
                         :  142 : vars : 3.207e-03
                         :  143 : vars : 3.194e-03
                         :  144 : vars : 3.179e-03
                         :  145 : vars : 3.154e-03
                         :  146 : vars : 3.147e-03
                         :  147 : vars : 3.137e-03
                         :  148 : vars : 3.136e-03
                         :  149 : vars : 3.106e-03
                         :  150 : vars : 3.105e-03
                         :  151 : vars : 3.104e-03
                         :  152 : vars : 3.093e-03
                         :  153 : vars : 3.080e-03
                         :  154 : vars : 3.061e-03
                         :  155 : vars : 3.055e-03
                         :  156 : vars : 3.029e-03
                         :  157 : vars : 3.017e-03
                         :  158 : vars : 2.978e-03
                         :  159 : vars : 2.884e-03
                         :  160 : vars : 2.804e-03
                         :  161 : vars : 2.798e-03
                         :  162 : vars : 2.726e-03
                         :  163 : vars : 2.718e-03
                         :  164 : vars : 2.686e-03
                         :  165 : vars : 2.679e-03
                         :  166 : vars : 2.631e-03
                         :  167 : vars : 2.621e-03
                         :  168 : vars : 2.613e-03
                         :  169 : vars : 2.591e-03
                         :  170 : vars : 2.554e-03
                         :  171 : vars : 2.550e-03
                         :  172 : vars : 2.524e-03
                         :  173 : vars : 2.520e-03
                         :  174 : vars : 2.504e-03
                         :  175 : vars : 2.486e-03
                         :  176 : vars : 2.444e-03
                         :  177 : vars : 2.399e-03
                         :  178 : vars : 2.366e-03
                         :  179 : vars : 2.329e-03
                         :  180 : vars : 2.253e-03
                         :  181 : vars : 2.253e-03
                         :  182 : vars : 2.243e-03
                         :  183 : vars : 2.214e-03
                         :  184 : vars : 2.199e-03
                         :  185 : vars : 2.169e-03
                         :  186 : vars : 2.160e-03
                         :  187 : vars : 2.117e-03
                         :  188 : vars : 2.109e-03
                         :  189 : vars : 2.093e-03
                         :  190 : vars : 2.088e-03
                         :  191 : vars : 2.081e-03
                         :  192 : vars : 2.045e-03
                         :  193 : vars : 2.034e-03
                         :  194 : vars : 2.021e-03
                         :  195 : vars : 1.972e-03
                         :  196 : vars : 1.952e-03
                         :  197 : vars : 1.849e-03
                         :  198 : vars : 1.794e-03
                         :  199 : vars : 1.765e-03
                         :  200 : vars : 1.710e-03
                         :  201 : vars : 1.701e-03
                         :  202 : vars : 1.663e-03
                         :  203 : vars : 1.660e-03
                         :  204 : vars : 1.611e-03
                         :  205 : vars : 1.560e-03
                         :  206 : vars : 1.503e-03
                         :  207 : vars : 1.478e-03
                         :  208 : vars : 1.472e-03
                         :  209 : vars : 1.432e-03
                         :  210 : vars : 1.383e-03
                         :  211 : vars : 1.362e-03
                         :  212 : vars : 1.154e-03
                         :  213 : vars : 9.974e-04
                         :  214 : vars : 5.719e-04
                         :  215 : vars : 0.000e+00
                         :  216 : vars : 0.000e+00
                         :  217 : vars : 0.000e+00
                         :  218 : vars : 0.000e+00
                         :  219 : vars : 0.000e+00
                         :  220 : vars : 0.000e+00
                         :  221 : vars : 0.000e+00
                         :  222 : vars : 0.000e+00
                         :  223 : vars : 0.000e+00
                         :  224 : vars : 0.000e+00
                         :  225 : vars : 0.000e+00
                         :  226 : vars : 0.000e+00
                         :  227 : vars : 0.000e+00
                         :  228 : vars : 0.000e+00
                         :  229 : vars : 0.000e+00
                         :  230 : vars : 0.000e+00
                         :  231 : vars : 0.000e+00
                         :  232 : vars : 0.000e+00
                         :  233 : vars : 0.000e+00
                         :  234 : vars : 0.000e+00
                         :  235 : vars : 0.000e+00
                         :  236 : vars : 0.000e+00
                         :  237 : vars : 0.000e+00
                         :  238 : vars : 0.000e+00
                         :  239 : vars : 0.000e+00
                         :  240 : vars : 0.000e+00
                         :  241 : vars : 0.000e+00
                         :  242 : vars : 0.000e+00
                         :  243 : vars : 0.000e+00
                         :  244 : vars : 0.000e+00
                         :  245 : vars : 0.000e+00
                         :  246 : vars : 0.000e+00
                         :  247 : vars : 0.000e+00
                         :  248 : vars : 0.000e+00
                         :  249 : vars : 0.000e+00
                         :  250 : vars : 0.000e+00
                         :  251 : vars : 0.000e+00
                         :  252 : vars : 0.000e+00
                         :  253 : vars : 0.000e+00
                         :  254 : vars : 0.000e+00
                         :  255 : vars : 0.000e+00
                         :  256 : vars : 0.000e+00
                         : --------------------------------------
                         : No variable ranking supplied by classifier: TMVA_DNN_GPU
                         : No variable ranking supplied by classifier: TMVA_CNN_GPU
TH1.Print Name = TrainingHistory_TMVA_DNN_GPU_trainingError, Entries= 0, Total sum= 4.29256
TH1.Print Name = TrainingHistory_TMVA_DNN_GPU_valError, Entries= 0, Total sum= 6.8511
TH1.Print Name = TrainingHistory_TMVA_CNN_GPU_trainingError, Entries= 0, Total sum= 8.39076
TH1.Print Name = TrainingHistory_TMVA_CNN_GPU_valError, Entries= 0, Total sum= 9.25163
Factory                  : === Destroy and recreate all methods via weight files for testing ===
                         : 
                         : Reading weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
                         : Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_GPU.weights.xml
                         : Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_GPU.weights.xml
Factory                  : Test all methods
Factory                  : Test method: BDT for Classification performance
                         : 
BDT                      : [dataset] : Evaluation of BDT on testing sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0143 sec
Factory                  : Test method: TMVA_DNN_GPU for Classification performance
                         : 
                         : Evaluate deep neural network on GPU using batches with size = 1000
                         : 
output bnorm shape : { 1000 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 1000 } Layout : RowMajor
output bnorm shape : { 1000 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 1000 } Layout : RowMajor
output bnorm shape : { 1000 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 1000 } Layout : RowMajor
TMVA_DNN_GPU             : [dataset] : Evaluation of TMVA_DNN_GPU on testing sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0179 sec
Factory                  : Test method: TMVA_CNN_GPU for Classification performance
                         : 
                         : Evaluate deep neural network on GPU using batches with size = 1000
                         : 
CONV FWD Algo used for convolution of input shape { 1000 , 1 , 16 , 16 } is 0
CONV BWD Data Algo used is 1
CONV BWD Filter Algo used is 0
output shape : { 1000 , 10 , 256 } Layout : ColMajor
tmp shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
output2 shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
output bnorm shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
reshaped data shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
CONV FWD Algo used for convolution of input shape { 1000 , 10 , 16 , 16 } is 6
CONV BWD Data Algo used is 4
CONV BWD Filter Algo used is 0
TMVA_CNN_GPU             : [dataset] : Evaluation of TMVA_CNN_GPU on testing sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0195 sec
Factory                  : Evaluate all methods
Factory                  : Evaluate classifier: BDT
                         : 
BDT                      : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Dataset[dataset] : variable plots are not produces ! The number of variables is 256 , it is larger than 200
Factory                  : Evaluate classifier: TMVA_DNN_GPU
                         : 
TMVA_DNN_GPU             : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on GPU using batches with size = 1000
                         : 
output bnorm shape : { 1000 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 1000 } Layout : RowMajor
output bnorm shape : { 1000 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 1000 } Layout : RowMajor
output bnorm shape : { 1000 , 100 , 1 } Layout : ColMajor
reshaped data shape : { 1 , 100 , 1 , 1000 } Layout : RowMajor
                         : Dataset[dataset] : variable plots are not produces ! The number of variables is 256 , it is larger than 200
Factory                  : Evaluate classifier: TMVA_CNN_GPU
                         : 
TMVA_CNN_GPU             : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on GPU using batches with size = 1000
                         : 
CONV FWD Algo used for convolution of input shape { 1000 , 1 , 16 , 16 } is 0
CONV BWD Data Algo used is 1
CONV BWD Filter Algo used is 0
output shape : { 1000 , 10 , 256 } Layout : ColMajor
tmp shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
output2 shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
output bnorm shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
reshaped data shape : { 1000 , 10 , 16 , 16 } Layout : RowMajor
CONV FWD Algo used for convolution of input shape { 1000 , 10 , 16 , 16 } is 1
CONV BWD Data Algo used is 4
CONV BWD Filter Algo used is 3
                         : Dataset[dataset] : variable plots are not produces ! The number of variables is 256 , it is larger than 200
                         : 
                         : Evaluation results ranked by best signal efficiency and purity (area)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet       MVA
                         : Name:         Method:          ROC-integ
                         : dataset       TMVA_CNN_GPU   : 0.910
                         : dataset       TMVA_DNN_GPU   : 0.898
                         : dataset       BDT            : 0.841
                         : -------------------------------------------------------------------------------------------------------------------
                         : 
                         : Testing efficiency compared to training efficiency (overtraining check)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet       MVA                Signal efficiency: from test sample (from training sample)
                         : Name:         Method:            @B=0.01             @B=0.10            @B=0.30
                         : -------------------------------------------------------------------------------------------------------------------
                         : dataset       TMVA_CNN_GPU   :   0.350 (0.440)       0.729 (0.788)      0.923 (0.932)
                         : dataset       TMVA_DNN_GPU   :   0.275 (0.415)       0.685 (0.788)      0.913 (0.928)
                         : dataset       BDT            :   0.205 (0.300)       0.537 (0.640)      0.808 (0.874)
                         : -------------------------------------------------------------------------------------------------------------------
                         : 
Dataset:dataset          : Created tree 'TestTree' with 2000 events
                         : 
Dataset:dataset          : Created tree 'TrainTree' with 8000 events
                         : 
Factory                  : Thank you for using TMVA!
                         : For citation information, please visit: http://tmva.sf.net/citeTMVA.html
--- Launch TMVA GUI to view input file: TMVA_CNN_ClassificationOutput.root
=== Note: inactive buttons indicate classifiers that were not trained, ===
===       or functionalities that were not invoked during the training ===
root [1] 
--- Opening root file TMVA_CNN_ClassificationOutput.root in read mode
--- Found directory for method: BDT::BDT containing MVA_BDT_S/_B
--- Mean and RMS (S): 0.0405805, 0.0536003
--- Mean and RMS (B): -0.0356769, 0.0539038
Info in <TCanvas::Print>: file dataset/plots/mva_BDT.png has been created
--- Found directory for method: DL::TMVA_DNN_GPU containing MVA_TMVA_DNN_GPU_S/_B
--- Mean and RMS (S): 0.747917, 0.272904
--- Mean and RMS (B): 0.226963, 0.26449
Info in <TCanvas::Print>: file dataset/plots/mva_TMVA_DNN_GPU.png has been created
--- Found directory for method: DL::TMVA_CNN_GPU containing MVA_TMVA_CNN_GPU_S/_B
--- Mean and RMS (S): 0.741874, 0.261995
--- Mean and RMS (B): 0.216507, 0.248904
Info in <TCanvas::Print>: file dataset/plots/mva_TMVA_CNN_GPU.png has been created
--- Found 2 classifier types
--- Found 1 instance(s) of the method Method_BDT
--- Found 2 instance(s) of the method Method_DL
--- Found 2 classifier types
--- Found 1 instance(s) of the method Method_BDT
--- Found 2 instance(s) of the method Method_DL
Info in <TCanvas::Print>: file dataset/plots/TrainingHistory.png has been created