Hello, I usually run ML programs whose inputs do not follow a simple signal+background relation.
In practice I deal with 32-second time windows (vectors/histograms). Each window is either pure signal or pure noise. The two are combined in the Fourier domain before being processed by the classical programs. So in practice the classical algorithms take as input either "signal convolved with background" or "background without signal".
From the documentation, it seems TMVA expects either pure signal or pure background…
Would you have any suggestions on how to deal with this? Is there any documentation or example program dealing with such input?
I can either preprocess the data myself and convolve signal with background, or input pure signal and pure background if there is a preprocessing step in TMVA where this convolution can be performed.
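In case it helps clarify what I mean by the first option, here is a minimal sketch of the preprocessing step, done outside TMVA. All the specifics (sampling rate, toy signal shape) are illustrative assumptions; only the 32-second window length comes from my actual setup. Multiplying the spectra of the two windows is equivalent to circularly convolving them in the time domain:

```python
import numpy as np

# Assumed sampling rate [Hz]; the real value depends on the detector.
fs = 128
n = 32 * fs  # samples in one 32-second window

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5.0 * np.arange(n) / fs)  # toy pure-signal window
noise = rng.normal(size=n)                            # toy pure-noise window

# Multiplication in the Fourier domain == circular convolution in time.
combined = np.fft.irfft(np.fft.rfft(signal) * np.fft.rfft(noise), n=n)

print(combined.shape)  # (4096,)
```

The classifier would then see `combined` as the "signal" class and `noise` alone as the "background" class, which matches the signal/background split TMVA expects.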