I have installed ROOT 6.32.06 on my device, but it does not have ‘pyroot’ and ‘cuda’ capabilities, probably because I downloaded it from the ‘.exe’ installer on the website. Can someone guide me on how to install the full version of ROOT on Windows manually? I think the installer does not include the features I want.
ROOT Version: 6.32.06
Platform: W11 23H2
Compiler: Visual Studio??
Hi Aditya,
It’s the holiday season here, so please expect some delays in the answers.
Let me add @bellenot to the loop.
In the meantime, I propose having a look at 6.34.02: Release 63402 - ROOT
Let us know how this goes.
Cheers,
D
Hi Danilo,
Thank you for the update! I completely understand that it’s the holiday season and that there may be some delays in responses. I’ll wait for you to come back with more details and for @bellenot to be added to the loop.
I would also like to ask whether there is a way to install/build ROOT on Windows from scratch without requiring the MSVC compiler that comes with Visual Studio, because that compiler is poorly optimized and the folders created by Visual Studio take an enormous amount of space.
A rough performance comparison can be found in ROOT Performance in Linux vs Windows - ROOT - ROOT Forum.
Looking forward to hearing back from you.
Cheers,
Aditya
Hi,
First, sorry for the delay. Then, PyROOT should be available with the binaries, but not CUDA (we never tried to enable CUDA on Windows). And Visual Studio is required. Note that you can also use WSL as an alternative…
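For example, a quick way to verify that PyROOT is there in the binary installation is something like the following (just a minimal sketch; it assumes you set up the environment first, e.g. with thisroot.bat, and that your Python version matches the one the binaries were built for):
```python
# Minimal check that PyROOT works with the installed Windows binaries.
# Assumes the environment was set up first (e.g. by running thisroot.bat)
# and that the Python version matches the one the binaries were built for.
import ROOT

print(ROOT.gROOT.GetVersion())   # should print 6.32.06 (or whatever is installed)

# A trivial call through the bindings to make sure they really work:
h = ROOT.TH1F("h", "PyROOT check", 100, -3.0, 3.0)
h.FillRandom("gaus", 10000)
print("Histogram entries:", h.GetEntries())
```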
Cheers, Bertrand.
Hi Bellenot,
Sure, I will try installing CUDA on my WSL. In the meantime, can you give me detailed steps to install ROOT in WSL with CUDA? I tried to find detailed instructions but was unable to do so. I am planning to use the Ubuntu 24.04.1 LTS available on the Microsoft Store (which I guess is WSL2).
Also, I was not able to find a proper list of compatible CUDA versions (like TensorFlow supports version 12.3 and PyTorch supports version 12.4). Can I get more info on that, so that I download the other things in WSL accordingly?
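Once I manage to install ROOT in WSL, I assume I can check whether the build actually has the features I want with something like this (a rough sketch; I am not sure of the exact feature names, I assume “cuda” and “tmva-gpu” are the relevant ones):
```python
# Rough sketch: after installing ROOT in WSL, print the features the build was
# configured with. I assume "cuda" (RooFit) and "tmva-gpu" (TMVA) are the ones
# that matter here.
import ROOT

features = ROOT.gROOT.GetConfigFeatures()
print(features)
print("cuda enabled:", "cuda" in features)
print("tmva-gpu enabled:", "tmva-gpu" in features)
```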
Cheers,
Aditya
I have no experience with CUDA either on Windows or Ubuntu. Did you try to search the forum? Or maybe @jonas or @moneta can help.
CUDA related:
@ferhue, the GitHub items you linked are about compiling all of ROOT with the Nvidia compiler, not the CUDA features in particular.
@Aditya_Sharma, are you sure you need the “cuda” features? For now, the only CUDA features in ROOT are the TMVA and RooFit CUDA backends, and most of our ROOT users don’t even use RooFit or TMVA.
Nobody here has experience with ROOT+Cuda on WSL, so getting help will be hard. Let’s first try to figure out if this is really necessary. If you just want to enable “cuda” for the sake of having a “full build of ROOT”, I can already tell you it’s not worth it.
Hi Jonas,
I don’t remember all the features I used when I first wrote code in ROOT about a year ago, but I remember that at the very end I had to use RooStats to analyze data from 10 files combined. This took a lot of time on the native Windows version of ROOT, which was frustrating because the code often didn’t work. So I wanted CUDA to be able to analyze the files much more quickly.
Also, I now want to try writing deep learning code for these files, so I was hoping to have CUDA C++ support.
What kind of fits are you doing in your RooStats analysis? The CUDA backend can give a significant speedup for unbinned fits where the likelihood function sums over millions of events or more. But it doesn’t give any advantage for binned fits, because often one doesn’t have many bins anyway and potential for parallelization is low. For the big unbinned fits it’s worth it though!
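To give an idea, with a recent ROOT (6.30 or newer, where the EvalBackend option exists) the CUDA backend is just an option you pass to fitTo(). Roughly like this, as a sketch and assuming a build with the “cuda” feature enabled:
```python
import ROOT

# Sketch of an unbinned fit that can be offloaded to the GPU.
# Assumes ROOT was built with the "cuda" feature; otherwise use "cpu".
x = ROOT.RooRealVar("x", "x", -10, 10)
mean = ROOT.RooRealVar("mean", "mean", 0, -10, 10)
sigma = ROOT.RooRealVar("sigma", "sigma", 1, 0.1, 10)
model = ROOT.RooGaussian("model", "model", x, mean, sigma)

# Large unbinned dataset: this is where the CUDA backend can pay off.
data = model.generate({x}, 5_000_000)

# EvalBackend selects the likelihood evaluation backend: "cpu", "cuda", or "legacy".
result = model.fitTo(data, EvalBackend="cuda", Save=True, PrintLevel=-1)
result.Print()
```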
And what do you mean by “deep learning code”? Nowadays, one usually trains deep neural networks outside ROOT with PyTorch or TensorFlow. Or do you plan to use something else? In that case, this has nothing to do with the CUDA features of ROOT.
As far as I can tell by looking at the code, what I wrote were ‘unbinned’ fits: I did not explicitly group the data, and I used a RooDataSet to perform the fit.
For the deep learning part, I was mostly planning to write code in PyROOT using PyTorch, and if I have some free time, to try writing DL code from scratch in CUDA C++ for “.root” files.
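Something along these lines is what I have in mind for the PyROOT + PyTorch part (a rough sketch; the file name, tree name, and branch names are just placeholders):
```python
import numpy as np
import ROOT
import torch

# Rough sketch: read flat branches from a .root file into numpy arrays with
# RDataFrame.AsNumpy, then hand them to PyTorch. "events.root", "tree" and the
# branch names below are placeholders, not files from my actual analysis.
df = ROOT.RDataFrame("tree", "events.root")
arrays = df.AsNumpy(columns=["feature1", "feature2", "label"])

features = torch.from_numpy(
    np.stack([arrays["feature1"], arrays["feature2"]], axis=1).astype(np.float32)
)
labels = torch.from_numpy(arrays["label"].astype(np.float32))

# The actual network and training loop would go here; this only moves the data
# to the GPU if one is visible inside WSL.
device = "cuda" if torch.cuda.is_available() else "cpu"
features, labels = features.to(device), labels.to(device)
print(features.shape, labels.shape, device)
```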