Hi, my current working environment is ROOT 6.26.14, Python 3.8.19, Windows 10, and PyTorch 2.2.
I decided to start my work from the example code (`tutorials\tmva\pytorch\RegressionPyTorch.py`) shipped with the tutorials. It works well, but I noticed that this script only uses the CPU and never touches the GPU. Since PyTorch is clearly capable of running on the GPU, I tried inserting `model.cuda()` into the script, and running it again produced an error. The error message is listed below:
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
: Failed to run python code: trained_model = fit(model, train_loader, val_loader, num_epochs=numEpochs, batch_size=batchSize,optimizer=optimizer, criterion=criterion, save_best=save_best, scheduler=(schedule, schedulerSteps))
: Python error message:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "RegressionPyTorch.py", line 87, in train
output = model(X)
File "C:\Users\phoenixAspies\.conda\envs\Python38\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\phoenixAspies\.conda\envs\Python38\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
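For reference, the change I made is essentially just this one added call (the model here is a placeholder standing in for the one built in `RegressionPyTorch.py`; the layer sizes are not the tutorial's exact values, and I guard the call so the snippet also runs on CPU-only machines):

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the one defined in RegressionPyTorch.py
model = nn.Sequential(
    nn.Linear(2, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# The line I inserted into the tutorial script (guarded here for portability)
if torch.cuda.is_available():
    model.cuda()
```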
I believe my PyTorch and CUDA setup is correct: when I directly run `import torch; x = torch.randn(4, 4); x.cuda()`, the tensor is placed on the GPU successfully.
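Spelled out, that quick sanity check is the following (it falls back to the CPU when no GPU is visible, so the snippet itself runs anywhere; on my machine `torch.cuda.is_available()` is True):

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # True in my environment

x = torch.randn(4, 4)
if torch.cuda.is_available():
    x = x.cuda()  # succeeds on my machine, so CUDA itself works
print(x.device)
```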
Does anyone know how to get the GPU to work when using TMVA with PyTorch?