Hi, I’m trying to use ROOT’s CNN to train a classifier. The dataset is about 3 GB but I only have 32 GB of memory, so it crashes every time while loading the data: log1.txt (5.7 KB).
I think that if we train with SGD, each step only uses a mini-batch of data, so the memory problem could be addressed by keeping only that part of the data in memory. But I have no idea whether such memory management is possible; could anyone give me some advice? Thanks in advance!
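Outside of any particular framework, the general idea of keeping only the current mini-batch in memory can be sketched with a memory-mapped numpy array; each SGD step then reads only one batch from disk. This is a hypothetical illustration (the file name, event count, and batch size are made up), not an existing TMVA feature:

```python
import numpy as np

# Hypothetical on-disk store of events, each a float32 image of shape (2, 60, 60).
# A memory-mapped .npy file lets us slice out mini-batches without loading it all.
n_events, shape = 2_000, (2, 60, 60)
data = np.lib.format.open_memmap("events.npy", mode="w+",
                                 dtype=np.float32, shape=(n_events, *shape))

batch_size = 256
rng = np.random.default_rng(0)
for step in range(3):
    idx = rng.choice(n_events, size=batch_size, replace=False)
    batch = np.asarray(data[idx])  # only this slice is materialized in RAM
```

With this layout the resident memory per step is roughly `batch_size * 2 * 60 * 60 * 4` bytes, regardless of the total dataset size.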
If the dataset is 3 GB, it should fit in memory with 32 GB. I see from the log that the input images are 2 x 60 x 60. How many events do you have? From the log it is not clear whether there is a memory error.
Unfortunately, TMVA currently requires all events to be in memory, although only a mini-batch is used for each computation. The GPU, for example, requires only the mini-batch to be in memory.
If this is an issue, the solution would be to use the low-level interface, where the user feeds events directly to the DNN, but then one would need to re-implement the whole iterative optimization procedure, since the MethodDL class cannot be used.
Thanks for your help! I forgot to mention that the 3 GB data file is in sparse-matrix form, since many of the image pixels are zero. For the input of the network, though, I think it has to be converted back to dense images, which makes it much larger than 3 GB.
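A quick back-of-envelope calculation makes the dense-size concern concrete. Assuming float32 pixels (the precision is an assumption; the 2 x 60 x 60 shape is from the log), each dense event costs about 28 KB, so 32 GB would hold on the order of a million events:

```python
# Dense memory cost of one 2 x 60 x 60 image, assuming float32 pixels.
pixels_per_event = 2 * 60 * 60            # 7200 pixels
bytes_per_event = pixels_per_event * 4    # 28,800 bytes ~ 28 KB
events_in_32gb = (32 * 1024**3) // bytes_per_event
print(bytes_per_event, events_in_32gb)    # 28800, 1193046 (~1.19 million events)
```

So whether the dense dataset fits depends entirely on the event count, which the 3 GB sparse file size alone does not reveal.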
So it seems that it is currently not trivial to train this network with TMVA… I’ll study the low-level interface to see whether reimplementing the training is feasible.
If your image is sparse, with only a few pixels that have hits, you could maybe identify a cluster and use the cluster location, and only the pixels around the cluster, as input to the CNN.
It’s just an idea, but doing some pre-processing first could also help you reduce the input size.
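The cluster idea above can be sketched as follows; `crop_cluster` and the 16-pixel window are hypothetical names and choices, just to show the shape of such a pre-processing step:

```python
import numpy as np

# Sketch of the suggested pre-processing: find the hit cluster in a sparse
# image and keep only a fixed window around it, shrinking the CNN input.
def crop_cluster(image, window=16):
    """image: (2, 60, 60) array; returns a (2, window, window) crop centred
    on the mean position of the non-zero pixels (hypothetical helper)."""
    ys, xs = np.nonzero(image.sum(axis=0))  # positions of hit pixels
    cy, cx = int(ys.mean()), int(xs.mean())
    half = window // 2
    y0 = np.clip(cy - half, 0, image.shape[1] - window)
    x0 = np.clip(cx - half, 0, image.shape[2] - window)
    return image[:, y0:y0 + window, x0:x0 + window]

img = np.zeros((2, 60, 60), dtype=np.float32)
img[:, 30:34, 40:44] = 1.0   # a small cluster of hits
crop = crop_cluster(img)
print(crop.shape)            # (2, 16, 16)
```

A 16 x 16 crop is (60/16)^2 ≈ 14 times smaller per channel than the full image, so the dense dataset would shrink by roughly that factor, at the cost of discarding pixels far from the cluster.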