Hi all, I noticed that a compiled analysis code was using increasing amounts of memory depending on a command-line parameter, and I think I have boiled the issue down to the simple code in the attached tar file.
ram.tar.gz (79.9 KB)
It does the following: define an array of TH2Ds, instantiate them with nbinsx = nbinsy, then delete them. The problem is that I don’t get all the memory back.
If the TH2D array has 100 members, and each TH2D has 1000x1000 bins, I lose ~1.22 MB per TH2D.
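For reference, here is the gist of what the test does (a sketch with my own names, not the actual code from the tar file):

```cpp
#include "TH2D.h"
#include "TString.h"

// Sketch of the test: build an array of 1000x1000-bin TH2Ds,
// then delete them all.
const int kNHists = 100;
const int kNBins  = 1000;

void makeAndDelete()
{
    TH2D* hists[kNHists];
    for (int i = 0; i < kNHists; ++i)
        hists[i] = new TH2D(Form("h%d", i), "test",
                            kNBins, 0., 1., kNBins, 0., 1.);
    // getRSS() reported here (step 1) shows ~780 MB in use
    for (int i = 0; i < kNHists; ++i)
        delete hists[i];
    // after the deletes (step 2), ~1.22 MB per TH2D is not
    // returned on OS X
}
```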
I use a ‘getRSS’ routine to show the memory usage at various places in the code, and the values I get are consistent with what I see from the ‘top’ command.
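In case it matters: getRSS just reads the process's resident set size. A minimal Linux-only stand-in is below (the name is mine; the attached code presumably uses Mach's task_info on OS X instead of /proc):

```cpp
#include <cstdio>
#include <unistd.h>

// Stand-in for the getRSS routine: returns the current resident set
// size in bytes, read from /proc/self/statm (Linux only; the second
// field of statm is the RSS in pages).
double getRSS()
{
    long rssPages = 0L;
    std::FILE* fp = std::fopen("/proc/self/statm", "r");
    if (!fp) return 0.0;
    if (std::fscanf(fp, "%*s%ld", &rssPages) != 1) rssPages = 0L;
    std::fclose(fp);
    return rssPages * (double)sysconf(_SC_PAGESIZE);
}
```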
Here’s the output from the attached code.
we defined 100 TH2Ds, each with 1000x1000 bins
expected memory for one th2d (MB) = 7.65994
expected memory for 100 th2ds (MB) = 765.994
guess at memory at step1 (MB) = 779.764
step 0 usage=13.7695
step 1 usage=779.977
step 2 usage=136.535
diff (MB) = 122.766
diff (MB) per TH2D = 1.22766
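The “expected memory” numbers above are just the size of the TH2D bin array: ROOT stores (nx+2)*(ny+2) doubles so that each axis has an underflow and an overflow bin in addition to the regular bins. A quick check (function name is mine):

```cpp
#include <cstddef>

// Expected heap size (in MB) of one TH2D's bin array:
// (nx+2)*(ny+2) doubles, the extra 2 per axis being the
// underflow and overflow bins.
double expectedTH2DMB(int nx, int ny)
{
    double bytes = (double)(nx + 2) * (ny + 2) * sizeof(double);
    return bytes / (1024.0 * 1024.0);
}
// expectedTH2DMB(1000, 1000) gives ~7.65994, matching the printout.
```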
I am running ROOT version 5.34.13, compiled 64-bit, under OS X 10.6.8.
When I run the same code on an ‘rcas’ node of the RHIC computing facility, I get back essentially all the memory.
The ROOT version there is 5.34.09, compiled 32-bit for sl64_gcc447.
we defined 100 TH2Ds, each with 1000x1000 bins
expected memory for one th2d (MB) = 7.65994
expected memory for 100 th2ds (MB) = 765.994
guess at memory at step1 (MB) = 784.162
step 0 usage=18.168
step 1 usage=784.312
step 2 usage=18.3008
diff (MB) = 0.132812
diff (MB) per TH2D = 0.00132813
Back to the OS X machine: the amount of lost memory does not depend on the number of TH2Ds, only on the number of bins per TH2D.
I also made a plot of the amount of unrecovered memory per TH2D (in MB) versus nbins, which is kind-of neat.
I would appreciate any suggestions on what I could do to get all the memory back in this little test program, as I think the same issue is handcuffing my full analysis code.
Thanks, and have a great weekend
bill