Strange behaviour of new/delete with ROOT objects

Using: ROOT v5.27/04 compiled from source with gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5)

Dear ROOT support,
We have encountered a strange problem: we don’t seem to be in control of which
objects get deleted, or when!
The attached code illustrates the problem, using a dynamically allocated array
of pointers to TNamed (we first encountered the problem with a TList that didn’t seem to
delete its owned members, but then realised that the problem didn’t come from the TList):
First, the pointers are all allocated and immediately deleted in the same loop.
Then, the pointers are all allocated in one loop and deleted in a following loop;
we do this twice.
After the last loop, we deallocate the array of pointers itself.
We then repeat the whole procedure.
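For reference, this is roughly what the attached code does (a reconstruction from the description above, not the attachment itself; the name/title strings and the printMem helper are placeholders):

#include "TNamed.h"
#include "TSystem.h"
#include <cstdio>

// Print the machine-wide used memory as ROOT reports it.
void printMem(const char* tag) {
   MemInfo_t info;
   gSystem->GetMemInfo(&info);
   printf("%s\nMem used = %d MB\n", tag, info.fMemUsed);
}

void test_TNamed(int N) {
   printMem("begin");
   TNamed** p = new TNamed*[N];
   // 1st time: allocate and delete immediately, in the same loop.
   for (int i = 0; i < N; i++) { p[i] = new TNamed("name", "title"); delete p[i]; }
   printMem("1st time:");
   // 2nd and 3rd time: allocate everything in one loop, delete in the next.
   for (int i = 0; i < N; i++) p[i] = new TNamed("name", "title");
   for (int i = 0; i < N; i++) delete p[i];
   printMem("2nd time:");
   for (int i = 0; i < N; i++) p[i] = new TNamed("name", "title");
   for (int i = 0; i < N; i++) delete p[i];
   printMem("3rd time:");
   delete [] p;   // finally deallocate the array of pointers itself
   printMem("after delete [] p:");
}

int main() {
   test_TNamed(10000000);   // the whole procedure, twice
   test_TNamed(10000000);
   return 0;
}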
We find the same surprising result whether we compile and run as a standalone executable
outside of ROOT, or load the code under CINT and execute the function test_TNamed twice,
as is done in main():

[john@laptop ~/work]> g++ `root-config --cflags --libs` test_TNamed.cxx
[john@laptop ~/work]> ./a.out
begin
Mem used = 621 MB
1st time:
Mem used = 659 MB
2nd time:
Mem used = 1576 MB
3rd time:
Mem used = 1576 MB
after delete [] p:
Mem used = 1538 MB
begin
Mem used = 1538 MB
1st time:
Mem used = 1538 MB
2nd time:
Mem used = 1576 MB
3rd time:
Mem used = 1576 MB
after delete [] p:
Mem used = 622 MB

I expected ‘Mem used = 621 MB’ or ‘Mem used = 659 MB’ in all cases!
The first time that test_TNamed is executed, neither
for(int i=0;i<N;i++) delete p[i];
nor
delete [] p;
seems to have any effect on the amount of occupied memory (although we have
checked with gObjectTable that the objects have all been deleted - at least in the
sense that they no longer appear in the object table)!
On the second execution, however, the “memory leak” does not grow any further,
and finally after ‘delete [] p’ we get back the memory we started with.
We have checked the used memory with ‘top’ and with the GNOME system monitor GUI;
in all cases they give the same values as gSystem->GetMemInfo().
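(For completeness, this is how the object-table check can be done - a minimal sketch; object statistics must be switched on before the objects are created:)

#include "TNamed.h"
#include "TObjectTable.h"

void check_table() {
   TObject::SetObjectStat(kTRUE);   // same effect as Root.ObjectStat: 1 in .rootrc
   TNamed* n = new TNamed("name", "title");
   gObjectTable->Print();           // the TNamed shows up in the table
   delete n;
   gObjectTable->Print();           // ...and is gone here, yet ‘top’ shows no change
}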
Finally, in an interactive session, after executing test_TNamed(10000000) once,
we have found that we can cause the memory to be released by
(1) pressing return
(2) using the ‘tab’ key to force the interpreter to complete a variable name or method
(3) typing ‘gObjectTable->Print()’
(4) typing ‘new TCanvas’ or ‘new TBrowser’
but not all of these methods work every time, and which one works at any given moment seems to be quite random.

Please tell us that we are being very stupid, because otherwise we don’t know how to carry on!!
Thanks a lot
test_TNamed.cxx (1.19 KB)

Hi,

This is more of a test of your allocator than of ROOT… On Ubuntu 10.04 64-bit I see

begin
Mem used = 6135 MB
1st time:
Mem used = 6211 MB
2nd time:
Mem used = 7741 MB
3rd time:
Mem used = 7741 MB
after delete [] p:
Mem used = 7665 MB
begin
Mem used = 7665 MB
1st time:
Mem used = 7665 MB
2nd time:
Mem used = 7740 MB
3rd time:
Mem used = 7740 MB
after delete [] p:
Mem used = 6136 MB

and with
$ LD_PRELOAD=/usr/local/lib/libtcmalloc.so ./a.out
I get

begin
Mem used = 6140 MB
1st time:
Mem used = 6217 MB
2nd time:
Mem used = 7312 MB
3rd time:
Mem used = 7312 MB
after delete [] p:
Mem used = 7312 MB
begin
Mem used = 7312 MB
1st time:
Mem used = 7312 MB
2nd time:
Mem used = 7312 MB
3rd time:
Mem used = 7312 MB
after delete [] p:
Mem used = 7312 MB

As you can see, it’s the malloc library’s decision what to give back to the OS and what to keep.
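If you want to nudge glibc into returning its free pages, you can try the glibc-specific malloc_trim hint - a sketch (it is only a hint, and fragmented arenas can still pin the memory):

#include <malloc.h>   // glibc-specific
#include <vector>

int main() {
   std::vector<char*> p(1000000);
   for (size_t i = 0; i < p.size(); i++) p[i] = new char[64];
   for (size_t i = 0; i < p.size(); i++) delete [] p[i];
   malloc_trim(0);   // ask malloc to hand free heap pages back to the kernel
   return 0;
}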

Cheers, Axel.

Hi Axel,

Thanks for the reply… which worries me even more!
The original problem was that we were tracking a memory leak due to storing object pointers
in a TList (which owned the objects): when calling TList::Clear or TList::Delete the memory
was not freed, so we assumed that the list was not deleting its objects as it should.
We have debugged many memory leaks of this type in our software in the past
in exactly this way, the basic hypothesis behind the analysis being: “if we ‘delete’
everything that we ‘new’ when we should, the memory used by the programme will remain constant”.
I don’t know how we could have tracked and fixed those leaks if we had
been in the situation which you have now confirmed independently, i.e. that it is out of our
hands whether the available memory gets saturated or not (I see from your post that you
have apparently got round the problem by buying a machine with >=8GB of RAM :laughing: )

Are you sure this situation is not new (no pun intended)?
I have checked whether I can reproduce this behaviour using arrays of doubles in a completely
non-ROOT context, but (so far) I get the behaviour I expect and have always observed until now.

Thanks a lot

Hi,

It’s not new: this is the well-known phenomenon of memory fragmentation, see e.g. en.wikipedia.org/wiki/Memory_fragmentation
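A minimal illustration of the effect (a sketch, not your code): free every other block, and even though half the heap is then free, hardly any page can be returned because almost every page still holds a live block:

#include <vector>

int main() {
   const int N = 1000000;
   std::vector<char*> blocks(N);
   for (int i = 0; i < N; i++) blocks[i] = new char[64];
   for (int i = 0; i < N; i += 2) {   // free only every other block
      delete [] blocks[i];
      blocks[i] = 0;
   }
   // Half the heap is now free, but the surviving blocks are scattered
   // across (almost) every page, so the allocator cannot shrink the heap.
   for (int i = 1; i < N; i += 2) delete [] blocks[i];
   return 0;
}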

ROOT has a new specialized tool (TMemStat) to visualize fragmentation issues - though it might not be production quality yet. For leak detection you could also use valgrind.
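For valgrind the invocation would be along these lines (ROOT ships a suppression file; the path below assumes $ROOTSYS is set for your install):

$ valgrind --leak-check=full --suppressions=$ROOTSYS/etc/valgrind-root.supp ./a.out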

Cheers, Axel.

OK Axel, thanks for your help.
It looks like we might need to invest in some “memory pooling”… :open_mouth:
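Something along these lines, perhaps - a sketch of a simple free-list pool (our own idea, not a ROOT facility): freed blocks are recycled instead of going back to malloc, so the hot loop can no longer fragment the heap:

#include "TNamed.h"
#include <cstddef>
#include <new>

// Fixed-size free-list pool: released blocks are kept and reused.
class Pool {
public:
   explicit Pool(std::size_t blockSize)
      : fBlockSize(blockSize < sizeof(void*) ? sizeof(void*) : blockSize),
        fFreeList(0) {}
   ~Pool() {
      while (fFreeList) {                    // hand everything back at the end
         void* next = *static_cast<void**>(fFreeList);
         ::operator delete(fFreeList);
         fFreeList = next;
      }
   }
   void* Allocate() {
      if (fFreeList) {                       // reuse a recycled block
         void* p = fFreeList;
         fFreeList = *static_cast<void**>(p);
         return p;
      }
      return ::operator new(fBlockSize);     // pool grows on demand
   }
   void Release(void* p) {                   // recycle instead of freeing
      *static_cast<void**>(p) = fFreeList;
      fFreeList = p;
   }
private:
   std::size_t fBlockSize;
   void*       fFreeList;
};

int main() {
   Pool pool(sizeof(TNamed));
   for (int i = 0; i < 10000000; i++) {
      void* mem = pool.Allocate();
      TNamed* n = new (mem) TNamed("name", "title");  // placement new
      n->~TNamed();                                   // explicit destructor call
      pool.Release(mem);
   }
   return 0;
}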

Cheers
John