
When to compile macros


#1

Hello,

Doing most of my daily work in ROOT v5, I used to compile all macros that did anything more than reading a few histograms from a file and drawing them. This was partly for performance (macros heavy on loops, fstream I/O, fitting, or working with many objects often ran ~100x faster when compiled), but mainly because of C++ itself: some code was simply too complicated for CINT to interpret. Unfortunately this had a side effect: when I started several executions of the same macro with different arguments at the same time

for i in 1 2 3 4 ; do
  root -b -l -q 'macro.C+("'$i'")' >& log.$i.txt &
done

I would get crashes, presumably because the library was being rewritten by four processes at the same time, or perhaps because ROOT could not decide whether it needed to recompile or not.

However, now with Cling in ROOT v6, it looks like the interpreter is capable of handling all the macros I checked, and I see no performance improvement from compiling (it may be that I did not hit the case of a macro that CINT could interpret but that would need ACLiC purely for performance reasons). Compilation just adds a few seconds of overhead.

So in the end I started to wonder whether it actually makes sense to compile macros when using Cling in ROOT v6 and, if so, in which cases (apart from the obvious one where the interpreter would fail to parse the macro)? The https://root.cern.ch/compiling-macros page still seems to be mainly about CINT in ROOT v5.

Regards,
Antoni


#2

Hi Antoni,

nice questions.
Let me start from the bottom of your message:

So in the end I started to wonder whether it actually makes sense to compile macros when using Cling in ROOT v6 and, if so, in which cases (apart from the obvious one where the interpreter would fail to parse the macro)?

This should not be a reason to use ACLiC: Cling is powered by Clang and as such it should be able to digest all C++ (to the best of our knowledge this is the case: don’t hesitate to post a message if you stumble on an exception!).

However, now with Cling in ROOT v6, it looks like the interpreter is capable of handling all the macros I checked, and I see no performance improvement from compiling (it may be that I did not hit the case of a macro that CINT could interpret but that would need ACLiC purely for performance reasons). Compilation just adds a few seconds of overhead.

The libraries built with ACLiC preserve the compilation flags of ROOT, which by default include the “-O3” optimisation level. The interpreted code is not transformed into binary with such a high optimisation level, because we want to give immediate feedback to the user in interactive usage (the more optimisations, the more time the compiler needs to compile). There are cases where a macro compiled with ACLiC beats its interpreted version even after paying the initial compilation time: these are the cases where the “-O3” optimisations shine.

An additional feature ACLiC provides is the automatic generation of dictionaries for all classes contained in the macro, which then allows doing I/O with those classes. This is not directly related to runtime performance, if you will, but rather to usability.

I hope this helps!

Cheers,
D


#3

Hi Danilo,

Oh it helps, thank you.

So indeed it seems that while with CINT the encouraged practice was to compile macros as a rule and use the interpreter only for the simplest tasks and fast prototyping, with Cling this changes: the cases where compilation is necessary are much rarer. Given that for really complicated things, like several large classes plus some functions, I build a library anyway rather than stuffing everything into a single macro (if I need I/O I would not even consider not having a separate library), it looks like I can rely on the interpreter quite safely.

So roughly what optimization level does the interpreted code correspond to? With CINT I always had the impression that when there was a loop, it would more or less reinterpret its body on each iteration.

Cheers,
Antoni


#4

As a side note, the crashes from running the same not-yet-compiled macro in several processes can be fixed in v5 and v6 with something like:

root -b -l -q -e '.L macro.C+' >& log.compilation.txt
for i in 1 2 3 4 ; do
  root -b -l -q 'macro.C+("'$i'")' >& log.$i.txt &
done

#5

@pcanal Thanks! I wonder why I never noticed the usefulness of -e for this case…

It raises another question about Cling. Suppose I have a compiled library which I load in .rootlogon.C so that the macro can already use it. Suppose I want to kick it out of .rootlogon.C (I actually need to work with two different versions of ROOT, so for now I load it conditionally on the value of gROOT->GetVersionInt()). Now I see that I could probably do something like

root -b -l -q -e '.L library.so; .x macro.C'

having a search path for libraries in .rootrc. Is there a way to preload a library before macro execution, without loading it in .rootlogon.C and without the above construction, such that Cling loads the library when it sees a certain namespace name, class name, or function? I thought .rootmap was meant for this, but I failed to set it up.


#6

@pcanal Never mind, I understood rootmap. Thanks again for the ‘-e’ solution.


#7

The command from #5 would need to be written:

root -b -l -q -e '.L library.so;' -e '.x macro.C'

because you can only have one meta command (.L, .x) per line.

To load a library from a script you can also consider using R__LOAD_LIBRARY, but a rootmap file is your best solution.
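For reference, a ROOT 6 rootmap file is a small text file placed next to the library that tells Cling which library to autoload when it first sees a given name. A minimal sketch, assuming a hypothetical library libMyLib.so providing a class MyClass declared in MyClass.h (all names invented for illustration), looks roughly like:

```
{ decls }
class MyClass;

[ libMyLib.so ]
# Names that trigger autoloading of this library
class MyClass
header MyClass.h
```

Such files are normally not written by hand but generated together with the dictionary, e.g. with rootcling’s -rmf/-rml options (`rootcling -f MyDict.cxx -rmf libMyLib.rootmap -rml libMyLib.so MyClass.h LinkDef.h`); ROOT then picks them up from the library search path, so the first mention of MyClass at the prompt or in a macro pulls in the library automatically.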


#8

@dpiparo

The libraries built with ACLiC preserve the compilation flags of ROOT, which by default include the “-O3” optimisation level. The interpreted code is not transformed into binary with such a high optimisation level

So roughly what optimization level does the Cling-interpreted code correspond to?


#9

That would be -O0 – interpreted code is not optimized.

