JIT optimization level?

In cling’s interactive help text, there’s an “O” option to “Set[…] the optimization level”, but it’s marked as “(not yet implemented)”. Is this a feature that can be toggled, either as a flag to cling or during cling’s compilation? Or is the feature completely unavailable?

I did a simple performance test (add numbers to elements of a vector 10,000 times) and found that cling’s execution time was about 30% faster than “clang++ -O0” but orders of magnitude slower than “clang++ -O3 -march=native”. Naive question: can the LLVM JIT get anywhere near the performance of “clang++ -O3 -march=native” ?


Just double checking the possibilities; will get back to you.


You are right, the .O option is not yet implemented. Currently we do JIT optimizations equivalent to -O1.

If you are interested in comparing the different optimization levels, you could tweak this by modifying the line interpreter/cling/lib/Interpreter/IncrementalExecutor.cpp:92 to use one of

enum Level {
  None,       // -O0
  Less,       // -O1
  Default,    // -O2, -Os
  Aggressive  // -O3
};

By definition a JIT (Just-In-Time) compiler cannot quite match the performance of static compilation (here, clang). The JIT usually introduces a modest compilation overhead, coming from compiling each function at runtime.

In your example you presumably have a loop testing the performance, which means the JIT kicks in only once, to compile the function containing the loop; the performance results should therefore be very close to those of static compilation with clang.

That said, I’d be very interested to see this performance study :wink:

PS: Another important aspect: if the optimization level is increased, the JIT will spend more time trying to optimize each function it compiles, which might result in a runtime slowdown. In this respect we cannot blindly turn it up to -O2 or -O3 (-O3 sometimes breaks the correctness of the code).

…plus changing the optimization level at runtime is a non-obvious operation for a compiler (…library). But we will offer a parameter to set this at startup.

Cheers, Axel.

gcc.gnu.org/onlinedocs/gcc/Opti … tions.html: “Certain ABI changing flags are required to match in all compilation-units … This includes options such as -freg-struct-return and -fpcc-struct-return.”

Unfortunately I was not able to find a definitive list of ABI-changing optimization options anywhere – doesn’t this mean that the statically compiled code and the JIT-ed code have to be compiled with the exact same options (e.g. -O3) to be on the safe side? IOW, shouldn’t the default be

$ llvm-config --cxxflags

… -O3 -fomit-frame-pointer -std=c++11 -fvisibility-inlines-hidden -fno-exceptions -fno-rtti -fPIC -ffunction-sections -fdata-sections

for JIT (instead of the current -O1)?


On top of that, we use clang for the JIT, while you are referring to GCC.

So far we have not seen optimization-dependent issues. But changing the opt level during the lifetime of a compiler (or the runtime of cling, same thing) is a completely different story…

Cheers, Axel.

Hopefully I am worried about nothing.

I think this is not a concern for people using cling as an interpreter, because then the interpreted code is self-contained and does not communicate with the statically compiled code (cling itself). Using cling as a JIT compiler, however, means passing objects across the boundary between the statically and dynamically compiled code (e.g. constructing an object in cling, then processing it with the JIT-ed code).

Sorry for the GCC link, but that’s all I found on the topic (nothing regarding cling). To some extent it does apply, as the statically compiled code (cling itself) is compiled with GCC, so if I change the small-struct passing flag when building cling, things will break for me (not for someone using the interpreter) – but that’s very contrived, and I would deserve what I get. (On the other hand, I would like to change some options used for building cling; e.g. omitting the frame pointers means I currently cannot profile.)

Still, the problem itself is not compiler specific. To give you a made-up example (I do not know how GCC or clang implement RTTI): cling is compiled without RTTI, and I suspect that if I JIT code within cling with RTTI turned on and then try to get RTTI for an object passed in from the statically compiled part, things will break (the vtable layout may be different, missing some information, or the vtable of a base class may have been reused for a derived class, making them indistinguishable). As with the struct flag above, I will not do this because I already suspect it can be problematic. But I am worried that there are flags where it is not obvious that changing them leads to subtle bugs down the road (hopefully there is no such flag in the diff between -O3 and -O1, and never will be, but I do not know for sure – maybe you do).


We use cling to call into multiple GBs of shared libraries, in thousands of processes per hour. Large scale, production grade. We have not seen this issue. We have seen others (little dark corners of ABI incompatibility between clang and GCC) - but not this one.

Combining binary parts with RTTI on or off is always problematic. We propagate the RTTI flag from cling’s compilation into the runtime - so we should be safe.

If we run into such a flag that triggers an incompatible ABI we’ll fix it!

Cheers, Axel.