Long startup time for root.exe or rootls

Fedora 34/ v6-24-00-patches @5af1fa4d3d

Hi! After some rebuilds, both root.exe and rootls hang for a long time (many minutes) on this:

50281  |       |   \_ root.exe -l -h
50611  |       |       \_ sh -c LC_ALL=C ccache   -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'
50614  |       |           \_ ccache -xc++ -E -v /dev/null
50615  |       |           \_ sed -n -e /^.include/,${ -e /^ \/.*++/p -e }

Any idea what is going on?
Thank you!

Hi @adrian_sev ,
I’m not sure what the cause might be, but I think support for Fedora 34 is not quite there yet. @Axel will be able to comment with more authority (when he’s back next week).

Cheers,
Enrico

erm… i doubt that this has anything to do with distro support (especially since all Fedoras have built and distributed ROOT for quite some time)

so, i found this sed line in

interpreter/cling/lib/Interpreter/CIFactory.cpp:114
interpreter/cling/lib/Interpreter/CMakeLists.txt:256

i do not understand what is going on (i see no cache being written), but it would be nice if this could be reduced … on a SATA SSD i have to wait a good few minutes most times i rebuild

FYI, root-ls (from Go-HEP/groot) is a statically built binary that doesn’t invoke anything (no sed, no interpreter).
it’s quite fast.
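
if you want to try it, one way to install it (assuming a recent Go toolchain and the usual Go-HEP module path) is:

go install go-hep.org/x/hep/groot/cmd/root-ls@latest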

actually, some people use it for [TAB]-completion in their shell.

hth,
-s

Seeing ccache in your output is quite suspicious. I guess that’s what’s slowing things down. We should probably file a bug with Fedora to build without ccache. It looks like that gets recorded as the compiler, which ROOT then uses to figure out which include paths it needs to check for headers at startup.
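
If you want to double-check what got recorded, a quick sketch (the CMakeCache.txt path below is just a placeholder for your build directory):

# compiler ROOT reports it was built with
root-config --cxx
# or inspect the build directory's CMake cache directly
grep -E 'CMAKE_CXX_COMPILER(:|_LAUNCHER)' /path/to/root-build/CMakeCache.txt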

no, this is my own build, and of course i use ccache so as not to lose 30 minutes on every rebuild!! i do not know what the intention of that command is, but i wonder what g++ would do faster than its invocation through ccache when ccache has all the object files cached … if there is an operation that is faster going directly to g++ than through ccache, then that would be a bug in ccache

Nominally this code path (in Cling) is gathering the list of include directories. On my machine this takes 0.078s (according to ‘time’).

Can you try in the same shell where you use root:

LC_ALL=C ccache   -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'

and see how long this takes (and, for comparison's sake, do the same thing swapping ccache for g++).
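
For example (a minimal sketch, assuming bash, where time covers the whole pipeline):

time LC_ALL=C ccache -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'
time LC_ALL=C g++ -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'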

Cheers,
Philippe.

The slowdown may not come from running ccache in this command itself, but from a consequence of it. At least on my machine, I get this:

epsftws ~ $ LC_ALL=C g++ -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'
 /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/include/g++-v10
 /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/include/g++-v10/x86_64-pc-linux-gnu
 /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/include/g++-v10/backward
epsftws ~ $ LC_ALL=C ccache -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'
epsftws ~ $ 

This means that only when you use gcc/g++ do you actually get the list of includes as intended; the list is empty when the compiler is set to ccache. This may cause unnecessary searches later on. I think that in ROOT we need to make sure that we never set the compiler to ccache, because that is broken. If you cut the command at the first pipe, you get:

epsftws ~ $ ccache -xc++ -E -v /dev/null 2>&1
ccache: invalid option -- '+'

which clearly shows that it doesn’t work. You’d need to call it like this:

epsftws ~ $ LC_ALL=C ccache gcc -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'
 /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/include/g++-v10
 /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/include/g++-v10/x86_64-pc-linux-gnu
 /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/include/g++-v10/backward

to get the desired result. In any case, we should probably file an issue to get this sorted out.
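
As a possible workaround in the meantime (just a sketch, not an official recommendation; paths are illustrative), you can let ccache masquerade as the compiler via symlinks instead of setting the compiler to ccache itself, so a probe like the one above still behaves like g++:

mkdir -p ~/ccache-bin
ln -sf "$(command -v ccache)" ~/ccache-bin/gcc
ln -sf "$(command -v ccache)" ~/ccache-bin/g++
export PATH=~/ccache-bin:$PATH
# g++ now runs through ccache, and ccache forwards -E -v to the real compiler,
# so the include-path probe above prints the expected directories.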

Cheers,

@adrian_sev Could you please file an issue in our repository, attaching the files CMakeCache.txt and recmake_initial.sh from your build of ROOT? Thank you.

well, it seems that this is a heisenbug, as i cannot reproduce it anymore :frowning: if it happens again i will try to make it reliably reproducible before reporting it … Thank you!
