Floating point error handling: Linux vs OS X

ROOT Version: 5.34
Platform: Ubuntu 16.04, OS X El Capitan
Compiler: gcc, cling


I have two installations of ROOT 5.34, one on Ubuntu 16.04 and one on OS X El Capitan. I noticed that when I run gSystem->GetFPEMask() I get 7 on Ubuntu and 0 on OS X. That means that, without calling TSystem::SetFPEMask(), if for example I divide by zero in my program, on Ubuntu it is a fatal error, while on OS X the result is inf and the program continues.
Is this difference in default behaviour intended? If yes, why?


ROOT never calls feenableexcept itself, which would mean that we use the platform defaults. On the other hand, in a hand-compiled C++ program I see no difference in behavior between Linux and macOS.

On Macs, ROOT uses SSE2 and its floating-point exceptions; on Linux it uses feenableexcept. I'm not sure whether this explains the difference in behavior.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.