There has been a post recently about issues with opening large files using C-style file I/O. I have a related question about large file I/O using fstream.
When I use ifstream to open large (over 2 GB) files from within CINT, it fails on Ubuntu Linux and Solaris, but works just fine on Windows Vista (how embarrassing!), even in the 32-bit version. I am using ROOT 5.24.00.
After this, I did a lot of googling to see how to get large file support in fstream, and did not find a clear answer.
What is the best way to open large files using fstream in ROOT, on 32-bit Linux/Unix/Solaris? It looks like there has to be a way, since it works just fine on Windows from within CINT, without any special flags.
Thanks a lot.
As a first step, did you try the same exercise after compiling the script with ACLiC (to tell whether the problem is with CINT or iostream)?
I tried it on Solaris, and it looks like fstream fails to open files over 2 GB in standalone code as well. I compiled the attached file test.c standalone on Solaris, and when I run it, the output is:
The first number shows the maximum file size fstream can handle, and the next line is the error message from the part where the code checks whether the stream opened correctly.
From CINT, using the file test_root.c, I always get an error on large files, both with and without compiling using ACLiC.
Now, I am able to open large files from within ROOT on Windows (32-bit). So I am confused about this different behavior on Windows vs. Unix.
By the way, I tried flags like -D_LARGEFILE_SOURCE and -D_FILE_OFFSET_BITS=64 during standalone compilation, and they do not help.
test_root.c (190 Bytes)
test.c (243 Bytes)
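The attached files are not reproduced inline, so their exact contents are an assumption; a minimal check along the lines described above (print the largest offset the iostream implementation can represent, then test whether a file opens) might look like this:

```cpp
#include <fstream>
#include <limits>

// Largest stream offset this iostream implementation can represent.
// On a 32-bit build without large-file support this may be as small as
// 2^31 - 1, i.e. exactly the 2 GB barrier discussed here.
long long max_stream_offset() {
    return (long long)std::numeric_limits<std::streamoff>::max();
}

// Returns true if the file could be opened with ifstream; for a file
// over 2 GB this is where the failure shows up on 32-bit Linux/Solaris.
bool can_open(const char* path) {
    std::ifstream in(path, std::ios::binary);
    return in.good();
}
```

On a platform with a 64-bit streamoff, max_stream_offset() reports a value far above 2 GB and can_open() succeeds on large files; on the affected 32-bit builds, the open fails.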
[quote]I tried on solaris, and looks like fstream fails to open files over 2GB in standalone code as well[/quote]Then this is an issue with the Solaris compiler and/or its implementation of the C++ standard library. I am not familiar with this compiler, and you should probably contact Sun about this issue.
[quote]32-bit computing constrains file sizes to no more than two gigabytes. That limits many applications in active use today, especially those in the areas of database management, video processing, application service and a range of enterprise software. 64-bit operating systems, however, manage file systems with files larger than most other hardware can yet support.[/quote]
From: software.intel.com/en-us/article … computing/
it looks like your STL implementation is REALLY limited to 32 bits.
I do read large files on 32-bit systems, though, using strictly the “C” interface: fopen(), fread(), fclose() etc. with #define _FILE_OFFSET_BITS 64
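To make that concrete, here is a minimal sketch of the C-style approach (the helper name and the glibc/Linux assumption are mine, not from the post): defining _FILE_OFFSET_BITS before any system header makes off_t, fseeko() and ftello() 64-bit, so even a 32-bit binary can address files past 2 GB.

```cpp
// Must come before any system header so that off_t, fseeko() and
// ftello() use 64-bit offsets even in a 32-bit build (glibc).
#define _FILE_OFFSET_BITS 64
#define _LARGEFILE_SOURCE
#include <stdio.h>

// Return the size of a file via fseeko()/ftello(), or -1 on error.
// With _FILE_OFFSET_BITS=64 this works past the 2 GB mark.
long long file_size(const char* path) {
    FILE* f = fopen(path, "rb");
    if (!f) return -1;
    if (fseeko(f, 0, SEEK_END) != 0) { fclose(f); return -1; }
    off_t size = ftello(f);
    fclose(f);
    return (long long)size;
}
```

The same pattern (fopen/fseeko/fread/fclose) covers reading the file contents, not just its size.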
So, for this one - ditch STL
Thanks for the info.
By the way, I forgot to mention that the problem is seen not only on Solaris, but also on Ubuntu Linux, which uses GNU compiler. While I have not tried it on other linux flavors, but maybe this issue exists on other unix like OS’s as well.
Also, if fstream can’t handle large files, then the TTree::ReadFile() method also becomes limited, since it won’t be able to read large files on several platforms. That is unfortunate, since the ability to handle large data volumes is one of the major strengths of ROOT.
I am not suggesting at all that the issue is actually in ROOT. But it certainly will be good to have a solution that works universally.
We already sorted out that ifstream does not work and that it is not part of ROOT. What is wrong with ROOT? I can only guess what exactly you want. Maybe you want ROOT to provide a substitute for ifstream? It might already exist, I am not sure. But in any case, why reinvent the wheel? Just get a good STL implementation. Here is a very popular one: stlport.sourceforge.net/
P.S. what is wrong with fread() ?
Since recently (i.e. in the trunk of ROOT in Subversion), we support STLport as provided by OpenSolaris. You can give that a try if you want.
Alternatively, you might be able to split your file and then read in the parts and merge the resulting trees: your data is safe once it is in ROOT files.
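One way to do the splitting step without relying on fstream (since that is exactly what fails here) is the C interface discussed earlier in the thread; the function below is only a sketch, and the part-naming scheme and line-based chunking are my own choices, not from this thread:

```cpp
#define _FILE_OFFSET_BITS 64  // 64-bit off_t so fopen() can read past 2 GB
#include <stdio.h>

// Split a large ASCII file into parts of at most lines_per_part lines,
// named <base>.part0, <base>.part1, ... Each part can then stay under
// the 2 GB fstream limit. Returns the number of parts, or -1 on error.
int split_file(const char* in_name, const char* base, long lines_per_part) {
    FILE* in = fopen(in_name, "r");
    if (!in) return -1;
    char line[4096];
    FILE* out = NULL;
    int part = 0;
    long count = 0;
    while (fgets(line, sizeof(line), in)) {
        if (count == 0) {            // start a new part file
            if (out) fclose(out);
            char name[1024];
            snprintf(name, sizeof(name), "%s.part%d", base, part++);
            out = fopen(name, "w");
            if (!out) { fclose(in); return -1; }
        }
        fputs(line, out);
        if (++count == lines_per_part) count = 0;
    }
    if (out) fclose(out);
    fclose(in);
    return part;
}
```

Each resulting part can then be read separately (e.g. with TTree::ReadFile()) and the resulting trees merged afterwards.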