I was trying to measure the throughput of both compressed and uncompressed ROOT files on Linux using the Eclipse IDE and on Windows using Visual Studio 2013. The ROOT version was 5.34. I am finding that the throughput on Windows is degraded compared to Linux. Is this degradation expected? The underlying hardware is an Intel® Core™ i5-4400E CPU @ 2.70GHz with an SSD ATA device.
Hi, what do you mean by degradation? Does the throughput degrade over time, or are you comparing Windows vs. Linux? Can you quantify it? Is the software built with equivalent optimization flags?
By degradation I mean: writing up to 500 MB of data to an uncompressed ROOT file, I observed a write throughput of 65 MB/s on Linux (using the Eclipse IDE) versus 20 MB/s on Windows (using Visual Studio). This difference in rates between operating systems is what I am calling degradation. Does anybody have an idea why the rates differ? Can someone please help me validate the results, i.e. how do I check whether these throughput rates are correct?
Measuring throughput is a very good overall benchmark, but unfortunately it is made of many ingredients: essentially every detail of your system influences it (processor type, processor clock, disk, I/O connection, OS and version, compiler, optimization flags, software (ROOT), etc.). So only if you run on the same hardware (processor, disk), and with the same compiler and software (ROOT), can you say something about performance differences due to the operating system alone.