High data rate and ROOT

Hello,

I have a question about whether ROOT is the right tool for me, and how I can learn to handle high data rates.

  • For an image acquisition setup I will have a 1 GB/s data transfer rate.

I have to store this data, and I also want to show a snapshot of the images during the measurement.

So my questions are whether I should use ROOT, and where I can learn more about high data rate handling. Does anybody have suggestions for books or lecture notes? Thank you!

I am afraid that we need more details about your configuration!
Do you mean 1 Gigabit/s or 1 Gigabyte/s?
How many images per second does that mean? Only one?
Do you want to compress the images on output?
CPU type, disk type?

Just to give you an idea: on one single core/thread, ROOT I/O can write 200 MBytes/s of uncompressed data
on a hard disk and about 80 MBytes/s when compressing the data.
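For comparison with those ROOT figures, here is a minimal plain-C++ sketch (not ROOT's actual I/O path) for measuring the raw sequential write speed of a disk; the function name and sizes are just illustrative, and the result is only honest when the total volume exceeds the OS page cache:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Write nBuffers buffers of bufSize bytes to path and return MBytes/s.
// Small volumes mostly hit the page cache, so for a real disk number
// the file should be larger than the machine's RAM.
double writeThroughputMBs(const char* path, size_t bufSize, int nBuffers) {
    std::vector<char> buf(bufSize, 0x55);             // dummy image data
    FILE* f = std::fopen(path, "wb");
    if (!f) return -1.0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < nBuffers; ++i)
        std::fwrite(buf.data(), 1, buf.size(), f);
    std::fclose(f);
    auto t1 = std::chrono::steady_clock::now();
    double seconds = std::chrono::duration<double>(t1 - t0).count();
    return bufSize * double(nBuffers) / (1024.0 * 1024.0) / seconds;
}
```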

Rene

Thanks for your answer.

It would be around 1 Gigabyte/s. The number of images would be between 100/s and 1000/s; the source is something like a fluorescence microscopy camera. The image data should be written uncompressed as raw data to disk. The storage system is not yet decided and will be dimensioned based on the required performance. During the data acquisition a more or less live image should be shown.
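As a sanity check, those numbers fix the per-image size (decimal units; the helper function is only illustrative):

```cpp
// Per-image size implied by a fixed aggregate data rate (1 GB = 1000 MB).
double imageSizeMB(double rateGBs, double imagesPerSec) {
    return rateGBs * 1000.0 / imagesPerSec;
}
// imageSizeMB(1.0, 100.0)  -> 10 MB per image
// imageSizeMB(1.0, 1000.0) -> 1 MB per image
```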

Interesting setup.
As a first approximation, I would say that you need as many CPUs and disks as images per second.
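A minimal sketch of that idea in plain C++: stripe the images round-robin over several writer threads, each with its own output file (in a real setup each file would sit on its own physical disk; all names here are made up):

```cpp
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Distribute nImages dummy images of imageBytes each, round-robin,
// over nWriters output files; each writer runs in its own thread.
void stripedWrite(int nWriters, int nImages, size_t imageBytes,
                  const std::string& prefix) {
    std::vector<std::thread> writers;
    for (int w = 0; w < nWriters; ++w) {
        writers.emplace_back([=] {
            std::string path = prefix + std::to_string(w) + ".raw";
            FILE* f = std::fopen(path.c_str(), "wb");
            if (!f) return;
            std::vector<char> img(imageBytes, 0);       // fake image
            for (int i = w; i < nImages; i += nWriters) // this writer's share
                std::fwrite(img.data(), 1, img.size(), f);
            std::fclose(f);
        });
    }
    for (auto& t : writers) t.join();
}
```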

Rene

[quote=“brun”]Interesting setup.
As a first approximation, I would say that you need as many CPUs and disks as images per second.

Rene[/quote]

Really? I don’t think we can afford such a big system. But how could I start? Aren’t there some books or lectures on a similar topic? With Google I didn’t find anything on how to start such a task.

Writing at the rate of 1 GByte/s means writing 86 TeraBytes/day, 2.5 PetaBytes/month.
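Those figures are straight multiplication; a sketch (decimal units, 30-day month; function names are just for illustration):

```cpp
// Data volume at a sustained rate, decimal units (1 TB = 1000 GB).
double terabytesPerDay(double rateGBs) {
    return rateGBs * 86400.0 / 1000.0;           // 86400 seconds per day
}
double petabytesPerMonth(double rateGBs) {
    return terabytesPerDay(rateGBs) * 30.0 / 1000.0;
}
// terabytesPerDay(1.0)   -> 86.4 TB/day
// petabytesPerMonth(1.0) -> ~2.6 PB/month
```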
To do that, you must obviously make sure that you have the sufficient bandwidth (cpus, memory, network, disks, etc).
LHC experiments are writing data close to this speed. They have the corresponding computing infrastructure to support the load.

Rene

[quote=“brun”]Writing at the rate of 1 GByte/s means writing 86 TeraBytes/day, 2.5 PetaBytes/month.
To do that, you must obviously make sure that you have the sufficient bandwidth (cpus, memory, network, disks, etc).
LHC experiments are writing data close to this speed. They have the corresponding computing infrastructure to support the load.

Rene[/quote]

Oh sorry. We wouldn’t use the system for such a long time; I think 2-3 hours a day, and only on some days of the month. Preparing the samples takes a lot of time, as does interpreting the data. I think no more than 60 TB/month of data will be collected. Once we have interpreted the data, we can compress it and store it on another computer system.
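For a rough check of that estimate, assuming for example 3 hours of running on 5 days in a month (illustrative numbers only):

```cpp
// Monthly data volume for an intermittent duty cycle (1 TB = 1000 GB).
double terabytesPerMonth(double rateGBs, double hoursPerDay, int daysPerMonth) {
    return rateGBs * 3600.0 * hoursPerDay * daysPerMonth / 1000.0;
}
// terabytesPerMonth(1.0, 3.0, 5) -> 54 TB, consistent with "no more than 60 TB/month"
```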