In our experiment, multiple devices are read out every 100 ms; their data are transferred into buffers, which are then filled into a TTree (at the moment a couple of thousand Double_t per event). This TTree is saved to a TFile, which is read by separate instances for online and offline analysis.
The following simplified pseudo-code seems to work:
// For creating the tree
TFile* treeFile = new TFile(treeFilename, "RECREATE");
TTree* tree = new TTree("tree", "data tree");
// Add branches pointing at the addresses of the relevant buffers, e.g.
// tree->Branch("channels", buffer, "channels[nChannels]/D");

// Every time a new event comes in and all buffers have been updated:
tree->Fill();
tree->Write(0, TObject::kWriteDelete); // overwrite the previous tree header
treeFile->Flush();
Now when I open the file in another session:
TFile* dataTreeFile = new TFile(dataTreeFilename, "READ"); // Open the tree file for reading
if (!dataTreeFile->IsZombie()) { // Make sure the file opened correctly
    dataTreeFile->ls();
    if (dataTreeFile->GetListOfKeys()->Contains("tree")) { // If the file contains the tree
        TTree* tree = (TTree*)dataTreeFile->Get("tree"); // Get the tree from the file
    }
}
I get a warning that the tree had to be recovered because the file was not closed properly:
Warning in <TFile::Init>: file [filename] probably not closed, trying to recover
Info in <TFile::Recover>: [filename], recovered key TTree:tree at address [address]
Warning in <TFile::Init>: successfully recovered 1 keys
TFile** [filename]
TFile* [filename]
KEY: TTree tree;6133 data tree
This gives me the correct TTree, but the warning worries me a bit. I understand that it comes from not closing the TFile correctly; I have tried addressing this before by adding
treeFile->Close();
But this slows the system down drastically, to the point where it cannot keep up with the data stream, since the file has to be reopened so often. And since the TTree can be recovered, it seems we are not limited by read or write speeds. What would be the proper way to do this?
Thanks in advance, Anno