I have data of the form p=(p1,p2,…,pn), where p is a vector of floats. This set of points grows in time, so at time t I have t points.
Is it more efficient to store these points as an n by t matrix, a t by n matrix, or does it matter?
It depends on what you need access to in parallel. If you're working on each time slice separately, you should use a TTree with branches p1,…,pn. You can fill (and read) these values once per time slice - the TTree calls these "entries" (i.e. you'd get t entries). This is far more efficient with respect to persistency and memory consumption - but again, it only works if it's fine for you to have only one vector p(t) in memory at any given time, iterating over the time slices.
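As a rough sketch of that scheme (requires ROOT; the file name, tree name, and the fake data are just placeholders, and n=3 is picked arbitrarily):

```cpp
#include "TFile.h"
#include "TTree.h"

void write_slices() {
   // one branch per component, one entry per time slice
   float p1, p2, p3;                      // only the current slice lives in memory
   TFile f("points.root", "RECREATE");
   TTree tree("points", "one entry per time slice");
   tree.Branch("p1", &p1, "p1/F");
   tree.Branch("p2", &p2, "p2/F");
   tree.Branch("p3", &p3, "p3/F");
   for (int t = 0; t < 100; ++t) {        // t time slices -> t entries
      p1 = t;  p2 = t + 0.1f;  p3 = t + 0.2f;   // fake data
      tree.Fill();                        // one entry per slice
   }
   tree.Write();
}
```

Reading back works the same way in reverse: SetBranchAddress on each branch, then GetEntry(t) pulls exactly one time slice into memory at a time.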
Another option (especially if you don't need to apply any linear algebra operations to your data) would be to use a simple TVectorF, where you fold the n*t elements into a one-dimensional array (see e.g. TH2::Fill(x,y) for how to do that: http://root.cern.ch/root/html/src/TH2.cxx.html#TH2:Fill).
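The index folding itself doesn't need ROOT; a minimal sketch of the arithmetic, assuming the slices are stored contiguously (that layout choice is mine, not from the post):

```cpp
#include <cstddef>

// Fold the 2-D index (component i, time slice t) into the 1-D offset
// you'd use inside a flat TVectorF-style array of n*t floats.
// Assumed layout: slice t occupies offsets [t*n, t*n + n).
inline std::size_t flat_index(std::size_t i, std::size_t t, std::size_t n) {
   return t * n + i;   // slice t starts at offset t*n
}
```

So with n=3 components, component 1 of slice 2 sits at offset 2*3+1 = 7, and growing in time just means appending n more floats per slice.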