I wrote code in Python to read a ROOT file containing a list of particle information, i.e. positions, directions, and energies. The directions are in Cartesian coordinates and I want to convert them to spherical coordinates. Here is my code:
import math
import numpy as np
import ROOT

root_file = "Info_particles.root"
df = ROOT.RDataFrame("tree1", root_file)
npyFile = df.AsNumpy()

posList, eneList, dirList, rThePhiList = [], [], [], []

# 1st loop: collect per-particle positions, energies, and directions
for i in range(len(npyFile['PosX'])):
    posList.append(np.array([npyFile['PosX'][i], npyFile['PosY'][i], npyFile['PosZ'][i]]))
    eneList.append(npyFile['Ene'][i])
    dirList.append(np.array([npyFile['DirX'][i], npyFile['DirY'][i], npyFile['DirZ'][i]]))

# 2nd loop: convert each direction from Cartesian to spherical coordinates
for direct in dirList:
    r = np.linalg.norm(direct)
    theta = np.arctan2(direct[1], direct[0])  # azimuthal angle in the x-y plane
    phi = np.arccos(direct[2] / r)            # polar angle from the z axis
    rThePhiList.append([math.degrees(theta), math.degrees(phi)])
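For reference, the same conversion can also be written without a per-particle Python loop, operating on the whole arrays at once (a sketch using plain NumPy; the hard-coded arrays below stand in for the `DirX`/`DirY`/`DirZ` arrays that `AsNumpy()` returns):

```python
import numpy as np

# Stand-ins for npyFile['DirX'], npyFile['DirY'], npyFile['DirZ']
dir_x = np.array([1.0, 0.0, 0.0])
dir_y = np.array([0.0, 1.0, 0.0])
dir_z = np.array([0.0, 0.0, 1.0])

# Vectorized Cartesian -> spherical conversion over all particles at once
r = np.sqrt(dir_x**2 + dir_y**2 + dir_z**2)
theta = np.degrees(np.arctan2(dir_y, dir_x))  # azimuthal angle in the x-y plane
phi = np.degrees(np.arccos(dir_z / r))        # polar angle from the z axis
```

Since every operation here is a single NumPy call over the full array, the per-element work happens in compiled code rather than in the Python interpreter.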
The total number of particles is 6.7e6. The first
for loop takes around 13 seconds, but the second one takes about a minute. Considering that I have to read more than 100 other files, in order to make the reading faster:
Is there a way, in ROOT and Python, to speed up the conversion from Cartesian to spherical coordinates? And, in general, to speed up the reading procedure?
Is there a way to parallelize this process on a cluster of tens of cores? I tried different solutions in Python that didn't work. Python makes it easy to parallelize with multiple processes, but there is no straightforward way to parallelize with multithreading across different cores.
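As a sketch of the multi-process route, one natural unit of parallelism here is the file: each worker process opens one ROOT file, reads it, and returns the converted angles. The `process_file` function below is hypothetical; its dummy arrays stand in for the real per-file `RDataFrame`/`AsNumpy` reading.

```python
import numpy as np
from multiprocessing import Pool

def process_file(filename):
    # Hypothetical per-file worker: in the real code this would open the
    # ROOT file, call AsNumpy(), and convert the direction arrays.
    # Dummy single-particle arrays stand in for the real data here.
    dir_x, dir_y, dir_z = np.array([1.0]), np.array([0.0]), np.array([0.0])
    r = np.sqrt(dir_x**2 + dir_y**2 + dir_z**2)
    theta = np.degrees(np.arctan2(dir_y, dir_x))  # azimuthal angle
    phi = np.degrees(np.arccos(dir_z / r))        # polar angle
    return filename, theta, phi

if __name__ == "__main__":
    files = ["Info_particles_%d.root" % i for i in range(4)]
    # One file per worker process; results come back in input order
    with Pool(processes=4) as pool:
        results = pool.map(process_file, files)
```

Because each worker is a separate process, this sidesteps the GIL limitation that blocks CPU-bound multithreading in Python; on a cluster, a batch system or a library such as Dask can distribute the same per-file function across nodes.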
Thanks in advance for your time.