Dear ROOT users, I’m using Eve to visualize geometry and events with projections and multi-view, much like alice_esd.C, but I import the geometry from a GDML file instead of using extracted shapes.
To learn about projections, I modified projection.C, replacing the geometry element “TEveGeoShapeExtract*” with “TEveGeoTopNode*”, but the projection axes didn’t show up. I can see the axes element in the hierarchy, but it does not appear in the viewer.
Can projections work with GDML models? How can I solve this problem?
By the way, I saw that “TEveGeoShapeExtract” is used in many tutorials; is there any tutorial showing how to create objects of this class? TGeoManager seems to close the geometry immediately after the GDML is imported; can I still extract shapes after the geometry is closed?
Hmmh, TEveGeo[Top]Node is not projectable, so I don’t think you’ll see anything useful in there. Also, projections always project into the x-y plane, and that is what is shown in projected views. So, while this might work for a view along the z axis, it won’t work in general. It’s possible the axes are not drawn because the bounding box is not known.
Thanks to you I have projections now, but I am confused by the “RhoZ” projection. While I can see that “RPhi” somehow represents the top view, I can’t figure out the coordinate system of “RhoZ” and how the projection is done.
I am trying to show the top view and side view of the geometry. It seems to me that the projection angle is determined in the constructor of TEveProjectionManager, and it only provides the “RhoZ” and “RPhi” options.
How can a side view or front view be done? (One way I can think of is to use the RPhi projection with a rotated geometry, but that doesn’t seem smart and might cause trouble when it comes to event display.)
Rho is the cylindrical/transverse radius of each point, and the upper (+y) and lower (-y) hemispheres are shown in the top and bottom parts of the view. So it’s not really a side view.
If you want a side view, you can get by just using a 3D view with an orthographic camera.
Alternatively, if you also want the advanced features of projections (fish-eye and linear scaling), one would have to implement the specific views as projection types. This should be rather simple for, say, an x-z top view.
It’s a plastic scintillator detector aimed at reconstructing muon tracks. It’s very similar to a “Jenga game” where each block is a plastic scintillator. In general it’s cubic, so I think a simple x-z view or x-y view would be great.
And yes, I want the advanced features of projections such as axes and scaling; these features help display the muon track more accurately.
I have no idea how to implement view types; are there any examples?
RPhi is the x-y view. For x-z one would have to implement a new projection type. Look at TEveProjections.h/cxx and see how TEveRPhiProjection is implemented; you more or less just need to copy the class and implement the ProjectPoint() method for TEveRZProjection. You might also need to add a new value to the enum in TEveProjectionManager and to the enum-to-text translation in TEveProjectionManagerEditor.cxx.
Well, I think I can get the hang of the projection system, thanks to the high-quality, well-documented source code.
I take it that I have to make my own subclass of TEveProjection and implement ProjectPoint() by copying the code.
But say I already have a “TEveXXXProjection” subclass: how can I tell TEveProjectionManager to use my subclass? As far as I can see, TEveProjectionManager chooses which subclass to use according to the enum EPType_e in a “switch” statement in its SetProjection(); I can neither add a new case nor replace an unused case.
Oh, sorry, I was not clear … you would make the change in the Eve sources directly and make a pull request (or I can review your branch beforehand). So we would just add your new projection type to the enum.
And yes, you make a new subclass of TEveProjection; you can just put it into TEveProjections.h/cxx.
This screenshot shows the RPhi view and the XZ view, with its ProjectPoint() definition on the right. In ProjectPoint(), I just rotate the points around the x-axis before doing the “RPhi” projection.
As you can see, the image of the XZ projection is squeezed in the vertical direction. If I increase the rotation angle to Pi()/2, it gets so squeezed that it disappears. Judging from the axes, the dimensions of the projection are correct. Also, the squeezing only starts once the rotation angle reaches somewhere around Pi()/4.
I’d actually expect you could get by with just:

    x = x;
    y = z;
    z = d; // depth
ProjectPoint() takes the 3D point and returns its coords in the projected space.
To apply the various pre-scaling factors for R (as is done for RPhi), just use x and z to calculate the radius (and ignore y altogether). The phi here is phi in projected space. The fish-eye distortion itself works in the projected x-y space; it doesn’t care about radii and phis.
Image size is calculated from the bounding boxes of the contents. If you add things into the scene after the view is initialized, you have to reset the camera (or double-click into the window).
Thank you for the tips, I think I can understand it.
Actually, my first attempt was pretty much what you say: swapping y and z while copying the pre-scaling code from the original RPhi version. It turned out to be the “squeezed to disappear” case, so I decided to rotate it bit by bit to find out what was going on.
I think I should check the simple solution again, and if it doesn’t work, I’ll start looking into the image parts, which could take a while.
Yes, you were rotating into the projected ‘z’, and so you started to “lose” some volume; the projected view is x-y only, it’s a 2D thing. z is used for layering of objects (depth, as in PDF or SVG documents) … so you can put the geometry outline in the back and tracks and hits in the front.
I’m actually having a hard time locating the problem. Though I’m not very sure, I see the projection process as:
1: Project 3D elements into 2D projected elements with the help of ProjectPoint() and its variants.
2: Draw graphics according to the 2D elements.
Since the dimensions of the geometry in the squeezed projection are still correct judging from the axes (in the earlier reply), I think ProjectPoint(), whether using rotation or axis swapping, is doing its job correctly. So step 1 is probably fine.
As for step 2, I didn’t find anything that might break the x-y proportions.
What did I miss?
(PS: since the rotation is y' = cos(t)*y + sin(t)*z, maybe rotating and swapping are equivalent when t = 90 deg.)
Axis drawing also uses ProjectPoint() to determine the locations of the tick marks.
Now, rotation with t = pi/2 should be similar (up to a sign, maybe) to swapping y and z. Remember, whatever you rotate into z gets removed (it is not seen, and z is reserved for the depth of the projected object); that’s why it gets shrunk.
I see at the side of the screenshot, under if (prescale), z = …; this should probably also be y = …
Do you have your code in your GitHub clone of ROOT so I can look at it? You could also make a simple demo with your geometry so I can look at it (or I could use the existing projection demos and just flip the projection type).