Axes don't show up in the projection of an imported GDML geometry in Eve

Dear ROOT users, I’m using Eve to visualize geometry and events with projections and multi-view, just like alice_esd.C, but I import the geometry from a GDML file instead of using extracted shapes.

To learn about projections, I modified projection.C, replacing the geometry element “TEveGeoShapeExtract*” with “TEveGeoTopNode*”, and the projection axes didn’t show up. I can see the axes element in the hierarchy, but it is not drawn in the viewer.

Can projection work with gdml models? How can I solve this problem?

By the way, I saw that “TEveGeoShapeExtract” is used in many tutorials; is there a tutorial showing how to create objects of this class? TGeoManager seems to close the geometry immediately after the GDML is imported. Can I still extract shapes after the geometry is closed?

Maybe @matevz can help you.


Hmmh, TEveGeo[Top]Node is not projectable, so I don’t think you’ll see anything useful there. Also, projections always project into the x-y plane, and that is what is shown in projected views. So, while this might work for a view along the z axis, it won’t work in general. It’s possible the axes are not drawn because the bounding box is not known.

To export a geometry extract, create a 3D view of the geometry using TEveGeoTopNode. Then use the list-tree view widget to expand the elements you want to see (only elements expanded into the list-tree will be saved), and set up their visibility and colors/transparencies. When done, call SaveExtract on the top node:
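The original post likely included the exact call here; a minimal sketch of the idea (file and object names like `geo_extract.root` / `MyGeo` are placeholders) could look like this in a ROOT session:

```cpp
// ROOT macro sketch; assumes gGeoManager already holds the imported GDML
// geometry and Eve has been started with TEveManager::Create().
TEveGeoTopNode* tn = new TEveGeoTopNode(gGeoManager, gGeoManager->GetTopNode());
gEve->AddGlobalElement(tn);
gEve->Redraw3D();
// ... expand the desired nodes in the list-tree, set visibility/colors ...
tn->SaveExtract("geo_extract.root", "MyGeo", kFALSE);
```

This runs in the ROOT interpreter, so it needs an interactive Eve session rather than a standalone build.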

This will create a ROOT file which you can then load back using the static constructor TEveGeoShape::ImportShapeExtract:
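A sketch of the loading side, mirroring what the projection tutorials do (file and key names must match what was passed to SaveExtract; they are placeholders here):

```cpp
// ROOT macro sketch: read the extract back and register it with Eve.
TFile* f = TFile::Open("geo_extract.root");
TEveGeoShapeExtract* gse = (TEveGeoShapeExtract*) f->Get("MyGeo");
TEveGeoShape* gs = TEveGeoShape::ImportShapeExtract(gse, 0);
f->Close();
gEve->AddGlobalElement(gs);
gEve->Redraw3D();
```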

These TEveGeoShapes will be properly projectable, so you can simply register the top-level element into the corresponding projection manager.

Alternatively, you can build your own hierarchy of TEveGeoShapes by extracting the TGeoShapes and corresponding transformation matrices from TGeo, and then calling TEveGeoShape::SaveExtract on the root node of your hierarchy.
Here’s an example of how this is done for elements with a known geo-path:
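The linked example is not reproduced in this thread; a hedged sketch of the technique (the helper name and the idea of passing a detector path like `"/TOP_1/DET_1"` are illustrative, not from the original):

```cpp
// ROOT macro sketch: build a TEveGeoShape from a node at a known geo-path.
// Assumes gGeoManager holds the (possibly closed) geometry.
TEveGeoShape* MakeShapeFromPath(const char* path, TEveElement* parent)
{
   if (!gGeoManager->cd(path)) return 0;           // navigate to the node
   TGeoNode* node = gGeoManager->GetCurrentNode();
   TEveGeoShape* es = new TEveGeoShape(node->GetName());
   // Clone the shape so Eve owns its own copy.
   es->SetShape((TGeoShape*) node->GetVolume()->GetShape()->Clone());
   // Use the accumulated global transformation of the current path.
   es->RefMainTrans().SetFrom(*gGeoManager->GetCurrentMatrix());
   es->SetMainColor(node->GetVolume()->GetLineColor());
   parent->AddElement(es);
   return es;
}
```

Note that this only navigates the geometry; it works after the geometry has been closed, which addresses the earlier question about extracting shapes post-import.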



It worked! Thanks very much!


Yay, I’m glad you got it to work … it did sound a bit complicated after I wrote it all down :slight_smile:


Thanks to you I have projections now, but I am confused by the “RhoZ” projection. Although I can see that “RPhi” somehow represents the top view, I can’t figure out the coordinate system of “RhoZ” and how the projection is done.
I am trying to show the top view and side view of the geometry. It seems to me that the projection type is determined in the constructor of the projection manager, and it only provides the “RhoZ” and “RPhi” options.
How can a side view or front view be done? (One way I can think of is to use the RPhi projection with a rotated geometry, but that doesn’t seem smart and might cause trouble when it comes to event display.)

For RhoZ take a look at this:

Rho is cylindrical/transverse radius of each point and upper (+y) and lower (-y) hemispheres are shown in the top / bottom part of the view. So it’s not really a side view.

If you want a side view, you can get by with just a 3D view and an orthographic camera.
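For example, switching the active GL viewer to an orthographic side view can be done along these lines (a sketch; which `kCameraOrtho*` constant gives the view you want depends on your detector’s orientation):

```cpp
// ROOT macro sketch: use an orthographic camera for a fixed side view.
TGLViewer* v = gEve->GetDefaultGLViewer();
v->SetCurrentCamera(TGLViewer::kCameraOrthoXOZ); // or kCameraOrthoZOY, kCameraOrthoXOY
v->ResetCurrentCamera();                          // frame the scene bounding box
```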

Alternatively, if you also want the advanced features of projections (fish-eye and linear scaling), one would have to implement the specific views as new projection types. This should be rather simple for, say, an x-z top view.

What kind of experiment / detector do you have?


It’s a plastic scintillator detector aiming to reconstruct muon tracks. It’s very similar to a “Jenga” game, where each block is a plastic scintillator. Overall it’s cubic, so I think a simple x-z or x-y view would be great.

And yes, I want the advanced features like axes and scaling of projections; these features help display the muon tracks more accurately.

I have no idea how to implement view types; are there any examples?

Many thanks.

RPhi is the x-y view. For x-z one would have to implement a new projection type. Look at TEveProjections.h/cxx and see how TEveRPhiProjection is implemented; you more or less just need to copy the class and implement the ProjectPoint method, as TEveRhoZProjection does. There might also be a new entry needed in the projection-type enum and in the enum-to-text translation in TEveProjectionManagerEditor.cxx.
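A skeleton of such a class might look like this; the class name `TEveXZProjection` and enum value `kPT_XZ` are hypothetical additions, and the constructor details would follow the real TEveRPhiProjection rather than this sketch:

```cpp
// Sketch of a new projection type modeled on TEveRPhiProjection.
class TEveXZProjection : public TEveProjection
{
public:
   TEveXZProjection() : TEveProjection() { fName = "XZ"; }

   virtual void ProjectPoint(Float_t& x, Float_t& y, Float_t& z,
                             Float_t d, EPProc_e proc = kPP_Full)
   {
      // Map 3D (x, y, z) to projected (x', y') = (x, z).
      // The projected z slot is reserved for depth/layering.
      y = z;
      z = d;
      (void) proc; // fish-eye distortion and pre-scaling omitted in this sketch
   }
};
```

The pre-scaling and distortion code from TEveRPhiProjection would then be applied using x and z (the drawn pair), as discussed later in this thread.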

Do you feel up to the task to give it a try?


Well, I think I can get the hang of the projection system, thanks to the high-quality, well-documented source code.

I take it that I have to make my own subclass of TEveProjection and implement ProjectPoint() based on the existing code.

But let’s say I already have a “TEveXXXProjection” subclass; how can I tell the TEveProjectionManager to use it? As far as I can see, TEveProjectionManager chooses which subclass to use according to the enum EPType_e in a switch statement in its SetProjection(), and I can neither add a new case nor replace an unused one.


Oh, sorry, I was not clear … you would make the change directly in the Eve sources and open a pull request (or I can review your branch beforehand). So we would just add your new projection type to the enum.

And yes, you make a new subclass of TEveProjection, you can just put it into TEveProjections.h/cxx.


OK, I can give it a shot.
I’ll let you know after I finish it (or when I encounter further problems).


Hi matevz, I’ve met a problem.

This screenshot shows the RPhi view and the XZ view, with its ProjectPoint() definition on the right. In ProjectPoint(), I just rotate the points around the x axis before doing the “RPhi” projection.

As you can see, the image of the XZ projection is squeezed in the vertical direction. If I increase the rotation angle towards Pi()/2, it becomes so squeezed that it disappears. Judging from the axes, the dimensions of the projection are correct. Also, the squeezing only shows up once the rotation angle reaches somewhere around Pi()/4.

What controls the width and height of the image?


(it seems the image is missing in your previous post)

I uploaded it instead, thanks.

Hi, sorry it took me so long to respond.

I’d actually expect you could get by with just doing:
x = x;
y = z;
z = d; // depth

ProjectPoint() takes the 3D point and returns its coords in the projected space.

To apply the various pre-scaling factors for R (as is done for RPhi), just use x and z to calculate the radius (and ignore y altogether). The phi here is phi in projected space. The fish-eye distortion itself works in the projected x-y space; it doesn’t care about radii and phis.

Image size is calculated based on the bounding boxes of the contents. If you add things into the scene after the view is initialized, you have to reset the camera (or double-click into the window):
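The exact call referenced above is not shown in the thread; a sketch of how one might do it programmatically:

```cpp
// ROOT macro sketch: make the viewer recompute its camera from the
// current scene bounding box after adding new elements.
TGLViewer* v = gEve->GetDefaultGLViewer();
v->UpdateScene();
v->ResetCurrentCamera();
```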

I hope this all makes sense :slight_smile:

Thank you for the tips, I think I can understand it.
Actually, my first attempt was pretty much what you describe, swapping y and z while copying the pre-scaling code from the original RPhi version, and it turned out to be the “squeezed to disappear” case, so I decided to rotate it bit by bit to find out what was going on.

I think I should check the simple solution again, and if it doesn’t work, I’ll start looking into the image parts, which could take a while :frowning:


Yes, you were rotating into the projected ‘z’, and so you started to “lose” some volume: the projected view is x-y only, it’s a 2D thing. z is used for layering of objects (depth, as in PDF or SVG documents) … so you can put the geometry outline in the back and tracks and hits in the front.


So, swapping the y and z axes still causes the squeezing problem (with or without the pre-scaling parts).

I’m actually having a hard time locating the problem. Though I’m not very sure, I see the projection process as:
1: Interpret 3D elements as 2D projected elements with the help of ProjectPoint() and its variants.
2: Draw graphics according to the 2D elements.

Since the dimensions of the geometry in the squeezed projection are still correct, judging from the axes (in my earlier reply), I think ProjectPoint(), using either rotation or axis swapping, is doing its job correctly. So maybe step 1 is fine.

As for step 2, I didn’t find anything that might break the x-y proportions.

What did I miss :dizzy_face:


(PS: since the rotation operation is y = cos(t)*y + sin(t)*z, maybe rotating and swapping are equivalent at t = 90 deg.)

Axis drawing also uses ProjectPoint to determine the locations of the tick marks :wink:

Now, rotation with t = pi/2 should be similar (up to sign, maybe) to swapping y and z. Remember, what you rotate into z gets removed (it is not seen, since z is reserved for the depth of the projected object); that’s why the image gets shrunk.
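This equivalence, and the gradual "squeeze", can be checked in plain C++ without ROOT; `projectSwap` and `projectRotate` below are hypothetical stand-ins for the two ProjectPoint variants discussed:

```cpp
#include <cmath>

// Minimal stand-ins for the two ProjectPoint variants discussed above.
// Only the returned (x, y) pair is drawn; the projected z is depth.
struct P { double x, y, z; };

// Swap variant: the drawn coordinates become (x, z).
P projectSwap(P p) { return {p.x, p.z, 0.0}; }

// Rotation variant: rotate about the x axis by angle t, keep drawn (x, y).
// The component rotated into z vanishes into depth, so for 0 < t < pi/2
// the original y extent is progressively "squeezed" out of the image.
P projectRotate(P p, double t)
{
    return {p.x, std::cos(t) * p.y + std::sin(t) * p.z, 0.0};
}
```

At t = pi/2 the rotation gives y' = z, exactly the same drawn image as the swap; at intermediate angles part of the y extent hides in depth, which matches the shrinking described above.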

I see on the side of the screenshot, under if (prescale), z = …; this should probably be y = … as well.

Do you have your code in your GitHub clone of ROOT so I can look at it? You could also make a simple demo with your geometry so I can look at it (or I could use the existing projection demos and just flip the projection type).