# Arbitrary 3d solids in ROOT

I’m wondering if ROOT can create arbitrary 3d objects. We’d like to recreate, as a solid, the target of a scan and examine cut-aways. We’ll be plotting the solid in xyz with a value of interest as color at that point. As well, we’ll be selecting points of interest on the 3d solid.

I had a difficult time figuring out how to do this from the documentation. I can see how to create 3D objects from various predefined solids, but not from an arbitrary set of xyz points defining a solid.

Do you mean something like this? en.wikipedia.org/wiki/Volume_rendering

Is your data the result of a scan?

However, if all you need is a simple surface, you might get by using EVE and its triangle-set class. You have to provide the triangulation of the surface yourself.

See the demo in $ROOTSYS/tutorials/eve/triangleset.C

Some further development will be needed to select individual triangles (or vertices), but it is within the scope of the existing rendering/interaction framework.
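To make the "provide the triangulation yourself" part concrete, here is a rough, ROOT-free sketch of the kind of data a triangle-set class such as TEveTriangleSet consumes: a flat vertex array plus three indices per triangle. The grid surface and the helper name `triangulateGrid` are invented for illustration; they are not part of the ROOT API.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Triangulate a regular n x n grid of surface points into flat
// vertex / triangle-index arrays (two triangles per grid cell).
// The z values here are an arbitrary placeholder "scan" height.
void triangulateGrid(int n,
                     std::vector<Vec3>& verts,
                     std::vector<int>&  tris)   // 3 indices per triangle
{
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            verts.push_back({ (float)i, (float)j, (float)(i * j) * 0.1f });

    for (int j = 0; j < n - 1; ++j)
        for (int i = 0; i < n - 1; ++i) {
            int v0 =  j      * n + i;       // cell corners
            int v1 =  j      * n + i + 1;
            int v2 = (j + 1) * n + i;
            int v3 = (j + 1) * n + i + 1;
            tris.insert(tris.end(), { v0, v1, v2 });
            tris.insert(tris.end(), { v1, v3, v2 });
        }
}
```

An n = 4 grid yields 16 vertices and 18 triangles; arrays like these would then be handed to the triangle-set object shown in the tutorial macro.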

Can you tell us a bit more about your input data and what kind of algorithms you already apply on it?

Best,
Matevz

Yes, Timur, I do mean volume rendering similar to the link. My data will come from a scan. I believe the scan will measure density. We will use CT to reconstruct the target, and allow the user to then select points to perform analysis or further (more time consuming) scans.

I don’t have an example of the data I expect to get. I believe I will receive x, y, and z as floats, plus another float for the density value. I expect a rough maximum of 6 million of these points, depending on the scanning parameters. Since this is a CT scan, the raw data set is much larger than what I will receive for displaying the solid.
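As a sketch of how such scattered points could be prepared for volume display, here is a minimal voxelization pass in plain C++. The `ScanPoint` layout and the `voxelize` helper are assumptions based on the four floats described above, not an existing ROOT interface; in ROOT the resulting grid could equally well be filled into a TH3.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical record layout: four floats per scanned point.
struct ScanPoint { float x, y, z, density; };

// Bin scattered scan points into an n*n*n voxel grid over [0,1]^3,
// averaging the densities of the points that land in each voxel.
// A volume renderer (or a TH3) would then consume this grid.
std::vector<float> voxelize(const std::vector<ScanPoint>& pts, int n)
{
    std::vector<float> sum(n * n * n, 0.f);
    std::vector<int>   cnt(n * n * n, 0);
    for (const ScanPoint& p : pts) {
        int i = std::min((int)(p.x * n), n - 1);
        int j = std::min((int)(p.y * n), n - 1);
        int k = std::min((int)(p.z * n), n - 1);
        int idx = (k * n + j) * n + i;
        sum[idx] += p.density;
        ++cnt[idx];
    }
    for (int v = 0; v < n * n * n; ++v)
        if (cnt[v]) sum[v] /= cnt[v];     // average per occupied voxel
    return sum;
}
```

Six million points through a pass like this reduce to a fixed-size grid, which is what texture-based volume rendering needs anyway.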

Because we’re measuring density through the target, we’ll be able to calculate densities of features within the target. The ability to cut-away/clip a quadrant of the solid will be an essential tool of analysis.
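The cut-away itself can be thought of as a clip predicate applied per point or per voxel before drawing (ROOT's GL viewer also offers interactive clip planes and boxes). A minimal sketch, with a made-up helper name:

```cpp
// Remove the +x/+y quadrant (all z) so the interior becomes visible.
// (cx, cy) is the corner of the removed quadrant.
bool clippedAway(float x, float y, float z, float cx, float cy)
{
    (void)z;                       // the cut runs through all z
    return x > cx && y > cy;
}
```

Any point for which `clippedAway` returns true is simply skipped when building the rendered volume or surface.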

Matevz, unfortunately I can’t tell you more about the data or algorithms already applied on it. I’m new to the project. I was brought in to assist with the GUI, which includes visualization (as you can tell, I’m also new to visualization).

OK, very good, then Timur is your man, at least for the start.

I can help later with interaction techniques / objects.

What is the time-scale for the project? Are you (or somebody else) also willing to participate in development?

Well, modern volume rendering with GL requires a good GPU, a lot of texture memory, and a recent OpenGL library with good drivers. Matevz and I are planning to enable OpenGL 2.0 usage in ROOT's GL layer; this will give us GLSL, i.e. vertex and fragment shaders, which makes volume-rendering visualisation possible in ROOT. The question is how to proceed.
What I tried a year ago was a simple texture-based volume renderer.
I ran into several problems with volume rendering:

1. How to find a good transfer function.
2. How to model light transport in a medium (to get a nice, realistic picture).

These are all solved problems, but to get real working code out of muddy and intentionally obfuscated articles I would have had to solve them again myself. Since nobody except me was interested, I stopped )

In principle, we can have a basic volume-rendering framework inside ROOT, with all the interesting work done by shaders. The shaders can differ for each type of volume rendering, and writing them is then up to the user )) GLSL is a really nice and small language, pleasant to use )
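To make the transfer-function problem concrete, here is a toy 1D transfer function in plain C++; in a GLSL renderer the same mapping would typically be a 1D texture lookup in the fragment shader. The break points are invented, and finding good ones for real data is exactly the hard part mentioned above.

```cpp
#include <array>

// Map a normalized density in [0,1] to RGBA: low densities render as
// transparent blue, high densities as opaque red. Purely illustrative.
std::array<float, 4> transfer(float d)
{
    d = d < 0.f ? 0.f : (d > 1.f ? 1.f : d);   // clamp to [0,1]
    float r = d;                               // red ramps up
    float b = 1.f - d;                         // blue ramps down
    float a = d * d;                           // opacity grows quadratically
    return { r, 0.f, b, a };
}
```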

Once you install the QtRoot plugin (root.bnl.gov/QtRoot/How2Install4Unix.html) you should be able to
install and use star.bnl.gov/public/comp/vis/StarEvent.html