Questions about User Defined Numeric Types

I have a User Defined Numeric Type (UDNT) that I’ve been trying to modify to work with ROOT, and I have some questions about optimizing the user experience in the interpreter. My type is a 16-bit IEEE 754-compatible floating point type, based on the half class from the IlmBase library. The class and my modifications work flawlessly in compiled code (gcc and icc on x86 Linux). I would like it to behave as much like a float in the interpreter as possible, and I have run across two issues:

  1. Displaying the value in the interpreter

When an expression of built-in (primitive) type is evaluated by the interpreter, the value of that expression is printed to the display:

root [4] int i(2)
root [5] i
(int)2

For class types, this is not the case:

root [2] half h(2)
root [3] h
(class half)149229480

The output appears to me to be a pointer to the evaluated instance. Is there some function I can implement to get the behavior of primitive types here? For instance (hypothetically speaking) something like “formatValue(char *buf, half& h)” to obtain

root [2] half h(2)
root [3] h
(class half)2.00000e+00

  2. UDTs and conversion operators to primitive types

The interpreter silently and incorrectly allows the assignment of class types to primitive types. This is generally no more than annoying:

root [9] TTree t
root [10] int i = t
root [11] i
(int)151730928
root [12] t
(class TTree)151730928

For a UDNT, this is a disaster:

root [13] half h(2)
root [14] double d = h
root [15] d
(double)1.51499160000000000e+08
root [16] h
(class half)151499160

For interoperability reasons I would like to permit such conversions, but I would like to restrict them somewhat, as good class design dictates:

class half {
public:
    half();
    explicit half(float f);
    operator float() const;
};

In compiled code, this allows correct conversions to all primitive types through operator float(). Within the interpreter, however, the conversion still fails:

root [22] half h(2)
root [23] float f = h
root [24] f
(float)2.00000000000000000e+00
root [25] double d = h
root [26] d
(double)1.51506336000000000e+08
root [27] h
(class half)151506336
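
For contrast, here is the behavior the class delivers when compiled, as a minimal self-contained sketch. The 16-bit internals are replaced with a plain float member here, so this is illustrative only:

#include <cstdio>

// Illustrative stand-in for half: stores a float instead of the real
// 16-bit representation, but exposes the same conversion interface.
class half {
public:
    half() : v(0.0f) {}
    explicit half(float f) : v(f) {}
    operator float() const { return v; }
private:
    float v; // placeholder for the 16-bit IEEE754 storage
};

int main() {
    half h(2.0f);
    float  f = h; // uses operator float() directly
    double d = h; // half -> float, then promoted to double
    int    i = h; // half -> float, then converted to int
    std::printf("f=%f d=%f i=%d\n", f, d, i); // prints f=2.000000 d=2.000000 i=2
    return 0;
}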

I can work around this in the interpreter by explicitly supplying ALL of the conversion operators (an “operator T() const” for every primitive type T; see the sketch below), but I wonder if there is another way to suppress this behavior? I know from searching the Forum and documentation that there are many known issues with CINT and conversion operators like this, but I wasn’t able to find a complete list or a set of workarounds anywhere.
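
Concretely, that workaround looks like this (declarations only; one conversion operator per primitive target, so the interpreter never has to chain through operator float()):

class half {
public:
    half();
    explicit half(float f);
    // One conversion operator per primitive target type:
    operator float()  const;
    operator double() const;
    operator int()    const;
    operator long()   const;
    // ...and so on for the remaining primitive types
};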

Your comments would be appreciated.

Hi,

Implement G__ateval() like this:

struct L { int l; };

int G__ateval(const L& l) {
   printf("L: l=%d\n", l.l);
   return 1; // non-zero: CINT suppresses its default display of the value
}
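
For your half class, the same hook would look something like this (an untested sketch; it simply routes the value through your existing operator float()):

int G__ateval(const half& h) {
   printf("(class half)%e\n", (float)h);
   return 1; // non-zero: we have printed the value ourselves
}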

[quote=“krlynch”]2) UDTs and conversion operators to primitive types

I can work around this by explicitly supplying ALL of the conversion operators (“operator T() const”), but I wonder if there is another way to suppress this behavior?[/quote]
I am afraid implementing all the conversions is the only workaround I can offer for now. The conversions are a known issue in CINT; I agree that we need to improve CINT’s conversion analysis.

Cheers, Axel.