# High precision time? (Micro, possibly nanoseconds)

I’m trying to measure how long it takes for my program to go through a certain function. Currently it looks something like this:

``````
void function() {
    clock_t start_s = clock();
    // code here
    clock_t stop_s = clock();
    double time = (stop_s - start_s) / double(CLOCKS_PER_SEC) * 1000;
    cout << time << endl;
}
``````

However, this gives millisecond precision only; I need microseconds, possibly even nanoseconds. I cannot use `<chrono>`, as it's part of C++11, and I need something that works with C++98.

Any ideas?
Thanks!

If you're on POSIX, then `clock_gettime(3)` and `struct timespec` (from `<time.h>`) should provide the needed precision:

``````
int clock_gettime(clockid_t clk_id, struct timespec *tp);

struct timespec {
    time_t tv_sec;   /* seconds */
    long   tv_nsec;  /* nanoseconds */
};
``````

hth,
-s


Hi lawlieto,
being limited to C++98 in 2017 does sound painful.
Note, however, that the resolution of `clock` depends on your system, not on the C++ standard used: `CLOCKS_PER_SEC` only defines the units in which `clock()` reports, while the actual granularity is implementation-dependent.

If you post here, you are probably looking for a ROOT solution. Unfortunately, ROOT 6 requires at least C++11.
So this leaves us with ROOT 5. The only facility I know of is `TStopwatch`, but the docs do not say anything about its resolution (results are returned in seconds, as a `double`).

A different approach might be executing the function you want to time N times and then dividing the measured time by N. This is usually a better approach anyway, as it also averages over fluctuations of the runtime due to factors external to your function, and it lets you resolve per-call times well below the timer's resolution.

Cheers,
Enrico


TStopwatch uses the libc function `times` which has CLK_TCK resolution.


hi eguiraud,

thanks, it seems that the highest precision you can get with `TStopwatch` is microseconds. Unfortunately it doesn't seem to work on my MacBook, but I solved the issue by downloading the software I need from the remote server where I work; it's installing now (it will take some time), and then I'll just use a system profiler on my laptop, which should be accurate.

The real suggestion was to execute the workload N times and then divide the measured time by N.

In any case, if you found what you were looking for, you can pick an answer and mark it as the solution for the thread.

I was planning to plot a histogram of the run times (the function was called around 10,000+ times). Anyway, I'll choose your post. Thanks!
