What is the correct way of updating graphs on a real-time stream?

After reading this, which is the only source I’ve been able to find online: Real time plotting in C++ using ROOT – mightynotes

This is the class I instantiate in a loop that is pulling real-time data. It seems to work fine, but after some time ROOT slows down for some reason: the data stream first starts drifting and then lagging. Sometimes it recovers from the lag and gets back in sync, sometimes it doesn’t and keeps lagging. I am running two processes of the same app, one with ROOT and one without, and the one without ROOT works just fine. Is this the correct way to update a graph in a real-time scenario? What am I doing wrong? The stream is not even that heavy, just 10 to 20 points per second, so I doubt it’s a performance problem. NOTE: I’m running this over SSH with X11 forwarding on my local network; I don’t know if this influences it.

#include "TApplication.h"
#include "TCanvas.h"
#include "TGraph.h"
#include "TSystem.h"
#include <boost/iterator/counting_iterator.hpp>
#include <memory>
#include <string>
#include <vector>

using namespace std;

// dummy command-line arguments; the real ones are not forwarded here
int argc = 0;
char **argv = nullptr;

// one global TApplication so ROOT's graphics and event machinery exists before any canvas
TApplication theApp("App", &argc, argv);


namespace RootInterface
{
    class TrRootApp
    {
    public:
        unique_ptr<TCanvas> c1;
        unique_ptr<TGraph> gr1;
        unique_ptr<TGraph> gr2;
        unique_ptr<TGraph> gr3;

        // one canvas divided into three stacked pads, with one empty graph drawn in each pad
        TrRootApp(const char *m)
        {
            this->c1 = make_unique<TCanvas>(m, m, 200, 10, 700, 500);
            this->gr1 = make_unique<TGraph>();
            this->gr2 = make_unique<TGraph>();
            this->gr3 = make_unique<TGraph>();

            c1->Divide(1, 3);

            c1->cd(1);
            gr1->Draw("apl");

            c1->cd(2);
            gr2->Draw("apl");

            c1->cd(3);
            gr3->Draw("apl");
        }
        // copy the current buffer (TradeBuffer is defined elsewhere in the app) into the
        // three graphs and refresh the canvas
        void graph(TradeBuffer &buff, string &timebuff)
        {
            // x values are simply the sample indices 1..N
            std::vector<int> buffersize(boost::counting_iterator<int>(1), boost::counting_iterator<int>(buff.t.size() + 1));

            for (int i = 0; i < (int)buffersize.size(); i++)
            {
                gr1->SetPoint(i, buffersize[i], buff.t[i]);
                gr2->SetPoint(i, buffersize[i], buff.n[i]);
                gr3->SetPoint(i, buffersize[i], buff.b[i]);
            }
            // the title does not change per point, so set it once per refresh
            gr1->SetTitle(timebuff.c_str());

            c1->Draw();
            c1->Update();

            // flush pending GUI/X11 events so the canvas actually repaints
            gSystem->ProcessEvents();
        }
    };
} // namespace RootInterface
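
For context, this is roughly the loop I drive it from (a simplified sketch: receive_message() and fill_buffer() are placeholders for my own socket and parsing code):

// simplified driving loop: the class above is constructed once, then graph() is
// called for every message pulled off the stream; receive_message() and
// fill_buffer() are placeholders for my own socket/parsing code
int run_plot_loop()
{
    RootInterface::TrRootApp app("trades"); // one canvas + three graphs, created once

    TradeBuffer buff;
    while (true)
    {
        string message = receive_message();            // blocks until the next data point
        string timestamp = fill_buffer(buff, message); // pushes into buff.t / buff.n / buff.b
        app.graph(buff, timestamp);                    // repaint with the current buffer
    }
    return 0;
}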

ROOT Version: 6.24/06
Platform: linux
Compiler: g++10


I would guess there is a memory leak somewhere, but in the code you posted it is not obvious where. Maybe it is in the way you call this class? In a loop or something like that?
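
For example, something like this inside the loop would re-create the canvas and the three graphs for every single message (names here are only illustrative):

// anti-pattern: constructing the wrapper per message instead of once before the loop
while (true)
{
    string message = receive_message();     // placeholder for the real socket call
    RootInterface::TrRootApp app("trades"); // builds a new TCanvas + three TGraphs each time
    // ... fill the buffer and call app.graph(...) ...
}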

OK, so I went as far as creating a separate module, to clear things up and make sure nothing else in my program is introducing a leak. This module receives a message through a socket, parses it, pushes the data to a buffer, and then ROOT paints it. It has exactly the same structure as my main program, but isolated and simpler, just like in the documentation I have found online:

#include "TCanvas.h"
#include "TROOT.h"
#include "TGraphErrors.h"
#include "TF1.h"
#include "TLegend.h"
#include "TArrow.h"
#include "TLatex.h"
#include "TAxis.h"
#include "TFrame.h"
#include "TApplication.h"
#include "timescale.h"
#include <iostream>
#include <string>
#include "TMultiGraph.h"
#include "fft.h"
#include <cstdint>
#include "peakdetect.h"
#include "socket.h"
#include "conversion.h"
#include "TSystem.h"

using namespace std;

// pqxx::result const &R

void macro1(SubSocket &sock, int argc, char *argv[])
{
    TApplication theApp("App", &argc, argv);
    MessageBuffer msgb;

    TCanvas *c1 = new TCanvas("c1", "TS", 200, 10, 700, 500);
    c1->Divide(1, 3);

    auto gr1 = new TGraph();
    gr1->SetMarkerStyle(20);
    gr1->SetTitle("P");
    auto gr2 = new TGraph();
    gr2->SetTitle("Q");
    gr2->SetMarkerStyle(21);
    auto gr3 = new TGraph();

    c1->cd(1);
    gr1->Draw("apl");
    c1->cd(2);
    gr2->Draw("apl");

    // main loop: receive one message, parse it into the rolling buffer,
    // then copy the buffer into the graphs and refresh the canvas
    while (true)
    {
        string message = sock.receive();

        ParseJsonMessage jm(message);
        jm.parse_t();
        msgb.push_tp(jm);
        msgb.push_tq(jm);
        msgb.push_tevent(jm);
        msgb.push_im(jm);


        for (int i = 0; i <  msgb.etime.size(); i++)
        {
            gr1->SetPoint(i, msgb.etime[i],msgb.tpb[i]);
            gr2->SetPoint(i, msgb.etime[i], msgb.tqb[i]);
        }


        c1->cd(1);
        c1->Update();
        c1->Pad()->Draw();
        c1->cd(2);
        c1->Update();
        c1->Pad()->Draw();

        // flush the GUI/X11 event queue so the canvas actually repaints
        gSystem->ProcessEvents();
    }
}

int main(int argc, char **argv)
{
    TradeSubSocket sock("SL");
    string s = "s";
    // DbWorks db(s);
    // pqxx::result R = db.read_data();

    macro1(sock,argc,argv);


    return 0;
}

This has exactly the same outcome as the implementation in my main program: it randomly starts drifting in time until it is out of sync, and then it lags well behind the actual data stream. I honestly don’t know what’s going on. It’s not a problem with the data stream, because when I run a separate process without ROOT it works just fine: the stream stays in sync with local time and keeps going indefinitely, which is the correct behaviour. The buffer is just a 500-element buffer that gets updated as messages come in.
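
To be concrete, the buffering amounts to this (a simplified sketch; the real MessageBuffer has more fields, and the member names below just mirror the ones used above):

#include <cstddef>
#include <deque>

// fixed-size rolling buffer: keep only the newest 500 samples per series
struct RollingBuffer
{
    static const std::size_t kMaxSize = 500;
    std::deque<double> etime, tpb, tqb;

    void push(double t, double p, double q)
    {
        etime.push_back(t);
        tpb.push_back(p);
        tqb.push_back(q);
        if (etime.size() > kMaxSize) // once full, drop the oldest sample
        {
            etime.pop_front();
            tpb.pop_front();
            tqb.pop_front();
        }
    }
};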

It’s not a leak; it’s X11 being unable to handle such rates over the network. It’s a strange effect, but it makes sense: the X11 events end up in a bottleneck, and both the data stream and the program itself start lagging. So the solution, if you need real-time plotting over the network with a high data volume, is to make your program send the messages over a fast protocol (EPGM here) and run ROOT locally, so the plotting and the X11 work happen on your own computer.
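
A minimal sketch of what I mean for the sending side, using ZeroMQ’s epgm:// transport (the interface name, multicast group, port and rate below are just examples, and ZeroMQ needs to be built with OpenPGM support):

// remote side: publish each sample over PGM multicast instead of drawing through X11 forwarding
#include <zmq.h>
#include <string>

int main()
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);

    int rate = 1000;                                   // PGM rate limit in kbit/s
    zmq_setsockopt(pub, ZMQ_RATE, &rate, sizeof(rate));
    zmq_bind(pub, "epgm://eth0;239.192.0.1:5555");     // interface;multicast-group:port

    while (true)
    {
        std::string msg = "{}";                        // serialise the next sample here (e.g. JSON)
        zmq_send(pub, msg.data(), msg.size(), 0);
    }

    // never reached in this sketch
    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}

The plotting program on the local machine then just connects a ZMQ_SUB socket to the same epgm:// endpoint and runs the ROOT loop locally, so all the X11 traffic stays on my own display.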
