Problems using TGraphErrors::SetPointError

Hi all,

I’ve been given some data to plot stored in a ROOT file, in the form of a TGraphErrors (which I open in my program as upData), but the y-values (and hence y-errors) are of a different order of magnitude to the values I am plotting them against. I’ve tried implementing a for loop to simply call up the offending values, divide them by 1e9, and stick them back in as follows:

   Double_t NewErry[upData->GetN()], Errx[upData->GetN()];
   for (Int_t i = 0; i != upData->GetN(); i++) {
      NewErry[i] = (upData->GetErrorY(i))/1e9;
      Errx[i] = upData->GetErrorX(i);
      upData->TGraphErrors::SetPointError(i, Errx[i], NewErry[i]);   // Returns a segfault if included in the program
   }
   upData->TGraphErrors::SetPointError(30, 1, 1e-2);   // Also gives a segfault

The two lines indicated above, if not commented out of the macro, cause the program to terminate with a segmentation fault citing an issue with TGraphAsymmErrors::Divide (specifically, claiming that I’m passing it a null pointer, when as far as I’m aware nothing I’m doing calls that function, or anything else in that class). I’ve also tried taking the call out of the for loop and running it once with arbitrary values, but that does exactly the same thing. Commenting out all mention of SetPointError prevents the crash, but of course then I don’t change the error values either.

Any suggestions as to how I can fix this - am I calling the function incorrectly for instance?
Many thanks,


I cannot reproduce this problem. I tried with:

   Double_t x[5]  = {1,2,3,4,5};
   Double_t y[5]  = {5,4,3,2,1};
   Double_t ex[5] = {1,1,1,1,1}; 
   Double_t ey[5] = {500,400,300,200,100};
   TGraphErrors *ge = new TGraphErrors(5,x,y,ex,ey); 
   for (int i=0; i<5; i++) {
      Double_t nex = ge->GetErrorX(i);
      Double_t ney = ge->GetErrorY(i)/100.;
      ge->SetPointError(i, nex, ney);
   }

Hi, cheers for trying that out.
A sudden thought (which may be silly, but could be a possibility?): could it be something to do with the fact that I’m passing a floating-point literal rather than a plain double to the function? The 1e9 seems to work fine in SetPoint (as well as in the TGraphErrors constructor, but going that route would mean redefining all of the style attributes stored in the original graph), so maybe SetPointError is rejecting it for some reason. I’ll give it a go later with “1000000000” instead of “1e9” and see if that works.
Thanks again,

Difficult to tell what the problem is, as you did not provide a small example reproducing it.

I’ve had another look at my code and I think I’ve worked it out. It seems to be something specific to that particular graph: I was assuming it to be a TGraphErrors when it may actually be a TGraphAsymmErrors instead! #-o Testing another graph in the same program with the function gave no problem at all.
Sorry about that, I should have properly checked which class the object belonged to. Hopefully that will sort things out when I have a chance to try it later.