Trouble with TVirtualFFT (Solved)

EDIT (1Dec10): I found through testing that my FFT function is working properly for identically sized histograms. The problem is in my data, my background, or both.

Hello,

I’m using three FFTs (two forward real-to-complex, one inverse complex-to-real) in an attempt to reduce the effect of a background on some data. The trouble is, the resulting histogram looks as though the FFTs I used are out of phase.
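
For the record, the frequency-space division I implement below is the usual complex quotient: with the data point D = a+bi and the background point B = c+di,

    D/B = ((a*c + b*d) + (b*c - a*d)i) / (c^2 + d^2)

which is what the sReal/sImag lines compute.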

void BackgroundReducer::SetBackground(TH1* bckgnd) {
    //if(background) delete background;
    background = (TH1*)bckgnd->Clone("bck");
    BDim = background->GetNbinsX();
    // Normalize the background to unit integral
    Double_t bIntegral = background->Integral();
    background->Scale(1/bIntegral);
    //if(bR2C) delete bR2C;
    // Forward real-to-complex transform of the background
    bR2C = TVirtualFFT::FFT(1, &BDim, "R2C EX K");
    for (Int_t n = 1; n <= BDim; ++n) {
        bR2C->SetPoint(n-1, background->GetBinContent(n)); // bins are 1-based, FFT points 0-based
    }
    bR2C->Transform();
}

TH1* BackgroundReducer::ReduceBackground(TH1* data) {
    if (!background) {
        printf("No background data loaded\n");
        return 0;
    }
    // Normalize the data to unit integral (the scale is restored at the end)
    Double_t dIntegral = data->Integral();
    data->Scale(1/dIntegral);
    if (!dimset) {
        DDim = data->GetNbinsX();
        dR2C = TVirtualFFT::FFT(1, &DDim, "R2C EX K"); // forward transform of the data
        C2R  = TVirtualFFT::FFT(1, &DDim, "C2R EX K"); // inverse transform for the result
        dimset = true;
    }
    for (Int_t n = 1; n <= DDim; ++n) {
        dR2C->SetPoint(n-1, data->GetBinContent(n));
    }
    dR2C->Transform();
    Double_t dReal, dImag, bReal, bImag, sReal, sImag;
    Int_t bgVal;
    for (Int_t n = 0; n < DDim; ++n) {
        dR2C->GetPointComplex(n, dReal, dImag);
        // Look up the background bin covering the same x as this data bin
        bgVal = background->FindBin(data->GetBinCenter(n+1));
        bR2C->GetPointComplex(bgVal-1, bReal, bImag);
        // Complex division S = D/B in frequency space
        sReal = (dReal*bReal + dImag*bImag)/(bReal*bReal + bImag*bImag);
        sImag = (bReal*dImag - dReal*bImag)/(bReal*bReal + bImag*bImag);
        C2R->SetPoint(n, sReal, sImag);
    }
    C2R->Transform();
    TString rName = data->GetName();
    rName += "_signal";
    //if(rthist) delete rthist;
    rthist = (TH1*)data->Clone(rName);
    rthist->Reset();
    for (Int_t n = 0; n < DDim; ++n) {
        C2R->GetPointComplex(n, sReal, sImag);
        rthist->SetBinContent(n+1, sReal); // keep the real part only
    }
    //TH1::TransformHisto(C2R,rthist,"RE");
    rthist->SetEntries(data->GetEntries());
    // Restore the original normalization
    Double_t sIntegral = rthist->Integral();
    rthist->Scale(dIntegral/sIntegral);
    return rthist;
}
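
In case it helps, this is roughly how I drive the class (hBckgnd and hData are placeholder names for the real histograms):

    BackgroundReducer reducer;
    reducer.SetBackground(hBckgnd);                 // normalize and transform the background once
    TH1* hSignal = reducer.ReduceBackground(hData); // returns the deconvolved estimate
    hSignal->Draw();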

I set up the code to allow background histograms with a larger range than the data histograms, purely to keep it general. The first data and background histograms I tested both had the same number of bins and the same minimum and maximum, so background->FindBin(data->GetBinCenter(n+1)) should simply return n+1. Consistent with that, I got the same result when I replaced the line “bR2C->GetPointComplex(bgVal-1,bReal,bImag);” with “bR2C->GetPointComplex(n,bReal,bImag);”.
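
A quick sanity check along these lines should confirm that bin matching (a sketch, not part of the class):

    // With identical binning, FindBin(GetBinCenter(n+1)) should be the identity map.
    for (Int_t n = 0; n < data->GetNbinsX(); ++n) {
        Int_t b = background->FindBin(data->GetBinCenter(n+1));
        if (b != n+1) printf("bin mismatch at n=%d: got %d\n", n, b);
    }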

The next thing I tried was a background histogram with the same bin width (4) but an xmin of -2000 instead of -100. The resulting histogram looks much closer to what I expected, but it is translated much too far to the right. (I’m guessing the translation comes from FFTW using the array positions as x, which my method doesn’t really account for.)
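
If that guess is right, the discrete shift theorem would explain it: shifting a sequence by m bins multiplies its transform by exp(-2*pi*i*k*m/N), so dividing by a shifted background tacks the opposite phase onto every frequency and circularly shifts the inverse transform. Something like the following, inside the division loop, might compensate (the shift computation and the sign convention are my assumptions, untested):

    // Hypothetical phase correction; 'shift' assumes both axes share the same bin
    // width, and the sign may need flipping depending on the direction of the offset.
    Int_t shift = (Int_t)((data->GetXaxis()->GetXmin() - background->GetXaxis()->GetXmin())
                          / data->GetBinWidth(1));
    Double_t phi = -2.0*TMath::Pi()*n*shift/DDim;
    Double_t cReal = sReal*TMath::Cos(phi) - sImag*TMath::Sin(phi);
    Double_t cImag = sReal*TMath::Sin(phi) + sImag*TMath::Cos(phi);
    C2R->SetPoint(n, cReal, cImag); // instead of C2R->SetPoint(n, sReal, sImag)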

Any ideas on what I’m doing wrong here?

Thanks,
Stephen