I have pseudocode that looks like this:

```
double fitFunc(...) {
    std::vector<double> x, y, x_err, y_err;
    // fill vectors
    if (x.size() <= 1) return 0;

    TGraphErrors gr(x.size(), x.data(), y.data(), x_err.data(), y_err.data());
    gr.Fit("pol1", "Q");
    TF1 fit = *gr.GetFunction("pol1");

    // Adding this part makes the process get `killed` after ~100k events
    for (int j = 0; j < n_refits; ++j) {
        int idx = /* pick some bad idx to throw away */;
        x.erase(x.begin() + idx);
        // ... same for y, x_err, y_err
        if (x.size() <= 1) return 0;
        gr = TGraphErrors(x.size(), x.data(), y.data(), x_err.data(), y_err.data());
        gr.Fit("pol1", "Q");
        fit = *gr.GetFunction("pol1");
    }
    return fit.GetParameter(0);
}
```
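For what it's worth, here is a ROOT-free sketch of the same discard-and-refit logic, so the erase loop itself can be checked in isolation. I replaced `TGraphErrors::Fit("pol1")` with a hand-rolled unweighted least-squares line, and picked the "bad idx" as the point with the largest residual; both of those are my assumptions, not exactly what the real code does:

```cpp
#include <cmath>
#include <cstddef>
#include <tuple>
#include <utility>
#include <vector>

// Ordinary least-squares line fit y = p0 + p1*x (unweighted; the errors
// from the real code are ignored here for simplicity). Returns {p0, p1}.
static std::pair<double, double> fitLine(const std::vector<double>& x,
                                         const std::vector<double>& y) {
    const std::size_t n = x.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    const double denom = n * sxx - sx * sx;
    const double p1 = (n * sxy - sx * sy) / denom;
    const double p0 = (sy - p1 * sx) / n;
    return {p0, p1};
}

// Same structure as the refit loop above: drop one point per iteration
// (here: the one with the largest absolute residual), then refit.
double refitDropWorst(std::vector<double> x, std::vector<double> y,
                      int nRefits) {
    if (x.size() <= 1) return 0;
    double p0, p1;
    std::tie(p0, p1) = fitLine(x, y);
    for (int j = 0; j < nRefits; ++j) {
        // Pick the "bad idx": point farthest from the current fit.
        std::size_t idx = 0;
        double worst = -1;
        for (std::size_t i = 0; i < x.size(); ++i) {
            const double r = std::abs(y[i] - (p0 + p1 * x[i]));
            if (r > worst) { worst = r; idx = i; }
        }
        x.erase(x.begin() + idx);
        y.erase(y.begin() + idx);
        if (x.size() <= 1) return 0;
        std::tie(p0, p1) = fitLine(x, y);
    }
    return p0;  // intercept, like fit.GetParameter(0)
}
```

This version only touches the two `std::vector`s, so if the real code misbehaves where this one doesn't, the difference would be on the ROOT side (the repeated `TGraphErrors` construction / `Fit` calls).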

The function is called for each event with RDataFrame.

The process gets `killed` after ~100k events, which does not happen with `n_refits = 0`. Because of that, I suspect the refit part of the code is to blame…

I am not experienced with memory leaks, and this is probably a basic C++ question.

Is there an obvious mistake I am missing? Or how could I improve this part of the code?

cheers

*ROOT Version:* master

*Platform:* CentOS 7

*Compiler:* gcc 10