Calculate the number of electrons per time step under space charge

Hello,
I hope you are doing well.
I was wondering if my code gives me the right information … I just want to compare the number of electrons in time with and without the space-charge effect.
Also, what is the purpose of the member function EnableMC()?
And if I want to simulate a 3D grid, what should I use?
Thank you for your time and consideration.

#include <TApplication.h>
#include <TCanvas.h>
#include <TH1F.h>
#include <TSystem.h>
#include <vector>
#include <fstream>
#include <iostream>
#include <numeric>
#include "Garfield/AvalancheGridSpaceCharge.hh"
#include "Garfield/AvalancheMicroscopic.hh"
#include "Garfield/ComponentParallelPlate.hh"
#include "Garfield/MediumMagboltz.hh"
#include "Garfield/Plotting.hh"
#include "Garfield/Sensor.hh"
#include "Garfield/TrackHeed.hh"
#include "Garfield/ViewSignal.hh"

using namespace Garfield;

#define LOG(x) std::cout << x << std::endl

int main(int argc, char *argv[]) {
  LOG("Start Single-Gap RPC Space Charge Example");

  TApplication app("app", &argc, argv);
  plottingEngine.SetDefaultStyle();

  double voltage = -25000.;
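  // Load a pre-computed Magboltz gas table for the RPC gas mixture.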
  MediumMagboltz gas;
  gas.EnableAutoEnergyLimit(false);
  gas.SetMaxElectronEnergy(100.);
  std::string path = "CO2_C2H2F4_isoC4H10_SF6_30_64.5_4.5_1_T_293_P_723.8.gas";
  gas.LoadGasFile(path);
  gas.Initialise(true);

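  // Single-gap RPC geometry: bakelite / gas / bakelite layers (thicknesses in cm).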
  double d_bakelite = 0.2;
  double d_gas = 0.2;
  double y_mid = 0.5 * (2 * d_bakelite + d_gas);
  std::vector<double> layers = {d_bakelite, d_gas, d_bakelite};
  std::vector<double> eps = {8.0, 1.0, 8.0};

  ComponentParallelPlate cmp;
  cmp.Setup(static_cast<int>(layers.size()), eps, layers, voltage, {});
  std::string label = "readout";
  cmp.AddPlane(label);
  cmp.SetMedium(&gas);

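  // Sensor with a readout electrode; 0-25 ns time window in 200 bins, area restricted to the gas gap.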
  Sensor sens(&cmp);
  sens.AddElectrode(&cmp, label);
  sens.SetTimeWindow(0, 25. / 200., 200);
  sens.SetArea(-10., y_mid - d_gas / 2., -10., 10., y_mid + d_gas / 2., 10.);

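  // Grid-based avalanche including the space-charge effect (2D grid spanning the gas gap).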
  AvalancheGridSpaceCharge avalsc(&sens);
  avalsc.EnableDiffusion(true);
  avalsc.EnableStickyAnode(true);
  avalsc.EnableAdaptiveTimeStepping(true);
  avalsc.SetStopAtK(true);
  avalsc.EnableSpaceChargeEffect(true);
  avalsc.EnableMC();
  avalsc.EnableTOF();

  avalsc.Set2dGrid(y_mid - d_gas / 2. + 1e-8, y_mid + d_gas / 2. - 1e-8, 3 * 400, 0.05, 100);

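  // Microscopic avalanche for the initial stage, before the electrons are handed over to the grid.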
  AvalancheMicroscopic avalmicro(&sens);
  avalmicro.SetTimeWindow(0., 25.);

  LOG("Muon(100GeV) Interaction Start");

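  // Primary ionisation: a 100 GeV/c muon crossing the gas gap from top to bottom.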
  TrackHeed track(&sens);
  track.SetParticle("muon");
  track.SetMomentum(1.e11);
  track.CrossInactiveMedia(true);
  track.NewTrack(0, y_mid + d_gas / 2. - 1e-6, 0, 0., 0., -1., 0.);

  std::vector<double> electronCounts(200, 0);
  double timeStep = 25. / 200.; // 0.125 ns per bin

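  // Avalanche each primary electron microscopically, transfer it to the grid, and histogram the endpoint times.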
  for (const auto &cluster : track.GetClusters()) {
    for (const auto &electron : cluster.electrons) {
      avalmicro.AvalancheElectron(electron.x, electron.y, electron.z, electron.t, 0.1, 0., 0., 0.);
      avalsc.AddElectrons(&avalmicro);

      size_t nEndpoints = avalmicro.GetNumberOfElectronEndpoints();
      LOG("Number of endpoints: " << nEndpoints);
      for (size_t i = 0; i < nEndpoints; ++i) {
        double x0, y0, z0, t0, e0, x1, y1, z1, t1, e1;
        int status;
        // Argument order: initial point (x0, y0, z0, t0, e0), end point (x1, y1, z1, t1, e1), status.
        avalmicro.GetElectronEndpoint(i, x0, y0, z0, t0, e0, x1, y1, z1, t1, e1, status);
        std::cout << "Electron " << i << ": t1 = " << t1 << " ns, status = " << status << std::endl;
        int bin = static_cast<int>(t1 / timeStep);
        if (bin >= 0 && bin < 200) {
          electronCounts[bin]++;
        }
      }
    }
  }

  LOG("Start Grid Calculation");
  avalsc.StartGridAvalanche();
  avalsc.ExportGrid("my_rpc_grid");
  sens.ExportSignal(label, "my_signal");

  LOG("Electrons per time step (from AvalancheMicroscopic):");
  for (int i = 0; i < 200; ++i) {
    double time = i * timeStep;
    std::cout << "Time [" << time << " - " << time + timeStep << "] ns: " << electronCounts[i] << " electrons" << std::endl;
  }

  std::ofstream outFile("electron_counts.txt");
  for (int i = 0; i < 200; ++i) {
    double time = i * timeStep;
    outFile << time << "\t" << electronCounts[i] << "\n";
  }
  outFile.close();

  TH1F *hElectrons = new TH1F("hElectrons", "Electrons per Time Step", 200, 0, 25);
  for (int i = 0; i < 200; ++i) {
    hElectrons->SetBinContent(i + 1, electronCounts[i]);
  }
  TCanvas *cElectrons = new TCanvas("cElectrons", "Electron Counts", 600, 600);
  hElectrons->SetXTitle("Time [ns]");
  hElectrons->SetYTitle("Number of Electrons");
  hElectrons->Draw();
  cElectrons->Update();
  gSystem->ProcessEvents();

  ViewSignal *signal_view = new ViewSignal();
  TCanvas *c_signal = new TCanvas(label.c_str(), label.c_str(), 600, 600);
  signal_view->SetCanvas(c_signal);
  signal_view->SetSensor(&sens);
  signal_view->PlotSignal(label);
  c_signal->SetTitle(label.c_str());
  gSystem->ProcessEvents();

  app.Run();
  return 0;
}

Hi,
I’m looping in @dastocco, who is the author of the example and of the AvalancheGridSpaceCharge class.

Hi

Unfortunately, only the 2D method is currently available for simulating avalanches with space charge. If EnableMC() is activated, the class uses the sampling method described in Lippmann’s PhD thesis; if it is disabled (with EnableMC(false)), only mean values are considered, i.e., no samples are generated.
Regarding your main question, you can use GetElectronEvolution() to retrieve the (summed) electron evolution in time. It is of type std::vector<std::pair<double, long>>, where the first entry of the pair is the time and the second entry is the total number of electrons active on the grid at that time.
You can then, for example, turn the space-charge effect on or off with EnableSpaceChargeEffect(true/false) and compare the electron evolution. In your approach you simulate the electrons with the AvalancheMicroscopic class for 25 nanoseconds (avalmicro.SetTimeWindow(0., 25.);) before exporting them to the grid. This time window should be reduced to less than one or two nanoseconds (only simulate a gain of around 100 with AvalancheMicroscopic). In this two-stage approach you then need to merge the electron evolution from the microscopic stage with the one from the grid stage to obtain the total time evolution.
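Something along these lines should work as a starting point (untested; it assumes GetElectronEvolution() is a member of AvalancheGridSpaceCharge, reuses the avalsc, electronCounts and timeStep names from your code, and uses a hypothetical hand-over time of 1 ns matching a shortened microscopic time window):

// Hand-over time between the microscopic and the grid stage
// (hypothetical value, matching e.g. avalmicro.SetTimeWindow(0., 1.)).
const double tHandover = 1.;  // ns

// Electron evolution on the grid: each pair is (time [ns], electrons on the grid).
const std::vector<std::pair<double, long>> evolution = avalsc.GetElectronEvolution();

std::ofstream evoFile("electron_evolution.txt");
// Microscopic stage: reuse the per-bin endpoint counts accumulated above.
for (int i = 0; i < 200; ++i) {
  const double t = i * timeStep;
  if (t >= tHandover) break;
  evoFile << t << "\t" << electronCounts[i] << "\n";
}
// Grid stage: evolution reported after StartGridAvalanche().
for (const auto& [t, n] : evolution) {
  if (t < tHandover) continue;
  evoFile << t << "\t" << n << "\n";
}
evoFile.close();

Running this once with EnableSpaceChargeEffect(true) and once with EnableSpaceChargeEffect(false) gives you two files that you can compare directly.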

I hope this helps,
Dario

Thank you, Dario, for your time and help.
Do you know of other approaches with which I could simulate a 3D Cartesian grid?
I have seen that the publication “Parallelization of Garfield++ and neBEM to simulate space-charge effects in RPCs” did that, but I found it difficult.
I would be thankful for any hints or suggestions.

There are ideas and implementations of the 3D case using grids (C. Lippmann’s PhD thesis, or the group around Ute Ebert at TU Eindhoven). Unfortunately, I don’t know where to find these codes nor how to use them … On the other hand, COMSOL Multiphysics also provides a module to simulate avalanches under space-charge influence; there should be some tutorials about that, but I have never used it so far.

Best
Dario