Deexcitation product error while simulating avalanches & avalanche simulation speed-up?

Hello,
I have 2 questions regarding avalanche simulations.

For context, I am attempting to simulate an incident electron/proton in the MeV range passing through a triple-GEM detector, together with the resulting delta electrons and avalanches. The detector is the Ansys component “triplegem” from the Ansys123 example, with the foil spacing slightly edited. To speed up the simulation, I am using OpenMP to parallelize over avalanches.

  1. I frequently get this message while the simulation is running:
    AvalancheMicroscopic::TransportElectron: Cannot retrieve deexcitation product 0/1.
    Is this an error to worry about?
    I suspect it is a parallelization problem, as I don’t believe I ever saw this message when simulating without OpenMP.

  2. Is there a way to speed up the avalanche simulation?
    I am already using parallelization, so that each instance of AvalancheMicroscopic runs on its own CPU core, but the computation of the avalanche itself is what takes so long. To get some tangible data promptly, I limited the size of each avalanche to 1000 electrons; ideally, I would like to obtain results with the size limit disabled.

Thanks for any help and/or clarifications you can provide!

Hello,

@hschindl.

Cheers,
D

Hi,

  1. Yes, this looks like a concurrency problem. How did you implement the parallelisation?
  2. Good question… For large avalanches (and large field maps), a microscopic simulation can indeed take a very long time (to the point that it becomes impractical). There are a few ideas for transitioning to a more macroscopic simulation once the avalanche has reached a certain size, but I don’t have a magic solution at the moment. If you have access to a GPU, you could try the GPU version of AvalancheMicroscopic:
    Examples/GemGPU · master · garfield / garfieldpp · GitLab
    printf("Simulating Avalanches...\n");
    int index = electrons.size();
    #pragma omp parallel for schedule(dynamic)
    for (int k = 0; k < index; k++) {

        AvalancheMicroscopic aval;
        aval.SetSensor(&sensor);
        aval.EnableSignalCalculation();
        // Note: driftView is shared between threads; plotting from inside
        // a parallel loop may not be thread-safe.
        aval.EnablePlotting(&driftView);
        // aval.DisableAvalancheSizeLimit();
        aval.EnableAvalancheSizeLimit(1000);
        printf("Avalanche Electron %i \n", (k + 1));
        aval.AvalancheElectron(electrons[k].x, electrons[k].y, electrons[k].z, electrons[k].t, 0.1);
        int np = aval.GetNumberOfElectronEndpoints();
        printf("Endpoints = %i\n", np);

        DriftLineRKF drift;
        // AvalancheMC drift;
        drift.SetSensor(&sensor);
        // drift.SetMaximumStepSize(2.e-4);
        // drift.SetDistanceSteps(2.e-4);
        drift.EnableIonTail();
        drift.EnableSignalCalculation();
        drift.EnablePlotting(&driftView);  

        double xe1, ye1, ze1, te1, e1;
        double xe2, ye2, ze2, te2, e2;
        int status;

        for (int j = 0; j < np; ++j) {
            aval.GetElectronEndpoint(j, xe1, ye1, ze1, te1, e1,
                                        xe2, ye2, ze2, te2, e2, status);
            drift.DriftIon(xe1, ye1, ze1, te1);
            // Shared counters must be updated atomically inside the parallel loop.
            #pragma omp atomic
            nElectrons++;
            // printf("Electron %i \n", nElectrons);
            if (ze2 > 0.38) {
                #pragma omp atomic
                n_e++;
                // ROOT histograms are not thread-safe by default; serialize the fill.
                #pragma omp critical
                histo->Fill(e2);
            }
        }
    }   
    printf("Particle Simulation Complete!\n"); 

The instances of AvalancheMicroscopic and DriftLineRKF are initialized within the parallelized for-loop.

And thank you for the GPU suggestion, I will take a look at the provided example and come back with my results after implementation.

Thanks! I think I know where the concurrency issue is coming from, will try to implement a fix.

Hi,
I’ve just merged some changes to the code to address the parallelisation issue you reported:

Let me know if it fixes the problem.