Using MPI with Cling?

Is it possible to use MPI with Cling?

I can use MPI with one rank by linking Cling to Open MPI at startup:

cling `mpic++ -showme:link`

MPI_Init(), etc., seem to work, but of course there’s only one process. It’s not apparent to me how you might start a Cling session backed by multiple MPI ranks. Is there a supported way to do this, or does anyone have tips on what I might try?
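For reference, here’s roughly what the single-rank session looks like (a sketch; this also assumes the MPI headers are on cling’s include path, e.g. via the flags from mpic++ -showme:compile):

$ cling `mpic++ -showme:link`
[cling]$ #include <mpi.h>
[cling]$ MPI_Init(nullptr, nullptr);
[cling]$ int rank = -1;
[cling]$ MPI_Comm_rank(MPI_COMM_WORLD, &rank);
[cling]$ rank
(int) 0
[cling]$ MPI_Finalize();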

You make it sound like you want to run MPI from cling. Is that the case? If so, it’s not going to work: MPI jobs need to be started by a batch scheduler, which allocates nodes, assigns processes and ranks, sets up connections, etc., etc… Of course, most schedulers come with an API, which you could use from cling, but that wouldn’t set up an interactive MPI job: after starting the run, your cling session would just wait until completion.

Now if you want to run cling from MPI, my only recommendation would be to stuff MPI_Init into cling’s main, rather than executing it in a cling script; the rest should all work.

If instead of cling you want to run ROOT from MPI, then there is Omar Zapata’s work:
https://indico.cern.ch/event/607822
https://github.com/oprojects/root/tree/master-rmpi

Yes, I’d basically like to invoke the Cling interpreter with MPI, just locally in shared memory. The goal would be to have my STDIN go to the Cling interpreter on each MPI rank, and to be able to see the STDOUT of (or even files written by) the MPI ranks.

When you say stuff MPI_Init() into Cling’s main, do you literally mean going into the Cling source code and modifying main()?

For the former: even locally, there is only ever one controlling terminal and one process associated with it. You’d have to do the transport of stdin/stdout yourself (you can hand off the terminal within a session of processes, but there would still be only one receiving process, not all of them); see the sketch below. You’d also have to deal with the problem that, without a scheduler, every MPI process you launch will think it’s rank 0.
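If you want to experiment with that transport anyway, here is a minimal sketch of the stdin fan-out, assuming you launch a wrapper like this with mpirun and then hand each received line to a per-rank interpreter instance (the wrapper is my own illustration, not existing Cling code):

#include <mpi.h>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::string line;
  while (true) {
    // Only rank 0 owns the terminal: it reads a line and broadcasts it.
    int len = -1;  // -1 signals EOF to the other ranks
    if (rank == 0 && std::getline(std::cin, line))
      len = static_cast<int>(line.size());
    MPI_Bcast(&len, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (len < 0) break;
    line.resize(len);
    MPI_Bcast(&line[0], len, MPI_CHAR, 0, MPI_COMM_WORLD);
    // ... feed `line` to this rank's interpreter instance here ...
  }

  MPI_Finalize();
  return 0;
}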

For the latter: yes. No MPI behavior before MPI_Init() is guaranteed, and Cling has a lot of setup and initialization to do before it gets to user code, so my recommendation is to edit main() and stuff MPI_Init() in there, right at the beginning. Of course, again, that is for when launching cling from MPI.
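Concretely, the idea would look something like this (a sketch, not a literal patch; if I remember correctly, the driver lives in tools/driver/cling.cpp in the cling sources):

#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);  // before any of Cling's own setup runs
  // ... existing body of cling's main() goes here, unchanged ...
  int ret = 0;             // whatever the original main() returns
  MPI_Finalize();          // after the interpreter has shut down
  return ret;
}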

Is there a particular reason you require the use of MPI? From what you describe, and given that you want to stay local, simply starting multiple processes through popen() is far simpler and can be done from cling, no problem.
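For example, something along these lines can be typed straight into a cling session (./worker is just a placeholder for whatever demo binary you want to run):

#include <cstdio>
#include <iostream>
#include <string>
#include <vector>

// Launch four local worker processes and echo their output.
std::vector<FILE*> workers;
for (int i = 0; i < 4; ++i) {
  std::string cmd = "./worker " + std::to_string(i);  // placeholder command
  if (FILE* p = popen(cmd.c_str(), "r")) workers.push_back(p);
}
char buf[256];
for (FILE* w : workers) {
  while (std::fgets(buf, sizeof buf, w)) std::cout << buf;
  pclose(w);
}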

I’m primarily interested in using MPI in Cling for giving demos. I’m working on a high-level PGAS library with an MPI backend, and it would be super cool to demo how it works in a Jupyter notebook.
