PyROOT and batch jobs

How should I run PyROOT in batch jobs (executing on the batch node) so that the load on the (AFS) disk is kept to a minimum?
How should I optimise the commands
ROOT.gSystem.Load(project + '/libLibrary.so')
ROOT.gROOT.LoadMacro(project + '/src/analysis.cc+')
where 'project' refers to, e.g., a CMSSW development area?

Specifically, is it inefficient to have each batch job access libLibrary.so in the project area?

Should I expect each batch job to recompile the source code 'analysis.cc'?
Are there any other problems that could arise from running outside the development area, e.g. header files not being found?

Any guidelines or hints would be appreciated.

In principle it is OK if several batch jobs access the library (in read mode) on AFS. AFS is not the fastest file system, but it is adequate for this: many jobs from the experiments run using libraries from AFS.
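
For illustration, here is a minimal sketch of the read-only load inside a batch job; the project path below is a hypothetical placeholder:

import ROOT

# Hypothetical project area on AFS; substitute your own path.
project = '/afs/cern.ch/user/y/you/CMSSW_X_Y_Z/MyAnalysis'

# TSystem::Load returns 0 on success, 1 if the library was already
# loaded, and a negative value on failure. Loading is a read-only
# AFS access, so many concurrent jobs can do it safely.
if ROOT.gSystem.Load(project + '/libLibrary.so') < 0:
    raise RuntimeError('failed to load libLibrary.so from ' + project)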

The library will be built only once, with a recorded dependency on analysis.cc. If the source code changes, the library will be rebuilt. A problem may occur if two batch jobs build the library at exactly the same time; try to avoid this.
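
One way to avoid that race, sketched below with the same hypothetical path, is to trigger the ACLiC build once at submit time, so that the batch jobs only ever find an up-to-date library:

import ROOT

# Hypothetical project area; substitute your own path.
project = '/afs/cern.ch/user/y/you/CMSSW_X_Y_Z/MyAnalysis'

# Run once before submitting: the trailing '+' asks ACLiC to compile
# analysis.cc into a shared library (analysis_cc.so, placed next to
# the source) if it is missing or older than the source.
ROOT.gROOT.LoadMacro(project + '/src/analysis.cc+')

Each batch job can then load the prebuilt analysis_cc.so with gSystem.Load instead of calling LoadMacro with '+', so no compilation is attempted on the worker nodes.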