Hello,
We’re trying to use the distributed feature of RDataFrame (DistRDF) through the wrappers in our Python library, bamboo. One of our users keeps hitting the following error. Do you have any idea what might cause it?
Traceback (most recent call last):
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Backends/Dask/Backend.py", line 70, in get_total_cores_jobqueuecluster
    return sum(spec["options"]["cores"] for spec in workers_spec.values())
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Backends/Dask/Backend.py", line 70, in <genexpr>
    return sum(spec["options"]["cores"] for spec in workers_spec.values())
KeyError: 'cores'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/afs/cern.ch/work/a/.../HH_Analysis_v3/bamboovenv_v3/bin/bambooRun", line 8, in <module>
    sys.exit(main())
  File "/afs/cern.ch/work/a/.../HH_Analysis_v3/bamboovenv_v3/lib/python3.9/site-packages/bamboo/scripts/bambooRun.py", line 75, in main
    modInst.run()
  File "/afs/cern.ch/work/a/.../HH_Analysis_v3/bamboovenv_v3/lib/python3.9/site-packages/bamboo/analysismodules.py", line 313, in run
    run_notworker(self)
  File "/afs/cern.ch/work/a/.../HH_Analysis_v3/bamboovenv_v3/lib/python3.9/site-packages/bamboo/workflow.py", line 804, in run_notworker
    stats = backend.writeResults(
  File "/afs/cern.ch/work/a/.../HH_Analysis_v3/bamboovenv_v3/lib/python3.9/site-packages/bamboo/dataframebackend.py", line 940, in writeResults
    h.Write()
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Proxy.py", line 198, in _call_action_result
    return getattr(self.GetValue(), self._cur_attr)(*args, **kwargs)
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Proxy.py", line 190, in GetValue
    execute_graph(self.proxied_node)
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Proxy.py", line 57, in execute_graph
    node.get_head().execute_graph()
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/HeadNode.py", line 190, in execute_graph
    self.npartitions = self.backend.optimize_npartitions()
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Backends/Dask/Backend.py", line 112, in optimize_npartitions
    return get_total_cores(self.client)
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Backends/Dask/Backend.py", line 85, in get_total_cores
    return get_total_cores_jobqueuecluster(client.cluster)
  File "/cvmfs/sft.cern.ch/lcg/views/LCG_103/x86_64-centos7-gcc11-opt/lib/DistRDF/Backends/Dask/Backend.py", line 72, in get_total_cores_jobqueuecluster
    raise RuntimeError("Could not retrieve the provided worker specification from the Dask cluster object. "
RuntimeError: Could not retrieve the provided worker specification from the Dask cluster object. Please report this as a bug.
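For context, the traceback shows DistRDF's Dask backend summing the `cores` entry of each worker's `options` in the cluster's worker specification. Below is a minimal sketch reproducing just that check with hand-written dictionaries (hypothetical specs, not the real dask-jobqueue objects or the actual DistRDF source):

```python
# Sketch of the check seen in the traceback: DistRDF sums the 'cores' option
# of every worker in the Dask cluster's worker spec. The spec dicts here are
# illustrative stand-ins for what dask-jobqueue would normally provide.

def get_total_cores(workers_spec: dict) -> int:
    # Mirrors the genexpr from Backend.py line 70 in the traceback.
    return sum(spec["options"]["cores"] for spec in workers_spec.values())

# A spec with 'cores' present under 'options' sums cleanly.
ok_spec = {
    "worker-0": {"options": {"cores": 4}},
    "worker-1": {"options": {"cores": 4}},
}
print(get_total_cores(ok_spec))  # 8

# A spec missing 'cores' reproduces the KeyError from the report.
bad_spec = {"worker-0": {"options": {"memory": "2GiB"}}}
try:
    get_total_cores(bad_spec)
except KeyError as exc:
    print(f"KeyError: {exc}")  # KeyError: 'cores'
```

So the question is presumably why this user's cluster object exposes a worker spec without `cores` under `options`, while it works for others.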