Dear ROOT experts,
I have a question regarding creating distributed RDFs with the Spark backend.
I’ve been exploring ROOT’s RDataFrame capabilities, and I’m intrigued by the potential of the Spark backend for distributed computing. I’ve noticed that it’s possible to create an RDF from samples and their metadata using the ROOT::RDF::Experimental::FromSpec() function. However, I’m wondering whether similar functionality is available when working with the Spark backend.
Specifically, I’d like to know if there’s a way to create distributed RDFs with the Spark backend while incorporating the samples’ file names and metadata from a JSON file. This would greatly streamline my workflow and let me analyze large datasets efficiently.
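For context, this is roughly the kind of spec file I currently pass to FromSpec() in a local (non-distributed) analysis, e.g. via `ROOT.RDF.Experimental.FromSpec("spec.json")` in PyROOT. Sample names, tree names, file paths, and metadata keys below are just placeholders from my own setup, not anything prescribed by ROOT:

```json
{
    "samples": {
        "signal": {
            "trees": ["Events"],
            "files": ["signal_part1.root", "signal_part2.root"],
            "metadata": {"process": "signal", "xsec": 0.5}
        },
        "background": {
            "trees": ["Events"],
            "files": ["background_part1.root"],
            "metadata": {"process": "background", "xsec": 2.0}
        }
    }
}
```

Ideally I’d like to feed this same specification (file lists plus per-sample metadata) to the Spark-backed distributed RDataFrame, rather than passing tree and file names manually to its constructor.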
Any insights, guidance, or examples on how to achieve this would be greatly appreciated. Apologies if this question has been addressed previously; I’ve tried searching the forums but couldn’t find a definitive answer.
Thank you in advance for your assistance!
ROOT Version: 6.30.2
Platform: AlmaLinux 9