In /etc/lsst/prolog.sh, export LSST_RUN_TEMP_SPACE="/tmp/lsst/sandbox" will not work for Rubin jobs. For Rubin jobs, LSST_RUN_TEMP_SPACE must be a space shared by jobs running on different worker nodes. Unlike ATLAS jobs, where every single job is independent, Rubin jobs are not independent: the first Rubin job in a workflow creates a directory in LSST_RUN_TEMP_SPACE and writes some information there, other jobs then read and write information in LSST_RUN_TEMP_SPACE, and at the end the final job reads all the information in the workflow directory in LSST_RUN_TEMP_SPACE.
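For illustration, a minimal sketch of what /etc/lsst/prolog.sh could look like when pointing at shared storage instead of node-local /tmp. The mount point /cephfs/lsst/run_temp is a hypothetical placeholder, not the path used at any particular site:

```sh
# /etc/lsst/prolog.sh -- sketch only; the mount point below is a
# hypothetical example, not a real site path.
#
# LSST_RUN_TEMP_SPACE must live on storage reachable from every worker
# node, because the jobs of one workflow share files through it.
# A node-local path such as /tmp/lsst/sandbox therefore cannot work.
export LSST_RUN_TEMP_SPACE="/cephfs/lsst/run_temp"

# Make sure the directory exists so the first job of a workflow
# can write its bookkeeping files there.
mkdir -p "${LSST_RUN_TEMP_SPACE}"
```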

 

SLAC has a shared POSIX file system mounted on all nodes.
Rubin can support different storage backends, and S3 is supported, so the Echo S3 storage could be used.
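As a rough illustration of sharing the small workflow bookkeeping files through S3 rather than a shared POSIX mount, a sketch using the AWS CLI. The endpoint URL and bucket name are hypothetical placeholders, not the real Echo configuration, and this is not the actual PanDA/Rubin mechanism:

```sh
# Sketch only: share small workflow files via an S3 bucket.
ENDPOINT="https://s3.echo.example.ac.uk"   # hypothetical S3 endpoint
BUCKET="s3://lsst-run-temp-space"          # hypothetical bucket

# A job uploads its bookkeeping file for the workflow...
aws s3 cp ./workflow_1234/job_output.yaml \
    "${BUCKET}/workflow_1234/job_output.yaml" --endpoint-url "${ENDPOINT}"

# ...and the final job later pulls everything back to do its registration work.
aws s3 sync "${BUCKET}/workflow_1234/" ./workflow_1234/ --endpoint-url "${ENDPOINT}"
```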

 

Fabio Hernandez  1:50 PM
In our site, our Slurm compute nodes have some local storage capacity. We use that capacity for jobs to use as working storage. Once the job finishes, the data in that area is deleted.

There is another area that is needed by PanDA, and I think that is what you refer to. That area stores some information needed by a campaign to update the Butler registry database with the data produced and stored by the jobs in the Butler data store. The data stored in that area is relatively small but indeed needs to be available to the so-called "Final Job" to do its data registration work. In our case, that area resides in CephFS and is mounted by all the Slurm compute nodes.

It seems to me that this is described in some document. Let me try to find out where.
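A quick sanity check along these lines, as a sketch assuming a Slurm cluster with LSST_RUN_TEMP_SPACE already exported; the partition name is a placeholder:

```sh
# Sketch: verify the area is really shared across compute nodes,
# not node-local. The partition name is hypothetical.

# Write a marker file from one compute node...
srun --partition=compute --nodes=1 \
    bash -c 'hostname > "${LSST_RUN_TEMP_SPACE}/mount_check"'

# ...then check that other nodes can see it (each node should print
# the writer's hostname; "MISSING" means the path is node-local).
srun --partition=compute --nodes=2 \
    bash -c 'echo "$(hostname): $(cat "${LSST_RUN_TEMP_SPACE}/mount_check" 2>/dev/null || echo MISSING)"'
```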
 
Fabio Hernandez
I think this is the document: https://panda.lsst.io/admin/site_environments.html
The relevant piece is the area named LSST_RUN_TEMP_SPACE.