inline below...
On Sat, Apr 3, 2021 at 4:50 PM Will Dennis wrote:
> Sorry, obvs wasn’t ready to send that last message yet…
>
> Our issue is the shared storage is via NFS, and the “fast storage in
> limited supply” is only local on each node. Hence the need to copy it over
> from NFS (and then remove it when finished with it.)
>
> I also wanted the copy & remove to be different jobs…
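A minimal sketch of that "copy and remove as separate jobs" idea, chaining a stage-in job, a compute job, and a cleanup job with sbatch dependencies; the paths (/nfs/data/myset, /scratch/myset), the node name (node001), and process.sh are placeholders, not anything from the thread:

#!/usr/bin/env python3
"""Sketch: stage-in, compute, and cleanup as three chained Slurm jobs.

All names here are assumptions: /nfs/data/myset is the NFS source,
/scratch/myset the node-local copy, node001 the node the jobs are pinned to,
and process.sh a placeholder for the real compute step.
"""
import subprocess

NODE = "node001"          # hypothetical node; all three jobs must share it
SRC = "/nfs/data/myset"   # hypothetical NFS path
DST = "/scratch/myset"    # hypothetical local scratch path


def sbatch(*args):
    """Run `sbatch --parsable <args>` and return the new job ID."""
    out = subprocess.run(["sbatch", "--parsable", *args],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip().split(";")[0]   # --parsable prints "jobid[;cluster]"


# 1) Stage-in: copy the dataset from NFS to local scratch on NODE.
stage = sbatch(f"--nodelist={NODE}", f"--wrap=cp -a {SRC} {DST}")

# 2) Compute: only starts if staging succeeded, and on the same node.
compute = sbatch(f"--nodelist={NODE}", f"--dependency=afterok:{stage}",
                 f"--wrap=./process.sh {DST}")

# 3) Cleanup: runs whether the compute job succeeded or not (afterany).
cleanup = sbatch(f"--nodelist={NODE}", f"--dependency=afterany:{compute}",
                 f"--wrap=rm -rf {DST}")

print(f"stage={stage} compute={compute} cleanup={cleanup}")

The afterany dependency means the cleanup runs even if the compute step fails, and pinning all three jobs to one node is what makes them see the same local disk.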
Hi,
"scratch space" is generally considered ephemeral storage that only exists
for the duration of the job (it's eligible for deletion in an epilog or
next-job prolog).
If you've got other fast storage in limited supply that can be used for data
that can be staged, then by all means use it, but consider whether you want
batch CPU cores tied up with the wall time of transferring the data. This
could easily be done on a time-shared frontend login node from which…
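As a rough illustration of that epilog/prolog scrubbing, a sketch that assumes per-job scratch directories laid out as /scratch/<SLURM_JOB_ID> (the layout is an assumption, not something from the thread); Slurm sets SLURM_JOB_ID in the epilog environment, so the script only rebuilds that path and removes it:

#!/usr/bin/env python3
"""Sketch of an epilog-style scrubber for per-job local scratch.

Assumes a /scratch/<SLURM_JOB_ID> layout, which is an illustration only.
"""
import os
import shutil

SCRATCH_ROOT = "/scratch"   # hypothetical scratch root

job_id = os.environ.get("SLURM_JOB_ID")
if job_id:
    job_dir = os.path.join(SCRATCH_ROOT, job_id)
    if os.path.isdir(job_dir):          # only ever touch the per-job directory
        shutil.rmtree(job_dir, ignore_errors=True)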
What I mean by “scratch” space is indeed local persistent storage in our case;
sorry if my use of “scratch space” is already a generally-known Slurm concept I
don’t understand, or something like /tmp… That’s why my desired workflow is to
“copy data locally / use data from copy / remove local copy”.
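Done inside a single job, that "copy data locally / use data from copy / remove local copy" flow could look roughly like this sketch, with placeholder paths and a placeholder process.sh:

# Sketch of the copy / use / remove workflow inside one job body.
import shutil
import subprocess

SRC = "/nfs/data/myset"    # hypothetical NFS source
LOCAL = "/scratch/myset"   # hypothetical node-local copy

shutil.copytree(SRC, LOCAL)                            # copy data locally
try:
    subprocess.run(["./process.sh", LOCAL], check=True)  # use data from the copy
finally:
    shutil.rmtree(LOCAL, ignore_errors=True)           # remove local copy, success or not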
Unfortunately this is not a good workflow.
You would submit a staging job and have the compute job depend on it;
however, in the meantime the scheduler might launch higher-priority jobs
that want the scratch space and cause it to be scrubbed.
In a rational process, the scratch space would…
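One small mitigation, sketched here with a placeholder path, is to have the compute job verify at startup that the staged copy is still present and fail fast if it has been scrubbed:

# Sketch: at compute-job start, fail fast if the staged copy is gone.
import os
import sys

STAGED = "/scratch/myset"   # hypothetical staged dataset location

if not os.path.isdir(STAGED):
    # Non-zero exit makes the loss visible in accounting and stops any
    # downstream afterok dependencies from running against missing data.
    sys.exit(f"staged data missing from {STAGED}; was it scrubbed?")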
Hi all,
We have various NFS servers that contain the data that our researchers want to
process. These are mounted on our Slurm clusters on well-known paths. Also, the
nodes have local fast scratch disks on another well-known path. We do not have
any distributed file systems in use (Our Slurm clusters…