At {$job -1} we used local scratch and tmpwatch, with a wrapper script that excluded files and folders belonging to any user currently running a job on the node.
This way nothing got removed until the user's job had finished, even if the files hadn't been accessed for a while, and you don't have to predict how long a job could run for.

On Tue, 12 Jun 2018 at 22:21, Skylar Thompson <skylar.thomp...@gmail.com> wrote:
> On Tue, Jun 12, 2018 at 10:06:06AM +0200, John Hearns via Beowulf wrote:
> > What do most sites do for scratch space?
>
> We give users access to local disk space on nodes (spinning disk for older
> nodes, SSD for newer nodes), which (for the most part) GE will address with
> the $TMPDIR job environment variable. We have a "ssd" boolean complex that
> users can place in their job to request SSD nodes if they know they will
> benefit from them.
>
> We also have labs that use non-backed-up portions of their network storage
> (Isilon for the older storage, DDN/GPFS for the newer) for scratch space
> for processing of pipeline data, where different stages of the pipeline run
> on different nodes.
>
> --
> Skylar
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
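For anyone curious, a rough sketch of what such a tmpwatch wrapper might look like. This is a hypothetical reconstruction, not the actual script: the scheduler query is faked with a hardcoded user list (in practice you'd derive it from qstat or your scheduler's equivalent), and it does a dry run rather than actually invoking tmpwatch. tmpwatch's --exclude-user flag skips files owned by the named user.

```shell
#!/bin/sh
# Hypothetical wrapper sketch: build tmpwatch exclusions for every user
# with a job currently running on this node, so their scratch files are
# never reaped mid-job regardless of access time.

# In production this would come from the scheduler, e.g. (Grid Engine,
# an assumption): active_users=$(qstat -s r ... | awk '{print $4}' | sort -u)
# Faked here for illustration:
active_users="alice bob"

# One --exclude-user flag per active user.
excludes=""
for u in $active_users; do
    excludes="$excludes --exclude-user $u"
done

# Dry run: print the invocation instead of executing it.
# 168 hours = one week since last access on /scratch.
echo "tmpwatch --atime$excludes 168 /scratch"
```

Users without running jobs get the normal one-week access-time policy; everyone else is untouched until their job ends.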