Hi Kevin,

We fixed the issue on GitHub. Thanks!
Best,
Chris

—
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167

On 6/17/19, 8:56 AM, "slurm-users on behalf of Christopher Benjamin Coffey" <slurm-users-boun...@lists.schedmd.com on behalf of chris.cof...@nau.edu> wrote:

    Thanks Kevin, we'll put a fix in for that.

    Best,
    Chris

    —
    Christopher Coffey
    High-Performance Computing
    Northern Arizona University
    928-523-1167

    On 6/17/19, 12:04 AM, "Kevin Buckley" <kevin.buck...@pawsey.org.au> wrote:

        On 2019/05/09 23:37, Christopher Benjamin Coffey wrote:
        > Feel free to try it out and let us know how it works for you!
        >
        > https://github.com/nauhpc/job_archive

        So Chris, testing it out quickly, and dirtily, using an sbatch
        with a here document, viz:

            $ sbatch -p testq <<EOF
            #!/bin/sh
            module list
            EOF

        resulted in the following dirt (ooh-err!):

            ERROR:11 SLURM_JOB_NAME env not found - jobid: 12345 retrycnt: 7 elapse: 0.013476

        but, after I supplied a --job-name= argument, I did see the
        "two line script" output and the dot-env file, so maybe the
        code could be modified to handle such a use case, akin to the
        "extern" job step you see in the Slurm logs.

        I also note that the one follow-up email so far
        (lech.nier...@uni-koeln.de) made quite a few changes - are
        they in a GitHub clone/branch yet?

        Kevin
        --
        Supercomputing Systems Administrator
        Pawsey Supercomputing Centre
        Tel: +61 8 6436 8902
        SMS: +61 4 9970 3915
        Eml: kevin.buck...@pawsey.org.au
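
[For context, one way the reported ERROR:11 could be handled: a minimal C sketch of falling back to a default job name when SLURM_JOB_NAME is absent, as happens for scripts fed to sbatch on stdin. The function name `job_name_or_default` and the "sbatch" default are illustrative assumptions, not the actual job_archive code.]

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical fallback: jobs submitted via a here document may
     * have no SLURM_JOB_NAME in the captured environment, so return
     * a placeholder instead of failing with ERROR:11. The "sbatch"
     * default mirrors the name Slurm shows for stdin submissions
     * (an assumption here, not verified against job_archive). */
    static const char *job_name_or_default(void)
    {
        const char *name = getenv("SLURM_JOB_NAME");
        return (name && *name) ? name : "sbatch";
    }

    int main(void)
    {
        printf("archiving under job name: %s\n", job_name_or_default());
        return 0;
    }

With a fallback like this the archiver could still write the script and dot-env files for name-less submissions, rather than retrying and giving up.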