I don’t think you can turn them into command-line arguments to your script, since Bash treats the #SBATCH lines as comments and can’t see “into” them. You can, though, as I’m sure you know, override any #SBATCH option by putting it on the sbatch command line before the script to be run. For this to work at all, Slurm (probably sbatch itself) would have to find such command-line arguments and expand them into the #SBATCH directives in, say, its copy of your script before running it, and even before parsing those directives. That’s technically feasible, unless the command-line arguments to your sbatch scripts are generated outside those scripts when sbatch is invoked. If that’s what you’re doing, it’s probably better to just override the option on the sbatch command line:
foo.sh:

#!/bin/bash
…
#SBATCH -n 50
…

for i in 50 100 1000; do
    sbatch -n $i ./foo.sh
done

Is that what you were thinking? Doing this instead:

bar.sh:

#!/bin/bash
…
#SBATCH -n $1
…

for i in 50 100 1000; do
    sbatch bar.sh $i
done

would be more challenging for Slurm to handle, since it would have to understand and parse the command-line arguments of the command handed to it to be run as the job. That parsing is going to depend on the shebang line (as to what’s being invoked): bash? csh? python? perl? /usr/bin/env X? So, I’d be surprised if there was a mode for this. Also, would you expect Slurm to delete any options it used from your command line, or leave them?

Best,
Bill.

--
Bill Barth, Ph.D., Director, HPC
bba...@tacc.utexas.edu | Phone: (512) 232-7069
Office: ROC 1.435 | Fax: (512) 475-9445

On 3/18/18, 12:44 PM, "slurm-users on behalf of Jessie Poquérusse" <slurm-users-boun...@lists.schedmd.com on behalf of jessie.poqueru...@gmail.com> wrote:

    Hello,

    In trying to modularize and genericize my Bash scripts as much as possible, I was wondering if there was a way to turn #SBATCH options (mainly walltime, mem, and output and error file directories) into externally defined script parameters (e.g. in the same command line as during sbatch job submission). Thank you!
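For the specific options in the original question (walltime, memory, and the output/error file locations), a minimal sketch of the override-on-the-command-line approach might look like the following. The script name generic.sh, the resource values, the logs/ directory, and the srun hostname job step are placeholders for illustration, not anything from this thread:

generic.sh:

#!/bin/bash
# Defaults; any of these can be overridden at submission time.
#SBATCH --time=01:00:00
#SBATCH --mem=4G
#SBATCH --output=slurm-%j.out
#SBATCH --error=slurm-%j.err

srun hostname

# Options given to sbatch itself take precedence over the #SBATCH
# directives inside the script:
for t in 01:00:00 04:00:00 12:00:00; do
    sbatch --time=$t --mem=16G \
           --output=logs/run-%j.out --error=logs/run-%j.err \
           ./generic.sh
done

Here the #SBATCH lines act only as defaults; whatever is passed on the sbatch command line wins, so the script itself never needs to be edited from one run to the next.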