On 20/5/19 2:11 pm, Brian Andrus wrote:
> I know the argument passed to ResumeProgram is the node to be started,
> but is there any way to access job info from within that script?
I've no idea, but you could try dumping the environment with env (or
setenv if you're using csh) from the script that Slurm runs and see what
turns up.
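As a rough sketch (the script path, log location, and logging choices here are assumptions of mine, not anything from the thread), a ResumeProgram could capture that like so:

    #!/bin/bash
    # Hypothetical ResumeProgram wrapper; $1 is the hostlist Slurm wants powered up.
    NODELIST="$1"
    LOG=/var/log/slurm/resume_debug.log   # assumed log location
    {
        echo "=== $(date) resume request for: $NODELIST ==="
        env | sort                        # dump whatever is in the environment at this point
    } >> "$LOG"
    # ... actual power-up / provisioning logic would follow ...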
Do you have that resource handy? I looked into the cgroups documentation,
but I found very little in the way of tutorials on modifying the permissions.
On Mon, May 20, 2019 at 2:45 AM John Hearns wrote:
> Two replies here.
> First off for normal user logins you can direct them into a cgroup - I
> looked into this about a year ago and it was actually quite easy.
All,
I know the argument passed to ResumeProgram is the node to be started,
but is there any way to access job info from within that script?
In particular, the number of nodes and cores actually requested.
Brian Andrus
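One hedged way to get at the requested node and core counts from inside ResumeProgram is to ask the controller about the jobs tied to the node being resumed; the squeue options below are standard, though whether the triggering jobs are already mapped to that node at this point can vary:

    # Hedged sketch: list jobs associated with the node being resumed ($1).
    # %A = job id, %u = user, %D = node count, %C = CPU count.
    NODE="$1"
    squeue --nodelist="$NODE" --states=CONFIGURING,RUNNING \
           --format="%A %u %D %C" --noheader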
On Mon, May 20, 2019 at 2:59 PM wrote:
> I did test setting GrpTRESRunMins=cpu=N for each user + account
> association, and that does appear to work. Does anyone know of any other
> solutions to this issue?
No. Your solution is what we currently do. A "...PU" would be a nice, tidy
addition.
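For reference, the per-association limit mentioned above can be set with sacctmgr roughly like this (the user, account, and value below are placeholders of mine):

    # Placeholder user/account/value; adjust to the site's association tree.
    sacctmgr modify user where name=alice account=proj1 \
             set GrpTRESRunMins=cpu=1000000
    # Verify what was stored:
    sacctmgr show assoc where user=alice format=User,Account,GrpTRESRunMins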
Esteemed Slurm users,
I am trying to mitigate a use case where jobs can be submitted for the
maximum number of nodes allowed and for the maximum time, slipping in
while the queue itself is briefly empty. The general idea is that users are
allowed to use up to half of the nodes in the QoS, and jobs a
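A hedged sketch of how a cap along those lines might be expressed, assuming a QOS named "normal" and placeholder node counts (whether MaxTRESPerUser, GrpTRES, or both is appropriate depends on whether the half-of-the-nodes limit is per user or for the QOS as a whole):

    # Placeholder QOS name and node counts for a 64-node pool:
    # each user may hold at most 32 nodes at once under this QOS.
    sacctmgr modify qos where name=normal set MaxTRESPerUser=node=32
    # Optionally cap the QOS as a whole as well:
    sacctmgr modify qos where name=normal set GrpTRES=node=64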
Why are you sshing into the compute node compute-0-2 ???
On the head node named rocks7:
srun -c 1 --partition RUBY --account y8 --mem=1G xclock
On Mon, 20 May 2019 at 16:07, Mahmood Naderan wrote:
> Hi
> Although proper configuration has been defined as below
>
> [root@rocks7 software]# grep RUBY /etc/slurm/parts
Hi
Although proper configuration has been defined as below
[root@rocks7 software]# grep RUBY /etc/slurm/parts
PartitionName=RUBY AllowAccounts=y4,y8 Nodes=compute-0-[1-4]
[root@rocks7 software]# sacctmgr list association
format=account,"user%20",partition,grptres,maxwall | grep kouhikamali3
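A hedged way to cross-check whether the user's association actually matches the partition's AllowAccounts list (names taken from the commands above; output will vary by site):

    scontrol show partition RUBY | grep -i AllowAccounts
    sacctmgr show assoc where user=kouhikamali3 \
             format=Cluster,Account,User,Partition,GrpTRES,MaxWall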
Two replies here.
First off for normal user logins you can direct them into a cgroup - I
looked into this about a year ago and it was actually quite easy.
As I remember there is a service or utility available which does just that.
Of course the user cgroup would not have
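I can't say whether it is the utility John has in mind, but on systemd-based machines one way to steer a login session into a constrained cgroup is to set resource properties on the per-user slice; a rough sketch, with the UID and limits as placeholders (MemoryMax assumes cgroup v2; older setups use MemoryLimit):

    # Constrain the already-logged-in user with UID 1000 (placeholder values):
    systemctl set-property user-1000.slice CPUQuota=50% MemoryMax=4G
    # Inspect the resulting slice/cgroup:
    systemctl status user-1000.slice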
Expanding on my theme, it
This doesn't directly answer your question, but in Feb last year on the ML
there was a discussion about limiting user resources on login nodes
("Stopping compute usage on login nodes"). Some of the suggestions
included the use of cgroups to do so, and it's possible that those methods
could be extended to cover this.
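If it helps, a hedged sketch of making such limits persistent for every login session on a login node, assuming a reasonably recent systemd that supports truncated-prefix drop-ins (the path and values are placeholders of mine):

    # Hypothetical drop-in applying to every user-UID.slice (newer systemd;
    # older versions need per-UID drop-ins or pam/cgroup tooling instead):
    mkdir -p /etc/systemd/system/user-.slice.d
    cat > /etc/systemd/system/user-.slice.d/90-limits.conf <<'EOF'
    [Slice]
    CPUQuota=200%
    MemoryMax=8G
    TasksMax=512
    EOF
    systemctl daemon-reload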