On 04/16/2018 08:20 PM, David Rodríguez Galiano wrote:
Dear Slurm community,
I am a sysadmin who needs to make a fresh installation of Slurm.
When visiting the download website, I can see two different versions.
The first is 17.02.10 and the second one is 17.11.5. I have not found
information on
On Tuesday, 17 April 2018 4:20:10 AM AEST David Rodríguez Galiano wrote:
> I am a sysadmin who needs to make a fresh installation of Slurm.
> When visiting the download website, I can see two different versions.
> The first is 17.02.10 and the second one is 17.11.5. I have not found
> information
On Tuesday, 17 April 2018 12:52:04 AM AEST De Giorgi Jean-Claude wrote:
> According to the man page, I should get these headers:
> Allocated, Associations, Cluster, Count, CPUTime, End, Flags, Idle, Name,
> Nodes, ReservationId, Start, TotalTime
I suspect you're misreading the manual page, you're
Hi Jean-Claude,
Within an ugly Perl script (since dictionaries are easy in Perl), I run:
sreport -n -p -M "$cluster" reservation Utilization start=$startdate end=$enddate -t hours Format=Name,ReservationID,Associations,TotalTime,Nodes,Allocated,Start,End
and then
sacct -n -a -M "$clu
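The sacct call is cut off above; purely as a sketch of the shape of such a per-job query (the flags and field list here are assumptions, not necessarily what the original script uses), it could look like:

# Hypothetical sketch, not the poster's actual command.
sacct -n -p -a -X -M "$cluster" -S "$startdate" -E "$enddate" \
      --format=JobID,User,Account,Elapsed,CPUTimeRAW,State

-X restricts the output to job allocations rather than individual steps.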
Thanks Kilian!
On 04/16/2018 02:15 PM, Kilian Cavalotti wrote:
Hi Andy,
On Mon, Apr 16, 2018 at 8:43 AM, Andy Riebs wrote:
I hadn't realized that jobs can be scheduled to run on a node that is still
in "completing" state from an earlier job. We occasionally use epilog
scripts that can take 3
Hello,
I have Slurm 17.11 installed on a 64-core server. My 9 partitions are set with
OverSubscribe=NO. I would expect that when all 64 cores are assigned to jobs,
Slurm would just put new jobs in PENDING state. But it starts running new jobs
so that more than 64 cores are assigned. Looking at
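Whether "all 64 cores assigned" actually blocks further jobs typically depends on the select plugin accounting for individual cores or CPUs (select/cons_res) rather than whole nodes (select/linear). A minimal sketch of the relevant slurm.conf lines, with node and partition names as placeholders rather than the poster's real configuration:

# Placeholder names and values, for illustration only.
SelectType=select/cons_res
SelectTypeParameters=CR_Core
NodeName=node01 CPUs=64 State=UNKNOWN
PartitionName=part1 Nodes=node01 OverSubscribe=NO State=UP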
Dear Slurm community,
I am a sysadmin who needs to make a fresh installation of Slurm.
When visiting the download website, I can see two different versions.
The first is 17.02.10 and the second one is 17.11.5. I have not found
information on what version to use.
The latest version fixes some error
Hi Andy,
On Mon, Apr 16, 2018 at 8:43 AM, Andy Riebs wrote:
> I hadn't realized that jobs can be scheduled to run on a node that is still
> in "completing" state from an earlier job. We occasionally use epilog
> scripts that can take 30 seconds or longer, and we really don't want the
> next job t
On Mon, Apr 16, 2018 at 6:35 AM, wrote:
>
> According to the above, I have the backfill scheduler enabled with CPUs and
> Memory configured as
> resources. I have 56 CPUs and 256GB of RAM in my resource pool. I would
> expect that the backfill
> scheduler attempts to allocate the resources in orde
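For reference, a setup along the lines described (backfill scheduling with CPUs and memory as consumable resources) usually rests on slurm.conf entries like the following; the values are illustrative placeholders, not the poster's actual file:

# Illustrative placeholders only; RealMemory is in megabytes.
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory
NodeName=node01 CPUs=56 RealMemory=256000 State=UNKNOWN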
I hadn't realized that jobs can be scheduled to run on a node that is
still in "completing" state from an earlier job. We occasionally use
epilog scripts that can take 30 seconds or longer, and we really don't
want the next job to start until the epilog scripts have completed.
Other than codin
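The message is cut off above; for context, the slurm.conf settings that usually come up for "do not start the next job until the epilog has finished" are the Epilog itself and CompleteWait, which holds off scheduling while any job is still in the COMPLETING state. A minimal sketch with placeholder values:

# Placeholder path and value, not the poster's configuration.
Epilog=/etc/slurm/epilog.sh
CompleteWait=60   # seconds to delay scheduling while jobs are COMPLETING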
Here it is,
The command line to get the previous reservation is:
sreport reservation utilization start=2018-02-10T10:00:00
According to the man page, I should get these headers:
Allocated, Associations, Cluster, Count, CPUTime, End, Flags, Idle, Name,
Nodes, ReservationId, Start, TotalTime
But I
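The man page's list enumerates the available fields; specific columns can be requested explicitly with the Format= option. A sketch using the field names quoted above (illustrative, not necessarily the command that was run):

sreport reservation Utilization start=2018-02-10T10:00:00 \
        Format=Cluster,Name,ReservationId,Start,End,Allocated,Idle,TotalTime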
Hello,
Some days ago, I posted a question about a job_submit.lua I was
writing to limit "srun" to only one node and one core. My script was
this:
function slurm_job_submit(job_desc, part_list, submit_uid)
    local partition = "interactive"
Hi,
I'm having some trouble with resource allocation: based on how I understood the
documentation and applied it to the config file, I expect behavior that does
not happen.
Here is the relevant excerpt from the config file:
SchedulerType=sched/backfill