[slurm-users] Scheduling GPUS

2019-11-07 Thread Mike Mosley
Greetings all: I'm attempting to configure the scheduler to schedule our GPU boxes but have run into a bit of a snag. I have a box with two Tesla K80s. With my current configuration, the scheduler will schedule one job on the box, but if I submit a second job, it queues up until the first one finishes.
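A common cause of this symptom is that the GPUs are not defined as a consumable GRES, so Slurm hands out whole nodes. A minimal sketch of a per-GPU setup, assuming a hypothetical node gpu01, select/cons_tres (available in 19.05), and that the two K80 cards expose four GPU devices to the OS; CPU count, memory, and device paths below are illustrative:

    # slurm.conf (fragment)
    GresTypes=gpu
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core_Memory
    NodeName=gpu01 Gres=gpu:k80:4 CPUs=32 RealMemory=128000 State=UNKNOWN

    # gres.conf on gpu01 (each K80 card presents two GPU devices)
    Name=gpu Type=k80 File=/dev/nvidia[0-3]

With something like this in place, two jobs submitted with "sbatch --gres=gpu:1 ..." should run side by side on the node instead of queueing behind one another.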

Re: [slurm-users] OverMemoryKill Not Working?

2019-10-28 Thread Mike Mosley
…exceeds its memory limits". > > Best regards > Jürgen > > -- > Jürgen Salk > Scientific Software & Compute Services (SSCS) > Kommunikations- und Informationszentrum (kiz) > Universität Ulm > Telefon: +49 (0)731 50-22478 > Telefax: +49 (0)731 50-22471

Re: [slurm-users] OverMemoryKill Not Working?

2019-10-25 Thread Mike Mosley
…thread in the list archive if you search for > "How to automatically kill a job that exceeds its memory limits". > > Best regards > Jürgen > > -- > Jürgen Salk > Scientific Software & Compute Services (SSCS) > Kommunikations- und Informationszentrum (kiz) > Universität Ulm

Re: [slurm-users] OverMemoryKill Not Working?

2019-10-25 Thread Mike Mosley
Mark, Thanks for responding. Yes, it will constrain it to the amount of memory the user asked for; in fact, I have gotten that to work. That is not the behavior that we desire (at least initially). The test code I ran through (which just allocates chunks of RAM in a loop) would be *constrained* rather than killed.
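For reference, the cgroup-based setup that produces this "constrain" behavior looks roughly like the sketch below (fragments only, not a drop-in config). With ConstrainRAMSpace=yes the kernel caps the job's resident memory at the requested amount, so an allocation loop is throttled or pushed to swap rather than terminated; whether the OOM killer ever fires depends on the swap-related cgroup options:

    # slurm.conf (fragment)
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf
    ConstrainRAMSpace=yes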

Re: [slurm-users] OverMemoryKill Not Working?

2019-10-25 Thread Mike Mosley
…allocation tracking according to the documentation: > https://slurm.schedmd.com/cons_res_share.html > > Also, the line: > > #SBATCH --mem=1GBB > > contains "1GBB". Is this the same in your job script? > > Regards, > Ahmet M. > > On 24.10.2019 23:00, …
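The "1GBB" typo matters because sbatch's --mem option expects a size with a single unit suffix (K, M, G, or T). A minimal corrected test script, assuming a hypothetical memory-allocating test program ./memhog:

    #!/bin/bash
    #SBATCH --job-name=memtest
    #SBATCH --mem=1G            # single unit suffix, not "1GBB"
    #SBATCH --time=00:05:00

    # hypothetical test program that allocates RAM in a loop
    ./memhog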

[slurm-users] OverMemoryKill Not Working?

2019-10-24 Thread Mike Mosley
Hello, We are testing Slurm 19.05 on Linux RHEL 7.5+ with the intent to migrate to it from Torque/Moab in the near future. One of the things our users are used to is that when their jobs exceed the amount of memory they requested, the job is terminated by the scheduler. We realize that Slurm prefers …
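As discussed in the thread referenced above, in Slurm 19.05 the kill-on-exceed behavior is driven by the accounting-gather plugin rather than the older MemLimitEnforce option. A sketch of the relevant slurm.conf fragment (the sampling interval is illustrative):

    # slurm.conf (fragment) -- polling-based memory enforcement
    JobAcctGatherType=jobacct_gather/linux
    JobAcctGatherFrequency=30          # sample task memory every 30 s
    JobAcctGatherParams=OverMemoryKill # kill steps that exceed their request

Because enforcement is based on sampled usage, a job that spikes between samples may briefly exceed its request before being killed; cgroup-based constraint (cgroup.conf) is the alternative approach.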