Re: [slurm-users] Allocate more memory

2018-02-07 Thread Miguel Gutiérrez Páez
What if you increase swap memory? Virtual memory would increase as well, and maybe the app would run. Of course, even if it works, the performance could be very, very poor. On Wed., Feb 7, 2018, 16:53, david vilanova wrote: > Thanks all for your comments, I will look into that > > On Wed, 7
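Miguel's suggestion could be tried on a compute node roughly as follows (a minimal sketch; the size is illustrative, and this requires root on the node, which most cluster users will not have):

```shell
# Create a 2G swap file and enable it (illustrative size; requires root)
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon --show   # verify the new swap area is active
```

Note that depending on how the cluster enforces memory limits (e.g. cgroups with swap constrained), a job exceeding its allocation may still be killed before swap helps.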

Re: [slurm-users] Allocate more memory

2018-02-07 Thread david vilanova
Thanks all for your comments, I will look into that. On Wed, Feb 7, 2018 at 16:37, Loris Bennett wrote: > > I was making the unwarranted assumption that you have multiple processes. > So if you have a single process which needs more than 2GB, Ralph is of > course right and there is nothing

Re: [slurm-users] Allocate more memory

2018-02-07 Thread Loris Bennett
I was making the unwarranted assumption that you have multiple processes. So if you have a single process which needs more than 2GB, Ralph is of course right and there is nothing you can do. However, you are using R, so, depending on your problem, you may be able to make use of a package like Rmpi
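A distributed-memory R job of the kind Loris describes might be submitted like this (a minimal sketch; the task count, per-task memory, and script name are illustrative, and the R script itself must use Rmpi or a similar package to split the work):

```shell
#!/bin/bash
#SBATCH --ntasks=4           # four R processes, possibly spread across nodes
#SBATCH --mem-per-cpu=1000M  # 4 x 1G = 4G aggregate, within the 2G-per-node limit
srun R CMD BATCH my_rmpi_script.R   # hypothetical script using Rmpi
```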

Re: [slurm-users] Allocate more memory

2018-02-07 Thread Krieger, Donald N.
use slurm. Best – Don From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of david vilanova Sent: Wednesday, February 7, 2018 10:23 AM To: Slurm User Community List Subject: Re: [slurm-users] Allocate more memory Yes, when working with the human genome you can easily go up

Re: [slurm-users] Allocate more memory

2018-02-07 Thread david vilanova
[mailto:slurm-users-boun...@lists.schedmd.com] On > Behalf Of r...@open-mpi.org > Sent: Wednesday, February 7, 2018 10:03 AM > To: Slurm User Community List > Subject: Re: [slurm-users] Allocate more memory > > Afraid not - since you don’t have any nodes that meet the 3G requir

Re: [slurm-users] Allocate more memory

2018-02-07 Thread Krieger, Donald N.
ng similar. Anyway - best - Don -Original Message- From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of r...@open-mpi.org Sent: Wednesday, February 7, 2018 10:03 AM To: Slurm User Community List Subject: Re: [slurm-users] Allocate more memory Afraid not - since

Re: [slurm-users] Allocate more memory

2018-02-07 Thread r...@open-mpi.org
Afraid not - since you don’t have any nodes that meet the 3G requirement, you’ll just hang. > On Feb 7, 2018, at 7:01 AM, david vilanova wrote: > > Thanks for the quick response. > > Should the following script do the trick? Meaning: use all required nodes to > have at least 3G total memory

Re: [slurm-users] Allocate more memory

2018-02-07 Thread david vilanova
Thanks for the quick response. Should the following script do the trick? Meaning: use all required nodes to have at least 3G total memory? Even though my nodes were set up with 2G each?

#SBATCH --array=1-10:1%10
#SBATCH --mem-per-cpu=3000m
srun R CMD BATCH myscript.R

thanks On 07/02/

Re: [slurm-users] Allocate more memory

2018-02-07 Thread r...@open-mpi.org
I’m afraid neither of those versions is going to solve the problem here - there is no way to allocate memory across nodes. Simple reason: there is no way for a process to directly address memory on a separate node - you’d have to implement that via MPI or shmem or some other library. > On Feb
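As a toy illustration of Ralph's point (not from the thread): two processes each hold their partial result in their own address space, and the only way to combine them is to communicate explicitly, here through files:

```shell
# Each background job computes a partial sum in its own address space;
# neither can read the other's variables, so results go through files.
seq 0 499   | awk '{s+=$1} END {print s}' > /tmp/part1 &
seq 500 999 | awk '{s+=$1} END {print s}' > /tmp/part2 &
wait
echo $(( $(cat /tmp/part1) + $(cat /tmp/part2) ))   # prints 499500
```

On a cluster the same constraint holds across nodes, except the communication layer is MPI, shmem, or similar rather than a shared file path.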

Re: [slurm-users] Allocate more memory

2018-02-07 Thread Loris Bennett
Loris Bennett writes: > Hi David, > > david martin writes: > >> >> Hi, >> >> I would like to submit a job that requires 3GB. The problem is that I have >> 70 nodes available, each node with 2GB memory. >> >> So the command sbatch --mem=3G will wait for resources to become available. >> >

Re: [slurm-users] Allocate more memory

2018-02-07 Thread Loris Bennett
Hi David, david martin writes: > > Hi, > > I would like to submit a job that requires 3GB. The problem is that I have 70 > nodes available, each node with 2GB memory. > > So the command sbatch --mem=3G will wait for resources to become available. > > Can I run sbatch and tell the cluster
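The distinction the thread converges on can be summarised in a sketch (illustrative values; the second form only helps if the application itself runs as multiple communicating processes, e.g. via MPI or Rmpi):

```shell
# Pends indefinitely on this cluster: --mem requests 3G on a single node,
# and no node has more than 2G.
#   sbatch --mem=3G job.sh

# May run: two tasks of 1.5G each, which Slurm can place on two 2G nodes.
#SBATCH --ntasks=2
#SBATCH --mem-per-cpu=1500M
srun ./my_distributed_app   # hypothetical MPI-aware binary
```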