What if you increase swap memory? Virtual memory would increase as well, and
maybe the app would run. Of course, even if it works, the performance could be
very poor.
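For reference, a minimal sketch of what adding swap on a node could look like
(file name and size are only examples, and this needs root on the compute node):

fallocate -l 2G /swapfile    # reserve a 2 GB file (or use dd if fallocate is unavailable)
chmod 600 /swapfile          # swap files must not be world-readable
mkswap /swapfile             # format it as swap space
swapon /swapfile             # enable it immediately

Note that if Slurm enforces job memory limits via cgroups, the job may not be
allowed to use this extra swap anyway.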
On Wed, 7 Feb 2018 at 16:53, david vilanova wrote:
> Thanks all for your comments, I will look into that.
Thanks all for your comments, I will look into that.
On Wed, 7 Feb 2018 at 16:37, Loris Bennett wrote:
>
> I was making the unwarranted assumption that you have multiple processes.
> So if you have a single process which needs more than 2GB, Ralph is of
> course right and there is nothing you can do.
I was making the unwarranted assumption that you have multiple processes.
So if you have a single process which needs more than 2GB, Ralph is of
course right and there is nothing you can do.
However, you are using R, so, depending on your problem, you may be able
to make use of a package like Rmpi [...] use Slurm.
Best – Don
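For what it's worth, a very rough sketch of the kind of multi-node submission
that something like Rmpi (or pbdMPI) makes possible; the script name, task
counts and memory figures are only placeholders, and myscript.R itself would
have to be written so that each MPI rank works on its own slice of the data:

#!/bin/bash
#SBATCH --nodes=2              # spread the ranks over two of the 2GB nodes
#SBATCH --ntasks=4             # four MPI ranks in total
#SBATCH --mem-per-cpu=1000m    # per-rank memory; each rank must still fit on a single node
srun Rscript myscript.R        # launches one R process per rank; the R code splits the work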
From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of david vilanova
Sent: Wednesday, February 7, 2018 10:23 AM
To: Slurm User Community List
Subject: Re: [slurm-users] Allocate more memory
Yes, when working with the human genome you can easily go up [...]
[...]ng similar.
Anyway - best - Don
-Original Message-
From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of r...@open-mpi.org
Sent: Wednesday, February 7, 2018 10:03 AM
To: Slurm User Community List
Subject: Re: [slurm-users] Allocate more memory
Afraid not - since you don’t have any nodes that meet the 3G requirement,
you’ll just hang.
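As a quick sanity check, the memory Slurm has configured for each node can be
listed like this (the node name below is just a placeholder):

sinfo -N -o '%N %m'                           # RealMemory per node, in MB
scontrol show node node001 | grep RealMemory  # details for one node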
> On Feb 7, 2018, at 7:01 AM, david vilanova wrote:
>
> Thanks for the quick response.
>
> Should the following script do the trick? Meaning, use all required nodes to
> have at least 3G total memory?
Thanks for the quick response.
Should the following script do the trick? Meaning, use all required
nodes to have at least 3G total memory, even though my nodes were set up
with 2G each?
#SBATCH --array=1-10:1%10
#SBATCH --mem-per-cpu=3000m
srun R CMD BATCH myscript.R
thanks
I’m afraid neither of those versions is going to solve the problem here - there
is no way to allocate memory across nodes.
Simple reason: there is no way for a process to directly address memory on a
separate node - you’d have to implement that via MPI or shmem or some other
library.
Hi David,
david martin writes:
>
> Hi,
>
> I would like to submit a job that requires 3GB. The problem is that I have 70
> nodes available, each node with 2GB of memory.
>
> So the command sbatch --mem=3G will wait for resources to become available.
>
> Can I run sbatch and tell the cluster [...]
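As a rough illustration of the behaviour being described (the job script name
is just a placeholder):

sbatch --mem=3G myscript.sh      # on a cluster with only 2GB nodes this typically either
                                 # pends indefinitely or is rejected outright, depending
                                 # on the Slurm configuration
squeue -u $USER -o '%i %T %r'    # shows the job id, its state and the reason it is pending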