What if you increase swap space? Virtual memory would increase as well,
and the app might run. Of course, even if it works, the performance could
be very poor.
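If you want to try that, setting up swap on a node usually looks something like the lines below; treat them as a rough sketch, since the file path and size are just placeholders and your admins may not want swap on compute nodes at all.
# Rough sketch only: create and enable a 2 GB swap file (run as root;
# the path and size are placeholders, pick whatever suits your nodes)
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile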
On Wed, Feb 7, 2018, 16:53, david vilanova wrote:
Thanks all for your comments, I will look into that.
On Wed, Feb 7, 2018 at 16:37, Loris Bennett wrote:
I was making the unwarranted assumption that you have multiple processes.
So if you have a single process which needs more than 2GB, Ralph is of
course right and there is nothing you can do.
However, you are using R, so, depending on your problem, you may be able
to make use of a package like Rmpi to spread the work, and with it the
memory, across processes on several nodes.
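A batch script for such a job might look roughly like the sketch below; the module names, the script name, and the launch line are only guesses, since the details depend on how MPI and Rmpi are installed on your cluster. The point is that each rank only has to fit on a 2GB node, so the total memory across ranks can exceed 3GB.
#!/bin/bash
#SBATCH --ntasks=4              # four MPI ranks, which Slurm may spread over several nodes
#SBATCH --mem-per-cpu=1500M     # each rank stays within a 2GB node
#SBATCH --time=01:00:00
# module names are site-specific guesses
module load openmpi R
# myscript_rmpi.R is a placeholder for an R script that uses Rmpi
srun Rscript myscript_rmpi.R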
Hi David,
You might consider running your more memory-intensive jobs on the XSEDE machine
at the Pittsburgh Supercomputing Center. It's called Bridges.
Bridges has a set of 42 large-memory (LM) nodes, each with 3 TB of memory.
Nine of the nodes have 64 cores; the rest each have 80.
Yes, when working with the human genome you can easily go up to 16GB.
On Wed, Feb 7, 2018 at 16:20, Krieger, Donald N. wrote:
Sorry for jumping in without full knowledge of the thread.
But it sounds like the key issue is that each job requires 3 GB.
Even if that's true, won't jobs start on cores with less memory and then just
page?
Of course, as the previous post states, you must tailor your Slurm request to
the physical memory actually available on the nodes.
Afraid not - since you don't have any nodes that meet the 3GB requirement,
the job will just hang.
> On Feb 7, 2018, at 7:01 AM, david vilanova wrote:
Thanks for the quick response.
Should the following script do the trick, i.e. use as many nodes as required
to get at least 3GB of total memory, even though my nodes were set up with
2GB each?
#SBATCH --array=1-10:1%10
#SBATCH --mem-per-cpu=3000m
srun R CMD BATCH myscript.R
Thanks
I'm afraid neither of those versions is going to solve the problem here:
there is no way to allocate memory across nodes.
The simple reason is that there is no way for a process to directly address
memory on a separate node; you'd have to implement that yourself via MPI,
shmem, or some other library.
Loris Bennett writes:
Hi David,
david martin writes:
> Hi,
>
> I would like to submit a job that requires 3GB. The problem is that I have 70
> nodes available, each node with 2GB of memory.
>
> So the command sbatch --mem=3G will wait for resources to become available.
>
> Can I run sbatch and tell the cluster to use 3GB out of the 70GB available,
> or is that not possible?
Hi,
I would like to submit a job that requires 3GB. The problem is that I
have 70 nodes available, each node with 2GB of memory.
So the command sbatch --mem=3G will wait for resources to become available.
Can I run sbatch and tell the cluster to use 3GB out of the 70GB available,
or is that not possible?
We use GrpTRESRunMins for this, with the idea that it's OK for users to
occupy lots of resources with short-running jobs, but not so much with
long-running jobs.
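For example, something like the line below (the QOS name and the limit are made-up values, not our actual ones):
# Illustrative only: cap the sum over running jobs of
# (allocated CPUs x remaining minutes) for the "normal" QOS
sacctmgr modify qos normal set GrpTRESRunMins=cpu=1440000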
On Wed, Feb 7, 2018 at 8:41 AM, Bill Barth wrote:
Of course, Matteo. Happy to help. Our job completion script is:
#!/bin/bash
OUTFILE=/var/log/slurm/tacc_jobs_completed
echo "$JOBID:$UID:$ACCOUNT:$BATCH:$START:$END:$SUBMIT:$PARTITION:$LIMIT:$JOBNAME:$JOBSTATE:$NODECNT:$PROCS" >> $OUTFILE
exit 0
and our config settings (from scontrol show config) wire that script in as
the job completion plugin.
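For reference, the parts of the config that matter for a script like that are JobCompType and JobCompLoc; the snippet below is only an example, with a made-up path rather than our real one:
# Hypothetical slurm.conf excerpt; the jobcomp/script plugin exports
# JOBID, UID, ACCOUNT, etc. into the script's environment
JobCompType=jobcomp/script
JobCompLoc=/etc/slurm/job_completed.sh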
Hi Yair,
Thanks for the information. I guess I'll just have to try it out.
I did have multiple QOS working fine. However, after I modified one of
the QOS the users suddenly couldn't use any of the non-default QOS.
Maybe I'll have a look at the database myself too.
Cheers,
Loris
Yair Yarom writes:
Hi,
From my experience - yes, new associations will be associated with the
QOS of the account.
I believe it doesn't explicitly modify all the associations, it just
notifies you which associations will be affected. Looking at my database
suggests that indeed most associations don't have an explicit QOS set.
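Rather than editing the database by hand, you can usually see and fix the per-association QOS with sacctmgr; the account and user names below are just placeholders:
# Show which QOS each association has set
sacctmgr show assoc format=cluster,account,user,qos,defaultqos
# Add a QOS to one user's association without touching the others
sacctmgr modify user name=alice account=genomics set qos+=highmem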