Looking at your script, there’s a chance that by only specifying ntasks instead
of ntasks-per-node or a similar parameter, you might have allocated 8 CPUs on
one node, and the remaining 4 on another.
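For a shared-memory Gaussian run you generally want all the CPUs on a single node, so something along these lines in the batch script may help (the numbers are just an illustration):

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12

With --ntasks=12 on its own, Slurm is free to spread the tasks across nodes.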
Regardless, I’ve dug into my Gaussian documentation, and here’s my test case
for you to see wha
I bet all on here would just LOVE the AMD Fangio ;-)
http://www.cpu-world.com/news_2012/2012111801_Obscure_CPUs_AMD_Opteron_6275.html
Hint - quite a few of these were sold!
On 11 July 2018 at 11:04, Mahmood Naderan wrote:
My fault. One of the other nodes was in my mind!
The node which is running g09 is
[root@compute-0-3 ~]# ps aux | grep l502
root     11198  0.0  0.0 112664    968 pts/0  S+   13:31    0:00 grep --color=auto l502
nooriza+ 30909  803  1.4 21095004 947968 ?    Rl   Jul10 6752:47 /usr/local/chem/g
Mahmood, please please forgive me for saying this. A quick Google shows
that Opteron 61xx have eight or twelve cores.
Have you checked that all the servers have 12 cores?
I realise I am appearing stupid here.
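A quick way to check (assuming you can get onto the nodes) is something like:

lscpu | grep -E 'Socket|Core|Thread|^CPU\(s\)'
slurmd -C

lscpu reports sockets, cores per socket and threads per core as the OS sees them, and slurmd -C prints the CPU layout that slurmd itself detects, which is what Slurm schedules against.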
On 11 July 2018 at 10:39, Mahmood Naderan wrote:
>Try running ps -eaf --forest while a job is running.
noor   30907 30903  0 Jul10 ?  00:00:00    \_ /bin/bash /var/spool/slurmd/job00749/slurm_script
noor   30908 30907  0 Jul10 ?  00:00:00        \_ g09 trimmer.gjf
noor   30909 30908 99 Jul10 ?  4-13:00:21          \_ /usr/local/chem
Another thought - are we getting mixed up between hyperthreaded and
physical cores here?
I don't see how 12 hyperthreaded cores translates to 8 though - it would be
6!
On 11 July 2018 at 10:30, John Hearns wrote:
Mahmood,
I am sure you have checked this. Try running ps -eaf --forest while a job is running.
I often find the --forest option helps to understand how batch jobs are
being run.
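If the full process tree is too noisy, narrowing it to the job's owner also works, e.g. (substitute the real username):

ps -fu <username> --forest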
On 11 July 2018 at 09:12, Mahmood Naderan wrote:
>Check the Gaussian log file for mention of its using just 8 CPUs-- just because
>there are 12 CPUs available doesn't mean the program uses all of them. It will
>scale-back if 12 isn't a good match to the problem as I recall.
Well, in the log file, it says
*
Check the Gaussian log file for mention of its using just 8 CPUs-- just because
there are 12 CPUs available doesn't mean the program uses all of them. It will
scale-back if 12 isn't a good match to the problem as I recall.
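For example, grepping near the top of the job's output for the processor count should show what Gaussian actually chose (trimmer.log is assumed here from the trimmer.gjf input in the process listing above; adjust to the real file name):

grep -i processors trimmer.log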
/*!
@signature Jeffrey Frey, Ph.D
@email f...@udel.edu
@source iPh
Gaussian? Look for NProc=8 or similar lines (NProcShared, could be other
options, too) in their input files. There could also be some system-wide
parallel settings for Gaussian, but that wouldn't be the default.
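For instance, the Link 0 section at the top of the input would need something like this to ask for all 12 cores (a sketch, not the user's actual input):

%NProcShared=12
%Mem=4GB
# B3LYP/6-31G(d) opt

A quick grep -i nproc *.gjf in the user's directory would show what is currently set.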
On Jul 10, 2018, at 2:04 PM, Mahmood Naderan wrote:
Hi,
I see that although I have specified a CPU limit of 12 for a user, his job
only utilizes 8 cores.
[root@rocks7 ~]# sacctmgr list association format=partition,account,user,grptres,maxwall
 Partition    Account       User    GrpTRES    MaxWall
---------- ---------- ---------- ---------- ----------
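To cross-check what Slurm actually allocated to the job, as opposed to what Gaussian is using, something like this helps (job id 749 taken from the job00749 slurm_script path further up the thread):

scontrol show job 749 | grep -i -E 'numcpus|tres'
sacct -j 749 --format=JobID,AllocCPUS,NodeList

If AllocCPUS comes back as 12, the Slurm side is fine and the bottleneck is in the Gaussian input rather than in the scheduler.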