Hi Steffan,

I'm not sure, but perhaps the output from 'numactl --hardware' is more
consistent and easier to parse with a script or similar?
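Something like the following (an untested sketch; the exact output format may
vary with the numactl version, and on Epyc the reported NUMA node count depends
on the NPS BIOS setting rather than just the socket count) could be scripted:

    numactl --hardware | grep '^available:'         # number of NUMA nodes, e.g. "available: 2 nodes (0-1)"
    numactl --hardware | grep '^node [0-9]* cpus:'  # logical CPUs belonging to each NUMA node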

Kind Regards,
Stuart

-----Original Message-----
From: slurm-users <slurm-users-boun...@lists.schedmd.com> On Behalf Of Steffen 
Grunewald
Sent: 22 December 2021 15:38
To: Slurm users <slurm-users@lists.schedmd.com>
Subject: [slurm-users] Node specs for Epyc 7xx3 processors?




Hello,

I'm wondering whether there is some rule of thumb to translate the core config
listed in https://en.wikipedia.org/wiki/Epyc to the node information Slurm
expects in "Sockets=x CoresPerSocket=y"? ("ThreadsPerCore=2" is clear.)

We'll be getting Epyc 7313 and 7513 machines, and perhaps add a single 7713
one. The "lscpu" outputs are wildly different, although the total number of
cores is correct.
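The fields that should matter here (assuming the usual util-linux lscpu field
names; other versions may label them differently) would be roughly:

    lscpu | egrep '^(Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core):'
    # Socket(s)           -> Sockets=
    # Core(s) per socket  -> CoresPerSocket=
    # Thread(s) per core  -> ThreadsPerCore=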

Will I have to wait until the machines have arrived and do some experiments,
or has someone already retrieved the right numbers and is willing to share?

Thanks,
 Steffen

--
Steffen Grunewald, Cluster Administrator
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
Am Mühlenberg 1 * D-14476 Potsdam-Golm * Germany
~~~
Fon: +49-331-567 7274
Mail: steffen.grunewald(at)aei.mpg.de
~~~