Milan CPUs aren't officially supported on anything earlier than RHEL 8.3, but
there's anecdotal evidence that RHEL 7 will run on Milan CPUs. If that's
true, is anyone on the list doing so who can confirm?
On 28/6/22 11:44 am, leo camilo wrote:
My time indeed has a cost, hence I will favour a "cheap and dirty"
solution to get the ball rolling and try something fancy later.
One thing I'd add is that some sort of cluster management system can be very
handy, letting you manage things as a whole.
Thanks Robert,
You have given me a lot to think about.
Most of our nodes have around 250 GB SSDs that are largely unpopulated, so I am
guessing there is no harm in just installing the libraries on every node
with Ansible. Also, in our department we have a wealth of old HDDs we could
repurpose.
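As a rough illustration of that install-it-on-every-node idea (purely a sketch,
not from the thread; the node names and the /opt/sw path are made up, and an
Ansible copy/synchronize task would do the same job):

#!/usr/bin/env python3
# Sketch only: push the library/module tree from the front node to each
# compute node with rsync over ssh. Hostnames and paths are placeholders.
import subprocess

NODES = [f"node{i:02d}" for i in range(1, 8)]   # e.g. node01 .. node07
TREE = "/opt/sw/"                               # trailing slash: sync contents

for node in NODES:
    print(f"syncing {TREE} -> {node}:{TREE}")
    subprocess.run(["rsync", "-a", "--delete", TREE, f"{node}:{TREE}"],
                   check=True)

Run from the front node whenever the stack changes; it is essentially what a
configuration-management run would automate.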
On Tue, 28 Jun 2022, leo camilo wrote:
I see, so if I understand it correctly I have to make sure that there is a
copy of the library, environments and modules on every computational node?
I am wondering if I can get around it by using NFS.
The answer is yes, although it is a bit of a pain.
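Whichever way the files get there (local copies or an NFS mount), it helps to
be able to confirm that every node sees the same module tree. A minimal
sketch, assuming passwordless ssh from the front node; the hostnames and the
/opt/modulefiles path are made up:

#!/usr/bin/env python3
# Sketch: fingerprint the module tree on each node by hashing the sorted
# file listing; a differing hash means that node has a different set of files.
import subprocess

NODES = ["frontnode"] + [f"node{i:02d}" for i in range(1, 8)]
MODULE_TREE = "/opt/modulefiles"                 # placeholder path

def fingerprint(host: str) -> str:
    cmd = f"find {MODULE_TREE} -type f | sort | sha256sum"
    result = subprocess.run(["ssh", host, cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout.split()[0]

reference = fingerprint(NODES[0])
for node in NODES[1:]:
    status = "ok" if fingerprint(node) == reference else "OUT OF SYNC"
    print(f"{node}: {status}")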
On 28/06/2022 10:32, leo camilo wrote:
[...]
# Question:
So here is the question: is there a way to cache the front node's
libraries and environment onto the computational nodes when a Slurm job
is created?
Will environment modules do that? If so, how?
Hi, Leo.
Have you considered using Ql
Dear all,
What we are doing is exporting our software stack via a shared file
system, for example NFS (not a good idea for larger clusters),
Spectrum Scale (formerly known as GPFS), Lustre, Ceph... the list is long.
To ease your pain with building architecture-specific software and a
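Whatever shared filesystem carries the stack, one small thing that helps is a
node-local check that it is actually mounted before jobs land on a node. A
hedged sketch (the mount point, marker file, and the idea of wiring it into
Slurm's HealthCheckProgram are assumptions, not something from this thread):

#!/usr/bin/env python3
# Sketch: exit non-zero if the shared software stack is not mounted on this
# node. Could be called from a health-check hook; the paths are placeholders.
import os
import sys

MOUNTPOINT = "/opt/sw"                            # where the shared stack lives
MARKER = os.path.join(MOUNTPOINT, ".stack_ok")    # empty marker file on the share

if not os.path.ismount(MOUNTPOINT) or not os.path.exists(MARKER):
    print(f"{MOUNTPOINT} missing or incomplete on this node", file=sys.stderr)
    sys.exit(1)
sys.exit(0)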
For what it’s worth, I use an Easy8-licensed Bright cluster (now part of NVIDIA),
and I continually find I need to make sure the module packages, environment
variables, etc. are installed/set in the images that are deployed to the nodes.
Bright supports Slurm, Kubernetes, Jupyter and a lot more.
Richard
I see, so if I understand it correctly I have to make sure that there is a
copy of the library, environments and modules on every computational node?
I am wondering if I can get around it by using NFS.
On Tue, 28 Jun 2022 at 11:42, Richard wrote:
> For what it’s worth I use an easy8 licensed br
Hi Leo,
A bit more clarification on this cluster setup: do you have 1 primary node,
with the rest connecting to it in a diskless fashion and booting off the
primary (front) node?
Regards,
Jonathan
From: Beowulf on behalf of leo camilo
Sent: 28 June 2022 11:32
To: Beowulf mailing list
# Background
So, I am building this small Beowulf cluster for my department. I have it
running on Ubuntu servers: a front node and, at the moment, 7 x 16-core
nodes. I have installed Slurm as the scheduler and I have been
procrastinating on setting up environment modules.
In any case, I ran into this part