Hi all,
we have a Slurm cluster running on nodes with 2x18 cores, 256 GB RAM, and
8 GPUs. Is there a way to reserve a bare minimum of two CPUs and 8 GB RAM
for each GPU, so that a high-CPU job cannot render the GPUs "unusable"?
Thanks in advance
Quirin
--
Quirin Lohr
Systemadministration
Technische U
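One possible approach, sketched here only as an assumption rather than an answer from this thread: overlap a CPU partition and a GPU partition on the same nodes and cap what CPU-only jobs may take per node. The node and partition names and the exact figures below are placeholders for 2x18-core, 256 GB, 8-GPU nodes.

    # slurm.conf sketch (assumed names/values); needs a consumable-resource
    # SelectType (cons_res/cons_tres) with memory tracking to be effective.
    NodeName=gpu[01-04] CPUs=36 RealMemory=256000 Gres=gpu:8
    # Cap CPU-only jobs at 20 of the 36 cores and ~190 GB per node, so
    # roughly 2 cores and 8 GB stay free for each of the 8 GPUs.
    PartitionName=cpu Nodes=gpu[01-04] MaxCPUsPerNode=20 MaxMemPerNode=190000
    PartitionName=gpu Nodes=gpu[01-04]

The slurm.conf documentation describes MaxCPUsPerNode as intended for exactly this kind of overlapping cpu/gpu partition setup.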
Yugendra Guvvala writes:
> Hi,
>
> We are bringing a new cluster online. We installed SLURM through Bright
> Cluster Manager; however, we are running into an issue here.
>
> We are able to run jobs as the root user and as users created using Bright
> Cluster Manager (cmsh commands). However, we use AD authent
Hi, We are bringing a new cluster online. We installed SLURM through Bright Cluster Manager; however, we are running into an issue here. We are able to run jobs as the root user and as users created using Bright Cluster Manager (cmsh commands). However, we use AD authentication for all our users, and when we try to
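Not an answer from the thread, just a generic first check when AD accounts fail where local (cmsh-created) accounts work: confirm the AD user resolves to the same UID/GID on the submit host and on a compute node. The user name below is a hypothetical placeholder.

    # Run on the submit host and on a compute node; the output should match.
    # "jdoe" stands in for an AD account.
    getent passwd jdoe
    id jdoe
    # slurmd has to resolve the submitting user's UID/GID on every node,
    # so lookup failures or mismatches here are a common cause of jobs
    # that only work for local users.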
Hi Leon,
it depends on how the admins configured Slurm. If they set select/linear,
you have no chance of getting just a core, as Slurm will schedule only
complete nodes.
In addition, you omitted to tell Slurm how much memory you need (at
least there is nothing in your script). Slurm will then also
Hi Leon,
If the partition is defined to run jobs exclusively, you always get a full node.
You'll have to either split up your analysis into independent subtasks to
be run in parallel by dividing the data, or make use of some Perl
parallelization package like Parallel::ForkManager to run steps of
Dear there,
I wrote an analysis program to analyze my data. The analysis takes around
twenty days for all the data of one species. When I submit my job to the
cluster, it always requests one node instead of one CPU. I am wondering how I
can request ONLY one CPU using the "sbatch" command? Below
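A minimal sketch of the kind of single-CPU request the replies above describe; the memory, time limit, and command are placeholder assumptions.

    #!/bin/bash
    #SBATCH --nodes=1            # one node ...
    #SBATCH --ntasks=1           # ... running a single task
    #SBATCH --cpus-per-task=1    # one CPU core for that task
    #SBATCH --mem=4G             # placeholder memory request
    #SBATCH --time=21-00:00:00   # placeholder limit for a roughly 20-day run
    perl analyze.pl              # placeholder for the actual analysis command

As the replies note, whether this actually yields a single core depends on how the cluster is configured: with select/linear or an exclusive partition, the whole node is allocated regardless.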
Thank you everyone, I successfully updated slurm using slurm.spec-legacy.
Cheers,
Colas
On 2019-02-11 11:54, Prentice Bisbal wrote:
Also, make sure no third-party packages have installed files in the
systemd directories. The legacy spec file still checks
for systemd files to be
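A hedged sketch of that kind of check, assuming a typical RHEL-style layout for the systemd unit-file directories:

    # List any Slurm-related unit files and show which package, if any,
    # owns them; files placed there by third-party packages are the ones
    # to watch for before rebuilding with the legacy spec file.
    ls -l /usr/lib/systemd/system/slurm* /etc/systemd/system/slurm* 2>/dev/null
    rpm -qf /usr/lib/systemd/system/slurm*.service 2>/dev/null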
This reminds me of a follow-up question I've been meaning to ask: is it just
the slurmctld daemons that need access to the shared SlurmDBD, or do all the
slurmd daemons on all the nodes need access as well?
On Tue, Feb 12, 2019 at 7:16 AM Antony Cleave wrote:
>
> You will need to be able to connect both clusters to the sa
You will need to be able to connect both clusters to the same SlurmDBD as
well, but if that is not a problem you are good to go.
Antony
On Tue, 12 Feb 2019 at 11:37, Gestió Servidors
wrote:
> Hi,
>
> I would like to know if the "federated clusters in SLURM" concept allows
> connecting two SLURM clu
Hi,
I would like to know if the "federated clusters in SLURM" concept allows
connecting two SLURM clusters that are completely separate (one
controller for each cluster, sharing only users via NFS and NIS).
Thanks.
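A rough sketch of the pieces involved, assuming both clusters already run their own slurmctld and can reach one shared slurmdbd host; the host, federation, and cluster names are placeholders.

    # In each cluster's slurm.conf, point accounting at the shared slurmdbd:
    AccountingStorageType=accounting_storage/slurmdbd
    AccountingStorageHost=dbd.example.org   # placeholder host name

    # Once both clusters are registered in that database, create the
    # federation with sacctmgr:
    sacctmgr add federation myfed clusters=clusterA,clusterB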
Hi, I am using Slurm version 17.11.3-2 on a small ROCKS 7 cluster. I have
two GPU nodes with NVIDIA driver 384.111 and the OpenCL library installed.
Moreover, in the /etc/OpenCL/vendors directory there are two files (nvidia.icd
and intel.icd). The files are attached below. When I submit a Slurm script
I ge