fine. It seems like something goes wrong during the initial network stack
configuration at startup, or something along those lines. I'm not really
sure where to begin troubleshooting; a bit of googling hasn't revealed
much either, unfortunately.
Any advice?
~Avery Grieve
They/Them/Theirs please!
University of Michigan
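If the failure really is a race with network setup at boot, one common fix is a systemd drop-in that orders the daemon after the network is actually online. A sketch, assuming the unit is named slurmd.service (verify with `systemctl cat slurmd`):

```ini
# /etc/systemd/system/slurmd.service.d/override.conf
# Create with: systemctl edit slurmd
# Waits for the network to be online, not merely configured.
[Unit]
After=network-online.target
Wants=network-online.target
```

After saving, run `systemctl daemon-reload` and reboot to test. Note that on Fedora, network-online.target only means much if NetworkManager-wait-online.service is enabled.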
Maybe a silly question, but where do you find the daemon logs or specify
their location?
~Avery Grieve
They/Them/Theirs please!
University of Michigan
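For the record, the daemons log wherever slurm.conf points them; if the *LogFile keys are unset, they log to syslog, so `journalctl -u slurmd` (or `-u slurmctld`) is the place to look. A self-contained sketch — the paths below are assumptions for illustration, not your actual config:

```shell
# Hypothetical slurm.conf excerpt written to /tmp for illustration;
# on a real node, check /etc/slurm/slurm.conf itself.
cat > /tmp/slurm.conf.example <<'EOF'
SlurmdLogFile=/var/log/slurm/slurmd.log
SlurmctldLogFile=/var/log/slurm/slurmctld.log
EOF
# These keys are where slurmd/slurmctld write their logs.
grep 'LogFile' /tmp/slurm.conf.example
```

If those keys are absent from your config, go straight to journalctl.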
On Mon, Dec 14, 2020 at 7:22 PM Alpha Experiment
wrote:
> Hi,
>
> I am trying to run slurm on Fedora 33. Upon boot the slurmd
Similar to John, my daemon starts if I just run the systemctl start command
following boot.
~Avery Grieve
They/Them/Theirs please!
University of Michigan
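As a stopgap while the root cause is unknown, a unit drop-in can also just keep retrying the start until the network is up. A sketch, again assuming the slurmd.service unit name:

```ini
# /etc/systemd/system/slurmd.service.d/retry.conf
[Unit]
# Allow several retries during the boot window before giving up.
StartLimitIntervalSec=300
StartLimitBurst=10

[Service]
Restart=on-failure
RestartSec=10
```

This masks the symptom rather than fixing the ordering, but it gets the node up without a manual `systemctl start` after every boot.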
On Mon, Dec 14, 2020 at 8:06 PM Luke Yeager wrote:
> What does your ‘slurmctld.service’ look like? You might want to add
> something to
Running the actual slurmctld command found in sbin
runs correctly with no critical errors.
I've tried to look into this, but can't find much on this problem, either
for Slurm specifically or for system services in general.
Any ideas?
Thanks,
~Avery Grieve
They/Them/Theirs please!
University of Michigan
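On Luke's question, this is roughly the shape of a stock slurmctld unit — the contents below are assumptions from memory, not your file; dump the real one with `systemctl cat slurmctld` and compare:

```ini
# Approximate sketch of a packaged slurmctld.service (verify locally).
[Unit]
Description=Slurm controller daemon
After=network-online.target munge.service
Wants=network-online.target

[Service]
Type=simple
EnvironmentFile=-/etc/sysconfig/slurmctld
ExecStart=/usr/sbin/slurmctld -D $SLURMCTLD_OPTIONS
Restart=on-failure
```

When the binary runs fine by hand but the service fails, the usual suspects are ordering (`After=`), environment differences, and SELinux denials — `journalctl -u slurmctld -b` will show which.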
ignored, so they're more annoying than anything.
Thank you!
~Avery Grieve
They/Them/Theirs please!
University of Michigan
On Thu, Dec 10, 2020 at 1:30 PM Luke Yeager wrote:
> The ubuntu package is here: https://packages.ubuntu.com/focal/libpmix-dev
>
>
>
> Yes, we rewrote t
Hey Chris,
No code to test; MPI works, just not when Slurm is the one launching it.
Thanks for your help, I think I'm going to be recompiling from source.
~Avery Grieve
They/Them/Theirs please!
University of Michigan
On Thu, Dec 10, 2020 at 12:58 PM Christopher J Cawley
wrote:
> Hi Avery
my compute node and running
things with mpirun and a hostfile defined, but having a scheduler is a good
learning experience and makes usability a lot nicer!
Again, I appreciate the help; hopefully I can wrestle this into working.
~Avery Grieve
They/Them/Theirs please!
University of Michigan
tes/etc/slurm/slurm.conf.default#L87>)
> and it works.
>
>
>
> $ srun --mpi=list
>
> srun: MPI types are...
>
> srun: cray_shasta
>
> srun: pmi2
>
> srun: pmix_v3
>
> srun: pmix
>
> srun: none
>
>
>
> Hope that helps,
>
> Luke
Oops, sorry, I meant to also include the following:
# srun --mpi=list
srun: MPI types are...
srun: none
srun: pmi2
srun: openmpi
Running srun with --mpi=openmpi gives the same errors as with
MpiDefault=none.
~Avery Grieve
They/Them/Theirs please!
University of Michigan
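Given that list, the pmi2 plugin is present even though openmpi fails, so it may be worth pointing Slurm at it instead. A sketch (a fragment, not a full config; whether your OpenMPI build actually speaks PMI2 is an assumption to verify):

```ini
# /etc/slurm/slurm.conf (fragment)
# pmi2 appears in `srun --mpi=list`, so try it instead of openmpi.
MpiDefault=pmi2
```

Equivalently, test per-job with `srun --mpi=pmi2 ...` before touching the config. This only helps if OpenMPI was built with PMI2 support; `ompi_info | grep -i pmi` is a quick check, assuming ompi_info is on your PATH.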
On Thu, Dec 10, 2020 at
the "find" command.
It's looking like I should be building Slurm from source again, I guess.
Thanks,
~Avery Grieve
They/Them/Theirs please!
University of Michigan
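Before rebuilding, it may be worth checking which MPI plugin objects the packaged build actually shipped: `srun --mpi=list` reflects exactly which `mpi_*.so` files sit in the plugin directory. The real directory varies by distro and architecture (the slurm-wlm package often uses something like /usr/lib/<arch>/slurm-wlm — an assumption); the mock directory below just makes the sketch self-contained:

```shell
# Mock plugin directory standing in for the real one (path is an assumption;
# locate yours with: find /usr/lib -name 'mpi_*.so' 2>/dev/null).
mkdir -p /tmp/slurm-plugins
touch /tmp/slurm-plugins/mpi_pmi2.so /tmp/slurm-plugins/mpi_none.so
# Each mpi_*.so here corresponds to one entry in `srun --mpi=list`.
ls /tmp/slurm-plugins | grep '^mpi_'
```

If no pmix plugin is present, the package was built without PMIx, and rebuilding from source with `--with-pmix` is indeed the usual route.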
On Thu, Dec 10, 2020 at 11:16 AM Christopher J Cawley
wrote:
> I have a 7 node jetson nano cluster run
my
devices, is there a way to get slurm and openmpi to behave together using
the precompiled package slurm-wlm?
Thank you,
~Avery Grieve
They/Them/Theirs please!
University of Michigan