Hi Kevin,
We fixed the issue on GitHub. Thanks!
Best,
Chris
—
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167
On 6/17/19, 8:56 AM, "slurm-users on behalf of Christopher Benjamin Coffey"
wrote:
Thanks Kevin, we'll put a fix in for that.
Also look for the presence of the Slurm MPI plugins: mpi_none.so,
mpi_openmpi.so, mpi_pmi2.so, mpi_pmix.so, and mpi_pmix_v3.so. They are
typically installed to /usr/lib64/slurm/. Those plugins provide the
various MPI capabilities and are good "markers" for how your configure detected
an
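A quick way to check is something like the following (a sketch; /usr/lib64/slurm is a common plugin directory, but it varies with your install prefix):

```shell
# List the MPI plugins that Slurm's configure actually built.
ls -l /usr/lib64/slurm/mpi_*.so

# Ask Slurm itself which MPI plugin types are available.
srun --mpi=list
```

If `srun --mpi=list` does not show the plugin you expect (e.g. pmix), the build likely did not detect the corresponding library at configure time.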
Hi Palle,
You should probably get the latest stable Slurm version from
www.schedmd.com and use the build/install instructions found there. Note
that you should check for WARNING messages in the config.log produced by
Slurm's configure, as they're the best place to discover missing
packages tha
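One way to spot those warnings after running configure (a sketch; assumes you are in the Slurm source tree where configure wrote its log):

```shell
# Scan configure's log for warnings about missing headers or
# libraries (e.g. munge, pmix, hwloc).
grep -i 'warning' config.log
```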
We don't do anything. In our environment it is the user's
responsibility to optimize their code appropriately. Since we have a
great variety of hardware, any modules we build (we have several thousand
of them) are all built generically. If people want processor-specific
optimizations then the
...ah, got it. I was confused by "PI/Lab nodes" in your partition list.
Our QoS/account pair for each investigator condo is our approximate
equivalent of what you're doing with owned partitions.
Since we have everything in one partition, we segregate processor types via
topology.conf. We break up
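A hypothetical topology.conf sketch of that kind of segregation (switch and node names here are made up for illustration; the real file would match your node naming):

```
# topology.conf - group nodes by processor type under separate
# "switches" so the scheduler keeps a job within one hardware type.
SwitchName=broadwell Nodes=node[001-064]
SwitchName=skylake   Nodes=node[065-128]
SwitchName=top       Switches=broadwell,skylake
```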
I don't know off hand. You can sort of construct a similar system in
Slurm, but I've never seen it as a native option.
-Paul Edmon-
On 6/20/19 10:32 AM, John Hearns wrote:
Paul, you refer to banking resources. Which leads me to ask are
schemes such as Gold used these days in Slurm?
Gold was a
Paul, you refer to banking resources. Which leads me to ask are schemes
such as Gold used these days in Slurm?
Gold was a utility where groups could top up with a virtual amount of money
which would be spent as they consumed resources.
Altair also wrote a similar system for PBS, which they offered t
People will specify which partition they need, or if they want multiple
they use this:
#SBATCH -p general,shared,serial_requeue
The scheduler will then just select whichever of those partitions the job
can run in first. Naturally there is a risk that you will end up running
in a more expensive partition.
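A minimal job script sketch using that multi-partition form (the partition names come from the example above; the resource requests and command are hypothetical placeholders):

```shell
#!/bin/bash
#SBATCH -p general,shared,serial_requeue  # scheduler starts the job in the first partition that can take it
#SBATCH -n 1                              # one task
#SBATCH -t 00:10:00                       # ten-minute time limit

srun hostname
```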
Palle, you will get a more up-to-date version of Slurm by using the GitHub
repository
https://github.com/SchedMD/slurm
You do not necessarily have to use the Linux distribution's versions of the
packages, which are often out of date.
However, please tell us a bit more about your environment.
Specificall
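A from-source build from that repository goes roughly like this (a sketch; the install prefix is an example, and you should build from a release tag rather than the default branch):

```shell
# Clone the repository and pick a release tag to build.
git clone https://github.com/SchedMD/slurm.git
cd slurm
git tag --list            # choose a release tag from this list
# git checkout <tag>

# Configure, build, and install; watch configure's output and
# config.log for WARNINGs about missing dependencies.
./configure --prefix=/opt/slurm   # /opt/slurm is an example prefix
make -j "$(nproc)"
sudo make install
```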
Dear all,
I have been following this mailing list for some time, and as a complete
newbie using Slurm I have learned some lessons from you.
I have an issue with building and configuring Slurm to use OpenMPI.
When running srun for some task I get an error stating that Slurm has
not been built
On 20/6/19 3:24 am, Brian Andrus wrote:
Can you give the exact command/output you have from this?
I suspect a typo in your slurm.conf for nodenames or what you are typing.
Brian Andrus
Hi Brian,
I am pretty sure there is no error in my typing of the commands, but
just in case, find below t
Janne, thank you. That FGCI benchmark in a container is pretty smart.
I always say that real application benchmarks beat synthetic benchmarks.
Taking a small mix of applications like that and using the geometric mean
of the results is great.
Note: *"a reference result run on a Dell PowerEdge C4130"*
In the old da