Not the answer you hoped for there, I guess...
On 15.02.19 07:15, Marcus Wagner wrote:
> I have filed a bug:
>
> https://bugs.schedmd.com/show_bug.cgi?id=6522
>
>
> Let's see what SchedMD has to tell us ;)
>
>
> Best
> Marcus
>
> On 2/15/19 6:25 AM, Marcus Wagner wrote:
>> NumNodes=1 NumCPUs=48 NumT
Hi Marcus,
for us slurmd -C as well as numactl -H looked fine, too. But we're using
task/cgroup only, and every job starting on a Skylake node gave us

    error("task/cgroup: task[%u] infinite loop broken while trying "
          "to provision compute elements using %s (bitmap:%s)",

from src/plugins/task/cg
On 24 October 2018 7:16:58 PM AEDT Andreas Henkel wrote:
>
>> PS: sorry, I forgot to mention the Slurm version: it's 17.11.7
> It's always worth checking the NEWS file in git for changes made after
> the release you're on, in case the problem has since been fixed.
>
> https://github.com/
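A quick way to run that check from a SchedMD/slurm git checkout (the
release tag below is an assumption based on SchedMD's usual tagging
scheme, so adjust it to your release):

    git log --oneline slurm-17-11-7-1..HEAD -- NEWS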
PS: sorry, I forgot to mention the Slurm version: it's 17.11.7
On 10/24/18 9:43 AM, Andreas Henkel wrote:
>
> Hi all,
>
> Did anyone build Slurm using a recent version of HWLOC like 2.0.1 or
> 2.0.2?
>
> When I try to, I end up with
>
>   task_cgroup_cpuset.c:486:40:
>     hwloc_bitmap_intersects(obj->cpuset,
>                             hwloc_topology_get_allowed_cpuset(topology))
> Replace cpusets with nodesets for NUMA nodes. To find out which ones,
> replace intersects() with and() to get the actual intersection. That's
> the advice at
> https://www.open-mpi.org/projects/hwloc/doc/v2.0.1/a00327.php
> Yet the Slurm source code already contains some preprocessor switches
> for HWLOC 2.
> Any hints welcome.
> Best,
> Andreas Henkel
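For reference, here is a minimal, self-contained sketch of what the
quoted upgrade advice and those preprocessor switches amount to. It only
illustrates the hwloc 1.x vs 2.0 API difference under the
HWLOC_API_VERSION guard; it is not Slurm's actual code:

    #include <hwloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

    #if HWLOC_API_VERSION >= 0x00020000
        /* hwloc 2.x: NUMA nodes are matched via nodesets; the actual
         * intersection is computed with hwloc_bitmap_and(). */
        hwloc_obj_t numa =
            hwloc_get_obj_by_type(topo, HWLOC_OBJ_NUMANODE, 0);
        hwloc_const_bitmap_t allowed =
            hwloc_topology_get_allowed_nodeset(topo);

        if (numa && hwloc_bitmap_intersects(numa->nodeset, allowed)) {
            hwloc_bitmap_t inter = hwloc_bitmap_alloc();
            hwloc_bitmap_and(inter, numa->nodeset, allowed);
            char *str;
            hwloc_bitmap_asprintf(&str, inter);
            printf("allowed part of NUMA node 0: %s\n", str);
            free(str);
            hwloc_bitmap_free(inter);
        }
    #else
        /* hwloc 1.x: the old cpuset-based boolean test, as in the
         * failing call in task_cgroup_cpuset.c. */
        hwloc_obj_t numa =
            hwloc_get_obj_by_type(topo, HWLOC_OBJ_NODE, 0);
        if (numa && hwloc_bitmap_intersects(numa->cpuset,
                        hwloc_topology_get_allowed_cpuset(topo)))
            printf("NUMA node 0 is (partly) allowed\n");
    #endif

        hwloc_topology_destroy(topo);
        return 0;
    }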