retitle 1122398 cp2k: FTBFS: Error: "grid" at (1) has the CONTIGUOUS attribute but is not an array pointer or an assumed-shape or assumed-rank array
severity 1122398 important
thanks

Hi. This is strange.

This is my build history on machines with 1 CPU:

Status: successful  cp2k_2025.1-1.1_amd64-20250802T100712.059Z
Status: successful  cp2k_2025.1-1.1_amd64-20251004T020236.321Z
Status: failed      cp2k_2025.1-1.1_amd64-20251217T233307.849Z
Status: successful  cp2k_2025.1-1.1_amd64-20251227T225044.900Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172316.918Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172317.445Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172317.297Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172317.157Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172317.001Z

and this is my build history on machines with 2 CPUs:

Status: successful  cp2k_2025.1-1.1_amd64-20250921T081424.298Z
Status: successful  cp2k_2025.1-1.1_amd64-20251104T235523.157Z
Status: failed      cp2k_2025.1-1.1_amd64-20251209T152710.671Z
Status: failed      cp2k_2025.1-1.1_amd64-20251210T172403.879Z
Status: failed      cp2k_2025.1-1.1_amd64-20251210T172407.591Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172602.429Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172603.745Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172603.605Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172603.266Z
Status: successful  cp2k_2025.1-1.1_amd64-20251228T172603.042Z

So it would seem that the issue resolved itself via some build-dependency
that got fixed. In cases like this I try to use debbisect to determine
which one, but this is what I got:

good timestamp 20251230T000000Z was remapped to snapshot.d.o timestamp 20251229T082432Z
bad timestamp 20251210T000000Z was remapped to snapshot.d.o timestamp 20251209T203039Z
snapshot timestamp difference: 19.495752314814816 days
approximately 8 steps left to test
#1: trying known good 20251229T082432Z...
good timestamp was actually bad -- see debbisect.log.bad for details

--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  mca_base_var_init failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------

I'm downgrading this to important for the time being. Please don't close
it yet; I'd like to understand what is going on.

Thanks.
