It seems you'll have to set up csync2 manually.

>>> "Reynolds, John F - San Mateo, CA - Contractor" <[email protected]> wrote
>>> on 25.11.2019 at 23:23 in message <[email protected]>:
> Hello.
>
> I am trying to set up a two-node cluster of SLES12SP4 servers. The two nodes
> are named 'eagnmnmeqfc0', IP 56.76.161.34, and 'eagnmnmeqfc1', IP
> 56.76.161.35.
>
> The ha-cluster-init on fc0 went fine. It is set up for unicast, as multicast
> is blocked on our networks.
>
> The cluster-join on fc1 failed. It looks OK, but at the end there is a TLS
> handshake error. The log is:
>
> eagnmnmeqfc1:/var/log # cat ha-cluster-bootstrap.log
> + systemctl reload rsyslog.service
> ================================================================
> 2019-11-25 15:28:52-06:00 /usr/sbin/crm cluster join -c 56.76.161.34
> --interface=bond0 -y
> ----------------------------------------------------------------
> + systemctl enable sshd.service
> + mkdir -m 700 -p /root/.ssh
> # Retrieving SSH keys - This may prompt for [email protected]:
> + scp -oStrictHostKeyChecking=no [email protected]:'/root/.ssh/id_*'
> /tmp/crmsh_IlBXAY/
> [login header]
> + mv /tmp/crmsh_IlBXAY/id_rsa* /root/.ssh/
> + cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
> # One new SSH key installed
> + ssh [email protected] ha-cluster-init ssh_remote
> Done (log saved to /var/log/ha-cluster-bootstrap.log)
> [login header]
> # Configuring csync2
> + rm -f /var/lib/csync2/eagnmnmeqfc1.db3
> + ssh [email protected] ha-cluster-init csync2_remote eagnmnmeqfc1
> Done (log saved to /var/log/ha-cluster-bootstrap.log)
> [login header]
> + scp [email protected]:'/etc/csync2/{csync2.cfg,key_hagroup}' /etc/csync2
> [login header]
> + systemctl enable csync2.socket
> + ssh [email protected] "csync2 -mr / ; csync2 -fr / ; csync2 -xv"
> [login header]
> Marking file as dirty: /etc/corosync/authkey
> Connecting to host eagnmnmeqfc1 (SSL) ...
> Connect to 56.76.161.35:30865 (eagnmnmeqfc1).
> SSL: failed to use key file /etc/csync2/csync2_ssl_key.pem and/or
> certificate file /etc/csync2/csync2_ssl_cert.pem: Error while reading file.
> (GNUTLS_E_FILE_ERROR)
> WARNING: csync2 run failed - some files may not be sync'd
> # Merging known_hosts
> parallax.call ['eagnmnmeqfc0', 'eagnmnmeqfc1'] : [ -e /root/.ssh/known_hosts
> ] && cat /root/.ssh/known_hosts || true
> parallax.copy ['eagnmnmeqfc0', 'eagnmnmeqfc1'] : 56.76.161.35
> ecdsa-sha2-nistp256
> AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1NplEqVWzby0/wwQED0s8wP
> rNhk0zzkZz4NIWOlU/Z4td75heNmPgpEhh5z6i9Jdc3hWnuhPbiP9Wso5qsJMs=
> eagnmnmeqfc0,56.76.161.34 ecdsa-sha2-nistp256
> AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1NplEqVWzby0/wwQED0s8wP
> rNhk0zzkZz4NIWOlU/Z4td75heNmPgpEhh5z6i9Jdc3hWnuhPbiP9Wso5qsJMs=
> eagnmnmeqfc1 ecdsa-sha2-nistp256
> AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1NplEqVWzby0/wwQED0s8wP
> rNhk0zzkZz4NIWOlU/Z4td75heNmPgpEhh5z6i9Jdc3hWnuhPbiP9Wso5qsJMs=
> eagnmnmeqfc1,56.76.161.35 ecdsa-sha2-nistp256
> AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1NplEqVWzby0/wwQED0s8wP
> rNhk0zzkZz4NIWOlU/Z4td75heNmPgpEhh5z6i9Jdc3hWnuhPbiP9Wso5qsJMs=
> eagnmnmeqfca,56.76.161.44 ecdsa-sha2-nistp256
> AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1NplEqVWzby0/wwQED0s8wP
> rNhk0zzkZz4NIWOlU/Z4td75heNmPgpEhh5z6i9Jdc3hWnuhPbiP9Wso5qsJMs=
> eagnmnmeqfcb,56.76.161.45 ecdsa-sha2-nistp256
> AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1NplEqVWzby0/wwQED0s8wP
> rNhk0zzkZz4NIWOlU/Z4td75heNmPgpEhh5z6i9Jdc3hWnuhPbiP9Wso5qsJMs=
> # Probing for new partitions...
> + partprobe /dev/sde /dev/sdf /dev/sdb /dev/sdc /dev/sda /dev/sdd /dev/sdg
> /dev/sdm /dev/sdn /dev/sdq /dev/sdr /dev/sdh /dev/sdk /dev/sdi /dev/sdl
> /dev/sdp /dev/sds /dev/sdj /dev/sdu /dev/sdt /dev/sdv /dev/sdo /dev/sdx
> /dev/sdw /dev/mapper/360000970000197200928533030333644
> /dev/mapper/360000970000197200928533030324134
> /dev/mapper/360000970000197200928533030324135
> /dev/mapper/360000970000197200928533030333645
> /dev/mapper/360000970000197200498533031374344
> /dev/mapper/360000970000197200498533030324637
> /dev/mapper/360000970000197200498533030324639
> /dev/mapper/360000970000197200498533030324638 /dev/sdy /dev/sdz /dev/sdaa
> /dev/sdab /dev/sdac /dev/sdad /dev/sdae /dev/sdaf
> /dev/mapper/vg_qncoa_noncloned--a00-lv_a00shared
> /dev/mapper/vg_rootdisk-lv_export /dev/mapper/vg_rootdisk-lv_patrol
> /dev/mapper/vg_rootdisk-lv_root /dev/mapper/vg_rootdisk-lv_swap
> /dev/mapper/vg_rootdisk-lv_var /dev/mapper/vg_rootdisk-lv_var_log
> # done
> + mkdir -p /ncoa/qncoa/a00shared
> + mkdir -p /mqm/qncoa/u00
> + mkdir -p /ncoa/qncoa/a01shared
> + mkdir -p /ncoa/qncoa/a02shared
> + mkdir -p /ncoa/qncoa/a03shared
> + mkdir -p /ncoa/qncoa/a04shared
> + mkdir -p /ncoa/qncoa/a05shared
> + ssh [email protected] systemctl is-enabled sbd.service
> disabled
> [login header]
> + rm -f /var/lib/heartbeat/crm/* /var/lib/pacemaker/cib/*
> + systemctl enable hawk.service
> + systemctl start hawk.service
> # Hawk cluster interface is now running. To see cluster status, open:
> #   https://56.76.161.35:7630/
> # Log in with username 'hacluster'
> + systemctl disable sbd.service
> + systemctl enable pacemaker.service
> + systemctl start pacemaker.service
> # Waiting for cluster...
> # done
> + csync2 -rm /etc/corosync/corosync.conf
> + csync2 -rf /etc/corosync/corosync.conf
> + csync2 -rxv /etc/corosync/corosync.conf
> Marking file as dirty: /etc/corosync/corosync.conf
> Connecting to host eagnmnmeqfc0 (SSL) ...
> Connect to 56.76.161.34:30865 (eagnmnmeqfc0).
> SSL: handshake failed: The TLS connection was non-properly terminated.
> (GNUTLS_E_PREMATURE_TERMINATION)
> + corosync-cfgtool -R
> Reloading corosync.conf...
> Done
> + crm cluster run 'crm corosync reload'
> ERROR: [eagnmnmeqfc1]: Exited with error code 1, Error output:
> [login header]
> ERROR: corosync: [Errno 2] No such file or directory: '/proc/30517/cmdline'
> # Done (log saved to /var/log/ha-cluster-bootstrap.log)
> eagnmnmeqfc1:/var/log #
>
> I've done some googling, but haven't found anything that seems to apply.
>
> Advice, please?
>
> Thank you.
>
> John Reynolds
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
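P.S.: The GNUTLS_E_FILE_ERROR in the log above means csync2 could not read /etc/csync2/csync2_ssl_key.pem and/or /etc/csync2/csync2_ssl_cert.pem on the node it connected to. For the manual setup, recreating that pair as a self-signed key/certificate on each node is usually enough. A minimal sketch (the CSYNC2_DIR variable, key size, and certificate lifetime here are illustrative choices, not something the bootstrap scripts mandate):

```shell
# Recreate the csync2 SSL key/cert pair by hand (run on each node).
# CSYNC2_DIR is parameterized only for illustration; csync2 expects the
# files in /etc/csync2, as the log above shows.
CSYNC2_DIR="${CSYNC2_DIR:-/etc/csync2}"
mkdir -p "$CSYNC2_DIR"

# Generate a private key and a matching self-signed certificate in one step;
# -nodes leaves the key unencrypted so csync2 can read it unattended, and
# -subj avoids the interactive prompts.
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
    -subj "/CN=$(hostname)" \
    -keyout "$CSYNC2_DIR/csync2_ssl_key.pem" \
    -out "$CSYNC2_DIR/csync2_ssl_cert.pem"

# Keep the private key readable by root only.
chmod 600 "$CSYNC2_DIR/csync2_ssl_key.pem"
```

Once both nodes have their key/cert pair, re-running `csync2 -xv` should show whether the SSL errors are gone and the marked-dirty files actually sync.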
