I’m learning things along the way, thank you :) Here is the output of "yum
history info 7" (yeah, I did recreate my VMs so that’s why it’s a different
ID):
Transaction ID : 7
Begin time : Thu 10 Aug 2023 02:36:15 PM UTC
Begin rpmdb : 430:cf802f1003554ad70c79c2569d6fae2fb17ca591
End time : Thu 10 Aug 2023 02:46:12 PM UTC (9 minutes)
End rpmdb : 439:e567b314bae50c45cb6d63fbcaea7a4033f5cf42
User : node-user <node-user>
Return-Code : Success
Releasever : 8
Command Line :
Comment :
Packages Altered:
Install python3-netifaces-0.10.6-4.el8.x86_64 @appstream
Install NetworkManager-initscripts-updown-1:1.40.16-4.el8_8.noarch @baseos
Install glibc-gconv-extra-2.28-225.el8.x86_64 @baseos
Install grub2-tools-efi-1:2.02-148.el8_8.1.rocky.0.3.x86_64 @baseos
Install kernel-4.18.0-477.21.1.el8_8.x86_64 @baseos
Install kernel-core-4.18.0-477.21.1.el8_8.x86_64 @baseos
Install kernel-modules-4.18.0-477.21.1.el8_8.x86_64 @baseos
Install python3-magic-5.33-24.el8.noarch @baseos
Install python3-setuptools-39.2.0-7.el8.noarch @baseos
Upgrade PackageKit-1.1.12-6.el8.0.2.x86_64 @appstream
Upgraded PackageKit-1.1.12-6.el8.x86_64 @@System
Upgrade PackageKit-glib-1.1.12-6.el8.0.2.x86_64 @appstream
Upgraded PackageKit-glib-1.1.12-6.el8.x86_64 @@System
Upgrade authselect-compat-1.2.6-1.el8.x86_64 @appstream
Upgraded authselect-compat-1.2.2-3.el8.x86_64 @@System
Upgrade cairo-1.15.12-6.el8.x86_64 @appstream
Upgraded cairo-1.15.12-3.el8.x86_64 @@System
Upgrade cairo-gobject-1.15.12-6.el8.x86_64 @appstream
Upgraded cairo-gobject-1.15.12-3.el8.x86_64 @@System
(sparing you the 500 lines here)
Upgrade yum-4.7.0-16.el8_8.noarch @baseos
Upgraded yum-4.7.0-4.el8.noarch @@System
Upgrade yum-utils-4.0.21-19.el8_8.noarch @baseos
Upgraded yum-utils-4.0.21-4.el8_5.noarch @@System
Upgrade zlib-1.2.11-21.el8_7.x86_64 @baseos
Upgraded zlib-1.2.11-17.el8.x86_64 @@System
Scriptlet output:
1 warning: /etc/shadow created as /etc/shadow.rpmnew
2 warning: /etc/systemd/logind.conf created as /etc/systemd/logind.conf.rpmnew
3 warning: /etc/cloud/cloud.cfg created as /etc/cloud/cloud.cfg.rpmnew
4 libsemanage.semanage_direct_install_info: Overriding cockpit module at
lower priority 100 with module at priority 200.
5 warning: /etc/ssh/sshd_config created as /etc/ssh/sshd_config.rpmnew
Other than the four warnings, do you think the output of scriptlet #4
(libsemanage.semanage_direct_install_info) could be the culprit here?
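As for the warnings themselves, I plan to simply diff the .rpmnew files against
the live configs before deciding what to keep, something along these lines
(paths taken from the warnings above):
$ sudo find /etc -name '*.rpmnew'
$ sudo diff -u /etc/ssh/sshd_config /etc/ssh/sshd_config.rpmnew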
Also, I didn’t mention it earlier, but SELinux is disabled:
$ sestatus
SELinux status: disabled
Regarding the rm -f -r
/home/node-user/.ansible/tmp/ansible-tmp-1691672333.3734102-8205-207838870885533/
> /dev/null 2>&1 && sleep 0, I did laugh when I saw your message, but I
didn’t ask for that specifically. I think this is just ansible cleaning its
temporary directory before finishing its run.
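If we want to rule out the cleanup happening too early, I suppose I could also
re-run the same command with the remote files kept around (assuming
ANSIBLE_KEEP_REMOTE_FILES still works the way I think it does):
$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible all -u node-user -b --become-user=root \
    -i exec/inventory -m yum -a 'name=* state=latest' -vvvv --limit=worker1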
On Thursday, August 10, 2023 at 16:26:11 UTC+2, Evan Hisey wrote:
> So you are getting errors in the history for transaction 8; you might want to
> run "yum history info 8" and see what they are. If the task is actually
> completing despite the role failing, then based on that message it looks like
> the issue is happening when ansible is reconnecting.
>
> This block here looks almost like you are shooting ansible in the head: 'rm
> -f -r
> /home/node-user/.ansible/tmp/ansible-tmp-1691672333.3734102-8205-207838870885533/
>
> > /dev/null 2>&1 && sleep 0' as it is deleting the location ansible is
> looking for out from under itself.
>
> <W.X.Y.Z> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10 -q
> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/37d3fe42d9"'
> W.X.Y.Z '/bin/sh -c '"'"'rm -f -r
> /home/node-user/.ansible/tmp/ansible-tmp-1691672333.3734102-8205-207838870885533/
>
> > /dev/null 2>&1 && sleep 0'"'"''
> <W.X.Y.Z> (0, b'', b'')
>
> worker1 | UNREACHABLE! => {
> "changed": false,
> "msg": "Failed to connect to the host via ssh: ",
> "unreachable": true
>
> On Thu, Aug 10, 2023 at 9:00 AM Nicolas Goudry <[email protected]> wrote:
>
>> @Evan
>> Here is the output of "yum history" after running the playbook that uses
>> the yum update task as a role:
>> ID | Command line | Date and time    | Action(s) | Altered
>> -----------------------------------------------------------
>>  8 |              | 2023-08-09 22:48 | I, U      |  263 EE
>>  7 |              | 2023-08-09 20:38 | Install   |   29
>>  6 |              | 2023-08-09 20:35 | I, U      |    8
>> I purposefully omitted the first 5 transactions as they are from 2022 (and
>> earlier), from when the VM image was created by the cloud provider I’m
>> relying on.
>>
>> Regarding the debug idea, I actually do have one after the "YUM | Get
>> available package updates" task, which correctly reports the list of
>> packages to update (that list is later given to yum: name=... state=latest).
>> Also, I can tell that the update did succeed because running "yum update"
>> after the role failed gives me "Nothing to do". This is stated in the PR
>> comments that I linked in my second message, but here is the role error:
>> fatal: [master2]: UNREACHABLE! => {"changed": false, "msg": "Failed to
>> connect to the host via ssh: ", "unreachable": true}
>>
>> Sorry for the cross posting BTW, I know it complicates things…
>>
>> @Will
>> Indeed, I totally forgot to point this out… My apologies!
>>
>> When I run "yum update" on a host, I’m actually running it with the
>> following command:
>> ssh -i node-identity -l node-user -o ConnectTimeout=30 -o
>> ProxyCommand="ssh -i bastion-identity -l bastion-user -W %h:%p -p22
>> W.X.Y.Z" worker1 sudo yum update -y
>> (W.X.Y.Z is the IP of the bastion, and the bastion hosts file maps the
>> worker1 IP to the worker1 hostname)
>>
>> In my inventory file, I also set the following under [all:vars]:
>> ansible_ssh_common_args='-q -i node-identity -o ProxyCommand="ssh -q -i
>> bastion-identity -W %h:%p -p22 [email protected]"'
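>>
>> If the bastion connection is dropping during the long update, I might also
>> try adding SSH keep-alives to those common args (an untested sketch;
>> everything else stays the same):
>> ansible_ssh_common_args='-q -i node-identity -o ServerAliveInterval=30 -o
>> ServerAliveCountMax=10 -o ProxyCommand="ssh -q -i bastion-identity -W %h:%p
>> -p22 [email protected]"'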
>>
>> No proxies are involved here; I only use IP-to-IP communication.
>>
>> It’s getting pretty late here (UTC+2). I’m running the "fantasist" playbook
>> that I know works and will check for differences with the failing one
>> tomorrow. I’ll also post the results here.
>>
>> Thanks !
>> On Thursday, August 10, 2023 at 00:44:01 UTC+2, Evan Hisey wrote:
>>
>>> Good catch on the jumphost, Will. If that is timing out mid patch cycle
>>> due to the duration of the yum upgrade job, you would get this behavior.
>>>
>>> On Wed, Aug 9, 2023, 5:40 PM Will McDonald <[email protected]> wrote:
>>>
>>>> Looking at your verbose output, it looks like your ansible runs are
>>>> tunneled through a bastion/jumphost/proxy?
>>>>
>>>> When you run your "yum update" directly on a host, are you doing:
>>>>
>>>> [user@control-node ~]$ ssh user@target
>>>> [user@target-node ~]$ sudo yum -y update
>>>>
>>>> Or are you doing:
>>>>
>>>> [user@control-node ~]$ ssh user@target-node sudo yum -y update
>>>>
>>>> I'm just wondering if there's something unusual in the bastion
>>>> connection handling, or the shell environment of a full interactive shell
>>>> with a TTY vs. an ansible run?
>>>>
>>>> Similarly, you have your -vvv output of a *failing* run. If you do
>>>> -vvv for a *working* run, does that cast any light, indicate any
>>>> differences in behaviour in connection, privilege escalation or command
>>>> invocation?
>>>>
>>>> Do you have any proxies defined that may be being picked up from the
>>>> environment in an interactive session which aren't in an ansible run?
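>>>>
>>>> (A rough way to check: compare "env | grep -i proxy" in an interactive
>>>> login on the target with what a non-interactive command sees, e.g.
>>>> ssh user@target-node 'env | grep -i proxy'
>>>> — the user/host there are just placeholders.)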
>>>>
>>>>
>>>> On Wed, 9 Aug 2023 at 23:15, Nicolas Goudry <[email protected]> wrote:
>>>>
>>>>> Thanks for stepping in to help.
>>>>>
>>>>> I did run sudo yum update -y directly on one of my hosts, and
>>>>> everything went well.
>>>>>
>>>>> Also, I created the following playbook and surprisingly it works:
>>>>>
>>>>> - hosts: all
>>>>>   gather_facts: no
>>>>>   tasks:
>>>>>     - name: YUM | Get available package updates
>>>>>       yum:
>>>>>         list: updates
>>>>>       register: yum_available_package_updates
>>>>>     - name: YUM | Update packages
>>>>>       yum:
>>>>>         name: "{{ yum_available_package_updates.results | map(attribute='name') | list }}"
>>>>>         state: 'latest'
>>>>>       register: yum_upgrade
>>>>>     - name: YUM | Reboot after packages updates
>>>>>       when:
>>>>>         - yum_upgrade.changed
>>>>>       reboot:
>>>>>
>>>>> However, if I use it as an ansible role, like so:
>>>>>
>>>>> ---
>>>>> - name: YUM | Get available package updates
>>>>>   yum:
>>>>>     list: updates
>>>>>   register: yum_available_package_updates
>>>>> - name: YUM | Update packages
>>>>>   yum:
>>>>>     name: "{{ yum_available_package_updates.results | map(attribute='name') | list }}"
>>>>>     state: 'latest'
>>>>>   register: yum_upgrade
>>>>> - name: YUM | Reboot after packages updates
>>>>>   when:
>>>>>     - yum_upgrade.changed or system_upgrade_reboot == 'always'
>>>>>     - system_upgrade_reboot != 'never'
>>>>>   reboot:
>>>>>
>>>>> It doesn’t work (well, the system does get updated but the yum module
>>>>> hangs and the role ends up in error).
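>>>>>
>>>>> If the problem turns out to be the SSH session being held open for the
>>>>> whole update, I may also try running that task asynchronously, something
>>>>> like this (an untested sketch; the timeouts are guesses):
>>>>>
>>>>> - name: YUM | Update packages
>>>>>   yum:
>>>>>     name: "{{ yum_available_package_updates.results | map(attribute='name') | list }}"
>>>>>     state: 'latest'
>>>>>   async: 3600   # give the update up to an hour in the background
>>>>>   poll: 60      # have ansible reconnect every minute to check on it
>>>>>   register: yum_upgrade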
>>>>>
>>>>> For the sake of completeness, this started as an issue with a new role
>>>>> added to Kubespray
>>>>> <https://github.com/kubernetes-sigs/kubespray/pull/10184>. There are
>>>>> other details in the latest pull request comments that could help get
>>>>> the full picture. But in the end, even with a “raw” ansible command, the
>>>>> issue persists, so I don’t think this is specifically related to Kubespray.
>>>>>
>>>>> On Wednesday, August 9, 2023 at 22:23:36 UTC+2, Evan Hisey wrote:
>>>>>
>>>>> Check the host and see what happens on a full manual update. I have
>>>>> had issues with ansible when the yum command was hanging on a host due to
>>>>> a local issue with updating. Single packages were fine, but a full host
>>>>> update failed. I had to resolve the full update issue on the host.
>>>>>
>>>>> On Wed, Aug 9, 2023 at 3:14 PM Nicolas Goudry <[email protected]>
>>>>> wrote:
>>>>>
>>>>> I’m trying to perform a full system update with the `yum` module but
>>>>> ansible just hangs for a little bit more than an hour before failing.
>>>>>
>>>>> Here is the command I’m using:
>>>>>
>>>>> ansible all -u node-user -b --become-user=root -i exec/inventory -m
>>>>> yum -a 'name=* state=latest' -vvvv --limit=worker1
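>>>>>
>>>>> I haven’t tried it yet, but I suppose I could also push the long update
>>>>> into background mode with the ad-hoc -B/-P options, so the SSH session
>>>>> isn’t held open the whole time, e.g.:
>>>>>
>>>>> ansible all -u node-user -b --become-user=root -i exec/inventory -m
>>>>> yum -a 'name=* state=latest' -B 3600 -P 60 -vvvv --limit=worker1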
>>>>>
>>>>> Here is the output (redacted):
>>>>>
>>>>> ansible [core 2.12.5]
>>>>> config file = /home/nicolas/test-upgrade-os/ansible.cfg
>>>>> configured module search path =
>>>>> ['/home/nicolas/.ansible/plugins/modules',
>>>>> '/usr/share/ansible/plugins/modules']
>>>>> ansible python module location =
>>>>> /home/nicolas/test-upgrade-os/config/venv/lib64/python3.8/site-packages/ansible
>>>>> ansible collection location =
>>>>> /home/nicolas/.ansible/collections:/usr/share/ansible/collections
>>>>> executable location = ./config/venv/bin/ansible
>>>>> python version = 3.8.16 (default, Jun 25 2023, 05:53:51) [GCC 8.5.0
>>>>> 20210514 (Red Hat 8.5.0-18)]
>>>>> jinja version = 3.1.2
>>>>> libyaml = True
>>>>> Using /home/nicolas/test-upgrade-os/ansible.cfg as config file
>>>>> setting up inventory plugins
>>>>> host_list declined parsing
>>>>> /home/nicolas/test-upgrade-os/exec/inventory as it did not pass its
>>>>> verify_file() method
>>>>> script declined parsing /home/nicolas/test-upgrade-os/exec/inventory
>>>>> as it did not pass its verify_file() method
>>>>> auto declined parsing /home/nicolas/test-upgrade-os/exec/inventory as
>>>>> it did not pass its verify_file() method
>>>>> Parsed /home/nicolas/test-upgrade-os/exec/inventory inventory source
>>>>> with ini plugin
>>>>> Loading callback plugin minimal of type stdout, v2.0 from
>>>>> /home/nicolas/test-upgrade-os/config/venv/lib64/python3.8/site-packages/ansible/plugins/callback/minimal.py
>>>>> Skipping callback 'default', as we already have a stdout callback.
>>>>> Skipping callback 'minimal', as we already have a stdout callback.
>>>>> Skipping callback 'oneline', as we already have a stdout callback.
>>>>> META: ran handlers
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> 10.10.0.101 '/bin/sh -c '"'"'echo ~node-user && sleep 0'"'"''
>>>>> <10.10.0.101> (0, b'/home/node-user\n', b'')
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> 10.10.0.101 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo
>>>>> /home/node-user/.ansible/tmp `"&& mkdir "` echo
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576
>>>>>
>>>>> `" && echo ansible-tmp-1691583637.8116903-3768362-148267575047576="` echo
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576
>>>>>
>>>>> `" ) && sleep 0'"'"''
>>>>> <10.10.0.101> (0,
>>>>> b'ansible-tmp-1691583637.8116903-3768362-148267575047576=/home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576\n',
>>>>>
>>>>> b'')
>>>>> <worker1> Attempting python interpreter discovery
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> 10.10.0.101 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v
>>>>> '"'"'"'"'"'"'"'"'python3.10'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python3.9'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python3.8'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command
>>>>> -v
>>>>> '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v
>>>>> '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
>>>>> <10.10.0.101> (0,
>>>>> b'PLATFORM\nLinux\nFOUND\n/usr/libexec/platform-python\nENDFOUND\n', b'')
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> 10.10.0.101 '/bin/sh -c '"'"'/usr/libexec/platform-python && sleep 0'"'"''
>>>>> <10.10.0.101> (0, b'{"platform_dist_result": ["centos", "8.5", "Green
>>>>> Obsidian"], "osrelease_content": "NAME=\\"Rocky Linux\\"\\nVERSION=\\"8.5
>>>>> (Green Obsidian)\\"\\nID=\\"rocky\\"\\nID_LIKE=\\"rhel centos
>>>>> fedora\\"\\nVERSION_ID=\\"8.5\\"\\nPLATFORM_ID=\\"platform:el8\\"\\nPRETTY_NAME=\\"Rocky
>>>>>
>>>>> Linux 8.5 (Green
>>>>> Obsidian)\\"\\nANSI_COLOR=\\"0;32\\"\\nCPE_NAME=\\"cpe:/o:rocky:rocky:8:GA\\"\\nHOME_URL=\\"
>>>>> https://rockylinux.org/\\"\\nBUG_REPORT_URL=\\"
>>>>> https://bugs.rockylinux.org/\\"\\nROCKY_SUPPORT_PRODUCT=\\"Rocky
>>>>> Linux\\"\\nROCKY_SUPPORT_PRODUCT_VERSION=\\"8\\"\\n"}\n', b'')
>>>>> Using module file
>>>>> /home/nicolas/test-upgrade-os/config/venv/lib64/python3.8/site-packages/ansible/modules/setup.py
>>>>> <10.10.0.101> PUT
>>>>> /home/nicolas/test-upgrade-os/config/ansible/tmp/ansible-local-3768356wtqis0tq/tmpy4qpsqz0
>>>>>
>>>>> TO
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_setup.py
>>>>> <10.10.0.101> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="kubonode"' -o ConnectTimeout=10 -q -o
>>>>> UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o 'ProxyCommand=ssh
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> '[10.10.0.101]'
>>>>> <10.10.0.101> (0, b'sftp> put
>>>>> /home/nicolas/test-upgrade-os/config/ansible/tmp/ansible-local-3768356wtqis0tq/tmpy4qpsqz0
>>>>>
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_setup.py\n',
>>>>>
>>>>> b'')
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> 10.10.0.101 '/bin/sh -c '"'"'chmod u+x
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/
>>>>>
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_setup.py
>>>>>
>>>>> && sleep 0'"'"''
>>>>> <10.10.0.101> (0, b'', b'')
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> -tt 10.10.0.101 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c
>>>>> '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ztvxikfxzuzwogfymzcnlpfaroxhooqg ;
>>>>> /usr/libexec/platform-python
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_setup.py'"'"'"'"'"'"'"'"'
>>>>>
>>>>> && sleep 0'"'"''
>>>>> Escalation succeeded
>>>>> <10.10.0.101> (0, b'\r\n{"ansible_facts": {"ansible_pkg_mgr": "dnf"},
>>>>> "invocation": {"module_args": {"filter": ["ansible_pkg_mgr"],
>>>>> "gather_subset": ["!all"], "gather_timeout": 10, "fact_path":
>>>>> "/etc/ansible/facts.d"}}}\r\n', b'')
>>>>> Running ansible.legacy.dnf as the backend for the yum action plugin
>>>>> Using module file
>>>>> /home/nicolas/test-upgrade-os/config/venv/lib64/python3.8/site-packages/ansible/modules/dnf.py
>>>>> <10.10.0.101> PUT
>>>>> /home/nicolas/test-upgrade-os/config/ansible/tmp/ansible-local-3768356wtqis0tq/tmpomw666d5
>>>>>
>>>>> TO
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_dnf.py
>>>>> <10.10.0.101> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> '[10.10.0.101]'
>>>>> <10.10.0.101> (0, b'sftp> put
>>>>> /home/nicolas/test-upgrade-os/config/ansible/tmp/ansible-local-3768356wtqis0tq/tmpomw666d5
>>>>>
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_dnf.py\n',
>>>>>
>>>>> b
>>>>> '')
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> 10.10.0.101 '/bin/sh -c '"'"'chmod u+x
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/
>>>>>
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_dnf.py
>>>>>
>>>>> && sleep 0'"'"''
>>>>> <10.10.0.101> (0, b'', b'')
>>>>> <10.10.0.101> ESTABLISH SSH CONNECTION FOR USER: node-user
>>>>> <10.10.0.101> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
>>>>> ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>> -o PasswordAuthentication=no -o 'User="node-user"' -o ConnectTimeout=10
>>>>> -q
>>>>> -o UserKnownHostsFile=ssh/known_hosts -i ssh/node-user -o
>>>>> 'ProxyCommand=ssh
>>>>> -q -o UserKnownHostsFile=ssh/known_hosts -i ssh/bastion-user -W %h:%p
>>>>> -p22
>>>>> [email protected]' -o
>>>>> 'ControlPath="/home/nicolas/test-upgrade-os/config/ansible/cp/09896940d7"'
>>>>>
>>>>> -tt 10.10.0.101 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c
>>>>> '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-gjdfwphkqonajiudmalgairdspobkjad ;
>>>>> /usr/libexec/platform-python
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_dnf.py'"'"'"'"'"'"'"'"'
>>>>>
>>>>> && sleep 0'"'"''
>>>>> Escalation succeeded
>>>>>
>>>>> Before running ansible, I ssh'ed into the node and ran:
>>>>>
>>>>> watch "ps -aux | grep ansible"
>>>>>
>>>>> While ansible was performing the yum update, I saw that the process
>>>>> /usr/libexec/platform-python
>>>>> /home/node-user/.ansible/tmp/ansible-tmp-1691583637.8116903-3768362-148267575047576/AnsiballZ_dnf.py
>>>>> ran for about 10-15 minutes, and after it disappeared, ansible kept
>>>>> running for more than an hour before failing with the following error:
>>>>>
>>>>> worker1 | UNREACHABLE! => {
>>>>> "changed": false,
>>>>> "msg": "Failed to connect to the host via ssh: ",
>>>>> "unreachable": true
>>>>> }
>>>>>
>>>>> I tried using the dnf and package modules, which gave the exact same
>>>>> results.
>>>>>
>>>>> I tried updating a single package (tar) and it worked with yum, dnf
>>>>> and package modules.
>>>>>
>>>>> I’m running ansible on a Rocky Linux 8 machine with python 3.8.16. The
>>>>> worker1 machine is also using Rocky Linux 8 and the output of
>>>>> /usr/libexec/platform-python
>>>>> --version is Python 3.6.8.
>>>>>
>>>>> Should I file an issue in the ansible GitHub repo for this matter? Or
>>>>> am I doing something wrong?