How to force creating a new ssh connection

Hello,

By default Ansible uses SSH's ControlPersist feature to reuse one ssh connection for running multiple tasks. This is very nice and helpful. However, there is one situation where it is a problem: when I change the sshd configuration, I want Ansible to start a new connection so that it picks up the new config.

My playbook basically looks like this:

`
- hosts: all
  tasks:
    - name: change ssh config from X to Y
      notify: reload ssh

- hosts: all
  tasks:
    - [do more stuff that requires ssh to have config Y]
`

Right now, Ansible reuses the connection established for the first play (when the ssh configuration was X) in the second play, but the second play fails because that connection still reflects configuration X. Can I force Ansible to create a new connection between these two plays?

Note that I don’t want to disable ControlPersist completely because it’s quite useful.

best,
Jan

Take a look at asynchronous actions:
http://docs.ansible.com/ansible/playbooks_async.html
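Presumably the idea there is that the task (or handler) that restarts sshd can be fired off asynchronously, so Ansible doesn't sit on the connection it is about to disturb. A minimal sketch of that pattern, assuming sshd is managed by systemd (the timings are just placeholders):

`
- name: restart sshd without waiting on the connection we are using
  command: systemctl restart sshd
  async: 20
  poll: 0
`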

You can run a command locally to kill the connection:

ssh -O stop -o ControlPath=~/.ansible/cp/ansible-ssh--22-

Here is an article I came across that solves an issue I had:

https://dmsimard.com/2016/03/15/changing-the-ssh-port-with-ansible
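I haven't checked whether this is exactly how the article does it, but the usual trick for a port change is to probe from the control machine which port sshd is actually answering on, and point ansible_ssh_port at it before the rest of the play runs. A rough sketch of that idea (the port numbers and variable names are assumptions, not taken from the article):

`
- name: check whether sshd already listens on the new port
  local_action: wait_for host={{ inventory_hostname }} port=2222 timeout=5
  ignore_errors: yes
  register: new_port_check

- name: keep talking to the old port if the new one is not up yet
  set_fact:
    ansible_ssh_port: "{{ '22' if new_port_check|failed else '2222' }}"
`

You'd also want gather_facts: no on that play so Ansible doesn't try to connect before the port has been sorted out.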

Hello,

thanks a lot! For the sake of people having the same problem as me, here’s a complete task that kills connections to all hosts from the current play:

`
# This will force Ansible to create new connection(s) so that changes in ssh
# settings take effect (normally Ansible uses the ControlPersist feature to
# reuse one connection for all tasks). Note that the path to the socket must
# be the same as what is configured in ansible.cfg.
- name: kill cached ssh connection
  local_action: >
    shell ssh -O stop {{ hostvars[item].ansible_ssh_host }}
    -o ControlPath=/tmp/ansible-ssh-{{ hostvars[item].ansible_ssh_user }}-{{ hostvars[item].ansible_ssh_host }}-{{ hostvars[item].ansible_ssh_port }}
  run_once: yes
  register: socket_removal
  failed_when: >
    socket_removal|failed
    and "No such file or directory" not in socket_removal.stderr
  with_items: "{{ play_hosts }}"
`
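For reference, this only works if the ControlPath built in the task matches what Ansible itself uses. Something along these lines in ansible.cfg would line up with the /tmp path above (this is an assumption about my local config, not Ansible's stock default, which lives under ~/.ansible/cp):

`
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
control_path = /tmp/ansible-ssh-%%r-%%h-%%p
`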

If you have any further suggestions, let me know!

best,
Jan

I forgot that ansible_ssh_port and ansible_ssh_host may be unset. This version should work:

`
# This will force Ansible to create new connection(s) so that changes in ssh
# settings take effect (normally Ansible uses the ControlPersist feature to
# reuse one connection for all tasks). Note that the path to the socket must
# be the same as what is configured in ansible.cfg.
- name: kill cached ssh connection
  local_action: >
    shell ssh -O stop {{ hostvars[item].ansible_ssh_host|default(item) }}
    -o ControlPath=~/.ansible/cp/ansible-ssh-{{ hostvars[item].ansible_ssh_host|default(item) }}-{{ hostvars[item].ansible_ssh_port|default('22') }}-{{ hostvars[item].ansible_ssh_user }}
  run_once: yes
  register: socket_removal
  failed_when: >
    socket_removal|failed
    and "No such file or directory" not in socket_removal.stderr
  with_items: "{{ play_hosts }}"
`
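Side note for anyone finding this thread later: Ansible 2.3 and newer have a built-in way to do this that doesn't require guessing the socket path at all, the reset_connection meta task:

`
- name: drop the persistent ssh connection so the new sshd settings apply
  meta: reset_connection
`

Placed between the task that changes sshd and the tasks that need the new settings, it makes the next task open a fresh connection.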

Sorry to bring up an old thread - I'm trying to use the code below, and whilst it doesn't error and the command appears to run, when I then move on to the next Ansible command I get an unreachable error, which suggests that Ansible doesn't realise it needs to create a new session. Output is below. I tried adding a pause after killing the SSH connection to see if that would make any difference (it doesn't). If I break my playbook down into two components, i.e.

  1. Playbook 1 → Make change to remote shell, then run the SSH session kill code
  2. Playbook 2 → Add interfaces using a command based on the shell change made in Playbook 1

This works fine as two executions of ansible-playbook. Only if I concatenate them into a single play do I get an issue.

Any thoughts?

thanks, Keith

`
TASK [kill cached ssh connection] *******************************************************************************************************************************************************
changed: [172.27.254.170 → localhost] => (item=172.27.254.170)

TASK [wait a few seconds] ***************************************************************************************************************************************************************
Pausing for 10 seconds
(ctrl+C then ‘C’ = continue early, ctrl+C then ‘A’ = abort)
ok: [172.27.254.170]

PLAY [172.27.254.170] *******************************************************************************************************************************************************************

TASK [Ccnfigure Interface eth1 to 1.2.3.4/24 and change state to on] ********************************************************************************************************************
failed: [172.27.254.170] (item=clish -c ‘set interface eth1 ipv4-address 1.2.3.4 mask-length 24’ -s) => {“item”: “clish -c ‘set interface eth1 ipv4-address 1.2.3.4 mask-length 24’ -s”, “msg”: “Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: ( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" && echo ansible-tmp-1508773836.62-247443091731403="echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" ), exited with result 1, stdout output: CLINFR0329 Invalid command:‘/bin/sh -c ‘( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" && echo ansible-tmp-1508773836.62-247443091731403="echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" ) && sleep 0’’.\n”, “unreachable”: true}
failed: [172.27.254.170] (item=clish -c ‘set interface eth1 state on’ -s) => {“item”: “clish -c ‘set interface eth1 state on’ -s”, “msg”: “Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: ( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" && echo ansible-tmp-1508773836.89-36032531829548="echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" ), exited with result 1, stdout output: CLINFR0329 Invalid command:‘/bin/sh -c ‘( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" && echo ansible-tmp-1508773836.89-36032531829548="echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" ) && sleep 0’’.\n”, “unreachable”: true}
fatal: [172.27.254.170]: UNREACHABLE! => {“changed”: false, “msg”: “All items completed”, “results”: [{“_ansible_item_result”: true, “item”: “clish -c ‘set interface eth1 ipv4-address 1.2.3.4 mask-length 24’ -s”, “msg”: “Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: ( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" && echo ansible-tmp-1508773836.62-247443091731403="echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" ), exited with result 1, stdout output: CLINFR0329 Invalid command:‘/bin/sh -c ‘( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" && echo ansible-tmp-1508773836.62-247443091731403="echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.62-247443091731403" ) && sleep 0’’.\n”, “unreachable”: true}, {“_ansible_item_result”: true, “item”: “clish -c ‘set interface eth1 state on’ -s”, “msg”: “Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: ( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" && echo ansible-tmp-1508773836.89-36032531829548="echo CLINFR0329 Invalid command:'/bin/sh -c 'echo ~ && sleep 0''./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" ), exited with result 1, stdout output: CLINFR0329 Invalid command:‘/bin/sh -c ‘( umask 77 && mkdir -p "echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" && echo ansible-tmp-1508773836.89-36032531829548="echo CLINFR0329 Invalid command:'\"'\"'/bin/sh -c '\"'\"'echo ~ && sleep 0'\"'\"''\"'\"'./.ansible/tmp/ansible-tmp-1508773836.89-36032531829548" ) && sleep 0’’.\n”, “unreachable”: true}]}
to retry, use: --limit @/home/krichards/Ansible-Local-Deployment_and_Setup/kr-test.retry

PLAY RECAP ******************************************************************************************************************************************************************************
172.27.254.170 : ok=4 changed=3 unreachable=1 failed=0

`