Cisco.ios.ios_config module reporting odd local path error

I have a weird issue that I have now posted on three or four forums, and so far no one has any idea what the cause could be.

I have an existing playbook that suddenly started failing out of the blue. It was working just last week.

I have a series of API calls from our password manager to gather some data, then a play to set some facts related to those fields, and then this play that is failing:

- name: Backup configuration from switch stacks....
  cisco.ios.ios_config:
    backup: true
    backup_options:
      filename: "{{ file_string }}.txt"
      dir_path: "/home/user_account/backups/{{ inventory_hostname }}"

The traceback is very non-specific and occurs whether I run against a single host or several. Other playbooks running different modules are unaffected, and I have removed and re-downloaded the cisco.ios collection.

fatal: [hostname]: FAILED! => changed=false
msg: socket path /home/user_account/.ansible/pc/e1b79382e1 does not exist or cannot be found. See Troubleshooting socket path issues in the Network Debug and Troubleshooting Guide

I have read the guide it mentions and have enabled the verbose network logging, but there is nothing obvious in the output and nothing pointing in a direction to troubleshoot from.
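For reference, the logging I enabled boils down to this in ansible.cfg (option names are from the Ansible configuration docs; note that log_messages dumps the full device interaction into the log, which the docs warn can include sensitive data):

ansible.cfg

[defaults]
log_path = ~/ansible.log
debug = True

[persistent_connection]
log_messages = True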

I have seen a couple of suggestions to downgrade Ansible to 2.14, where similar issues reportedly don't happen, but that forum was very non-specific about why. Another suggestion was to try LibSSH instead of Paramiko, but I have had other issues with LibSSH on Cisco playbooks before.
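For anyone who wants to rule the transport in or out, it looks like it can be pinned explicitly via the ssh_type option of the ansible.netcommon.network_cli connection plugin (section and key per the plugin docs, so double-check against your version; paramiko here is just the value I would test with first):

ansible.cfg

[persistent_connection]
ssh_type = paramiko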

I am currently running ansible-core 2.17.0 with Python 3.10.12 on Ubuntu 22.04.


If you're using ansible-navigator, you might verify that you're running that task on localhost and that it's not searching for the path within an EE container.
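A quick way to test that (flag and settings schema per the ansible-navigator docs, so verify against your version) is to disable the execution environment, either with --execution-environment false on the command line or in an ansible-navigator.yml next to the playbook:

ansible-navigator.yml

ansible-navigator:
  execution-environment:
    enabled: false

If the backup then lands under /home/user_account/backups/ as expected, the path was previously being resolved inside the container.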

The catch is that this playbook was working at one point, and for months.

I am thinking it's either update-related or something got corrupted, but I am not finding any direction for troubleshooting which it is.

I have seen this issue occur on long-running tasks. Ansible creates a control socket on the control node to keep SSH sessions alive for a period of time, which reduces the performance overhead of repeatedly re-authenticating with remote hosts between commands sent over SSH. Ansible's default is ControlPersist=60s, so the socket expires after one minute of inactivity.

The solution in my case was to increase the ControlPersist value to 600s:

ansible.cfg

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=600s -o ControlPath=%d/.ansible/ssh/%r@%h

I use OpenSSH though, not LibSSH or Paramiko, and I don't manage network devices, so your mileage may vary.
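If the same idea carries over to the network side, the analogous knobs would seem to be the persistent-connection timeouts rather than ControlPersist, since ios_config goes through ansible-connection (that /home/user_account/.ansible/pc/ socket) rather than a raw SSH control socket. Option names below are from the Ansible configuration docs; 600 is just an example value mirroring mine:

ansible.cfg

[persistent_connection]
connect_timeout = 600
command_timeout = 600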
