Cygwin on Managed Nodes

I recently inherited a number of Windows Server 2012 hosts that have Cygwin installed and a number of data-processing jobs set up to run in that shell/environment. I would like to make it so that the people responsible for running those jobs can do so via Ansible playbooks kicked off from a Jenkins server. I've run into a bit of trouble getting this going, though.
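For context, the sort of playbook I'd like Jenkins to kick off looks roughly like this (the job script path is a placeholder, not one of the real jobs):

- hosts: equinox_ssh
  gather_facts: no
  tasks:
    - name: run a data-processing job in the Cygwin environment (placeholder path)
      raw: /home/jenkins/jobs/process_data.sh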

Here’s a relevant excerpt from my inventory file:

[equinox_ssh]
workserver01-ssh ansible_host=172.17.74.14

[equinox_ssh:vars]
ansible_ssh_private_key_file=/home/agenerette/.ssh/id_jenkins_rsa ansible_ssh_user=jenkins ansible_port=2121

/etc/sshd_config on the target Cygwin setup has:

Port 2121
PubkeyAuthentication yes

And I'm able to run commands on the target via ssh from my admin workstation:

$ ssh -p 2121 -i ~/.ssh/id_jenkins_rsa jenkins@172.17.74.14 'ls -al /cygdrive'
Warning: Permanently added '[172.17.74.14]:2121' (ECDSA) to the list of known hosts.
total 16
d---r-x---+ 1 NT SERVICE+TrustedInstaller NT SERVICE+TrustedInstaller 0 May 29 11:54 c
drwxr-xr-x+ 1 SYSTEM                      SYSTEM                      0 May 17 00:52 d

However, when I try to run Ansible ad-hoc commands, I see:

$ ansible workserver01-ssh -m raw -a 'whoami'

workserver01-ssh | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (password).\r\n",
    "unreachable": true
}

Using the -vvv switch, I can see that this is what's happening:

<172.17.74.14> ESTABLISH SSH CONNECTION FOR USER: agenerette
<172.17.74.14> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/agenerette/.ssh/id_jenkins_rsa ansible_ssh_user=jenkins ansible_port=2121"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=jenkins -o ConnectTimeout=10 -o ControlPath=/home/agenerette/.ansible/cp/33518283d2 172.17.74.14 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''

Permission denied (password).

But I'm not able to tell from that output what might be going wrong. The key being supplied is valid, of course, since it works for the straight ssh call above, and it's associated with the right user.
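(For what it's worth, comparing fingerprints on both ends is a quick way to confirm that pairing; something like the following, assuming the Cygwin OpenSSH is recent enough for ssh-keygen -lf to read an authorized_keys file:)

$ ssh-keygen -lf ~/.ssh/id_jenkins_rsa.pub
$ ssh -p 2121 -i ~/.ssh/id_jenkins_rsa jenkins@172.17.74.14 'ssh-keygen -lf ~/.ssh/authorized_keys'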

Anyone out there happen to know what might be going wrong here? I figured I would run into trouble with the whole win versus non-win module split (win_shell vs. shell), but thought that using the raw module might get me over that hurdle. Also, though I don't know that I'll be able to point ansible_python_interpreter at it, there is a Python 2.7 installation in place on the target(s) that I thought I might be able to use.
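If it comes to that, I was picturing something like this in the inventory (the interpreter path is my guess; I haven't confirmed where Cygwin's Python lands):

[equinox_ssh:vars]
ansible_python_interpreter=/usr/bin/python2.7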

More information on this one…

I turned up the debugging level on that ad-hoc call:

$ ansible -vvvv workserver01-ssh -m raw -a 'ls -al /cygdrive'

And noticed this in the output:

debug1: Remote protocol version 2.0, remote software version CoreFTP-0.3.3\r\ndebug1: no match: CoreFTP-0.3.3\r

Now, I happen to know that CoreFTP is running on the target on port 22; that's why I configured sshd to use port 2121. The debugging output makes it look as though my call is attempting to connect over port 22. So it's as if the ansible_port=2121 setting in my inventory file is being overridden or ignored.
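As a sanity check on that theory, I could grab the banner from each port and then dump what Ansible actually parsed from the inventory (the last command assumes my Ansible is new enough to ship ansible-inventory):

$ ssh -v -o BatchMode=yes -p 22 172.17.74.14 2>&1 | grep 'remote software version'
$ ssh -v -o BatchMode=yes -p 2121 172.17.74.14 2>&1 | grep 'remote software version'
$ ansible-inventory --host workserver01-ssh

The first call should report CoreFTP-0.3.3 and the second an OpenSSH banner; the third shows the variables Ansible ended up assigning to the host.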

Searching the output again, I find multiple instances of this string: