SSH not working to client machine (ssh from command line works) - Ansible 2.1.0

When I run ansible all -m ping -vvvv from the controller to the client (ubuntu) I get the following -

Loaded callback minimal of type stdout, v2.0
<10.10.128.0> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 10.10.128.0
<10.10.128.2> ESTABLISH SSH CONNECTION FOR USER: None
<10.10.128.2> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.128.2 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472067867.19-242906541295629 `" && echo ansible-tmp-1472067867.19-242906541295629="` echo $HOME/.ansible/tmp/ansible-tmp-1472067867.19-242906541295629 `" ) && sleep 0'"'"''
10.10.128.2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}

I verified that ssh works fine if I run it from the command line on the controller.

Any ideas? I verified that the same user exists on both machines (admin). admin was used to set up ssh.

Ansible itself was installed by root on the controller.

Any ideas? Thanks

Anyone?

When I run ansible all -m ping -vvvv from the controller to the client
(ubuntu) I get the following -

Loaded callback minimal of type stdout, v2.0
<10.10.128.0> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT
5986 TO 10.10.128.0
<10.10.128.2> ESTABLISH SSH CONNECTION FOR USER: None

Since the user is None, no user is specified, so Ansible will use the user that is running ansible-playbook.
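If the user on the controller and the user on the target differ, you can state the remote user explicitly instead of relying on the default. A minimal sketch, assuming the `admin` user and the host from this thread; the group name `clients` is hypothetical:

```shell
# Option 1: pass the remote user on the command line
ansible all -m ping -u admin -vvvv

# Option 2: pin the user per host in the inventory (INI format),
# e.g. in /etc/ansible/hosts:
#
#   [clients]
#   10.10.128.2 ansible_user=admin
```

Either way, the -vvvv output should then show "ESTABLISH SSH CONNECTION FOR USER: admin" instead of "None".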

<10.10.128.2> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o
ControlPersist=60s -o KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o ConnectTimeout=10 -o
ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.128.2 '/bin/sh -c
'"'"'( umask 77 && mkdir -p "` echo
$HOME/.ansible/tmp/ansible-tmp-1472067867.19-242906541295629 `" && echo
ansible-tmp-1472067867.19-242906541295629="` echo
$HOME/.ansible/tmp/ansible-tmp-1472067867.19-242906541295629 `" ) && sleep
0'"'"''
10.10.128.2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}

I verified that ssh works fine if I run it from the command line on the
controller.

Any ideas? I verified that the same user exists on both machines (admin).
admin was used to set up ssh.

So you are logged in as admin, running ansible-playbook, and this user can log in to the node 10.10.128.2?

No - all I am doing here is a ping to validate that the controller and target can communicate. This should work before I set up playbooks.

Since I read somewhere that it makes sense to set up a dedicated user on each machine for use with Ansible, I had the admin user on each server set up previously, or it was created during the OS install (I can't recall).

All I did was create the ssh keys on each box as the admin user and then copy them to the other VM.
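For reference, a typical key exchange as the admin user might look like the following sketch. The host IP comes from this thread; it assumes password authentication is still enabled on the target for the initial copy:

```shell
# Generate a key pair for admin on the controller
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# Install the public key into admin's authorized_keys on the target
ssh-copy-id admin@10.10.128.2

# Verify that non-interactive login works, since the Ansible ssh command
# shown above runs with -o PasswordAuthentication=no
ssh -o PasswordAuthentication=no admin@10.10.128.2 true
```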

Any ideas how to resolve?

Yes, I am logged into the controller as admin, and yes, as admin I can run ssh 10.10.128.2 (client IP) and connect just using the command line.

I used a brand new Ubuntu 16 desktop VM and the ping works... issue resolved.