Problems with Solaris 8 & 9 in Ansible 2.2.0.0

Just started testing the new 2.2 version with my current playbooks, but it already fails during fact gathering on Solaris 8 and 9. Solaris 10 works fine, and
8 and 9 worked fine with 2.1.2.0 and 2.1.3.0. The Python version is 2.6.2 on both Solaris 8 and 9.

Error message:
(2.2.0.0) -bash-4.2$ ansible-playbook --inventory-file=/local/ansible/unix/staging/inventory/hosts site.yml --limit xx.xx.xx.xx.xx -v
Using /local/home/ans_unix/.ansible.cfg as config file

PLAY [First ansible tests] *****************************************************

TASK [setup] *******************************************************************
fatal: [xx.xx.xx.xx.xx]: FAILED! => {"changed": false, "cmd": null, "failed": true, "msg": "Argument 'args' to run_command must be list or string", "rc": 257}
to retry, use: --limit @/local/ansible/unix/staging/site.retry

PLAY RECAP *********************************************************************
xx.xx.xx.xx.xx : ok=0 changed=0 unreachable=0 failed=1

Anyone else seen this?

Can you show the output when using -vvvv?

Using /local/home/ans_unix/.ansible.cfg as config file
statically included: /local/ansible/unix/staging/roles/test/tasks/test_replace.yml
statically included: /local/ansible/unix/staging/roles/test/tasks/test_blockinfile.yml
Loading callback plugin default of type stdout, v2.0 from /local/ansible/install/2.2.0.0/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc

PLAYBOOK: site.yml *************************************************************
1 plays in site.yml

PLAY [First ansible tests] *****************************************************

TASK [setup] *******************************************************************
Using module file /local/ansible/install/2.2.0.0/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<xx.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: root
<xx.xx.xx.xx> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/local/home/ans_unix/.ansible/cp/ansible-ssh-%h-%p-%r xx.xx.xx.xx '/bin/sh -c '"'"'( umask 77 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1479290147.18-43117107564303`" && echo ansible-tmp-1479290147.18-43117107564303="`echo $HOME/.ansible/tmp/ansible-tmp-1479290147.18-43117107564303`" ) && sleep 0'"'"''
<xx.xx.xx.xx> PUT /tmp/tmp8eXZ2E TO //.ansible/tmp/ansible-tmp-1479290147.18-43117107564303/setup.py
<xx.xx.xx.xx> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/local/home/ans_unix/.ansible/cp/ansible-ssh-%h-%p-%r '[xx.xx.xx.xx]'
<xx.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: root
<xx.xx.xx.xx> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/local/home/ans_unix/.ansible/cp/ansible-ssh-%h-%p-%r xx.xx.xx.xx '/bin/sh -c '"'"'chmod u+x //.ansible/tmp/ansible-tmp-1479290147.18-43117107564303/ //.ansible/tmp/ansible-tmp-1479290147.18-43117107564303/setup.py && sleep 0'"'"''
<xx.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: root
<xx.xx.xx.xx> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/local/home/ans_unix/.ansible/cp/ansible-ssh-%h-%p-%r -tt xx.xx.xx.xx '/bin/sh -c '"'"'/local/ansible/bin/python //.ansible/tmp/ansible-tmp-1479290147.18-43117107564303/setup.py; rm -rf "//.ansible/tmp/ansible-tmp-1479290147.18-43117107564303/" > /dev/null 2>&1 && sleep 0'"'"''
fatal: [xx.xx.xx.xx]: FAILED! => {
    "changed": false,
    "cmd": null,
    "failed": true,
    "invocation": {
        "module_args": {
            "fact_path": "/etc/ansible/facts.d",
            "filter": "*",
            "gather_subset": [
                "all"
            ],
            "gather_timeout": 10
        },
        "module_name": "setup"
    },
    "msg": "Argument 'args' to run_command must be list or string",
    "rc": 257
}
to retry, use: --limit @/local/ansible/unix/staging/site.retry

PLAY RECAP *********************************************************************
xx.xx.xx.xx : ok=0 changed=0 unreachable=0 failed=1

Weird, for some reason it is trying to run a 'null' command. I need to narrow it down; can you run Ansible with ANSIBLE_DEBUG=1 and then look at the target machine's syslog? It should log every run_command call.
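For what it's worth, one plausible way to end up with a 'null' command is a path-lookup helper returning None for a tool that exists on Solaris 10 but not on 8/9, with that None handed straight to run_command. A minimal sketch of that failure mode — the helper bodies and the prtdiag example are assumptions for illustration, not Ansible's actual source:

```python
import os

def run_command(args):
    # Mimics AnsibleModule.run_command's argument check, which produces
    # the exact message seen in the traceback above.
    if not isinstance(args, (list, str)):
        raise TypeError("Argument 'args' to run_command must be list or string")
    return 0, "", ""

def get_bin_path(name, paths):
    # Return the first executable match, or None when the tool is absent
    # (hypothetical simplification of the real helper).
    for p in paths:
        candidate = os.path.join(p, name)
        if os.access(candidate, os.X_OK):
            return candidate
    return None

# prtdiag is only an example of a tool that might be missing on old releases.
cmd = get_bin_path("prtdiag", ["/nonexistent"])
try:
    run_command(cmd)  # cmd is None -> TypeError, surfaced as rc=257
except TypeError as e:
    print(e)
```

If fact gathering hits this during hardware or uptime facts, the None never shows up in syslog because the call fails before anything is executed.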

Trying with ANSIBLE_DEBUG=1 doesn't produce anything in the syslog on the Solaris target machine. I also tried with a known working Linux machine, and there all the commands show up in the syslog.

Maybe I should also have mentioned that both Solaris machines are branded zones, if that matters.

Ansible just uses Python's syslog library; if that does not work in the Solaris branded zones … not sure what to do at this point.
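One quick way to check whether Python's syslog binding works at all on the target, independent of Ansible (the ident string and priority here are just examples):

```python
# Run this directly on the Solaris target; if no entry appears in the
# system log afterwards, Ansible's debug logging will not appear either.
import syslog

syslog.openlog("ansible-test")
syslog.syslog(syslog.LOG_INFO, "syslog test from python")
syslog.closelog()
print("message sent")
```

Then check the log the local syslogd is configured to write (e.g. /var/adm/messages on Solaris) for the "ansible-test" entry.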

Also tested on a bare-metal Solaris 8 machine, and it has the same problem, so it does not seem to be related to the zones.
Maybe I should file a proper bug report, but since I'm new to Ansible I want to rule out my own incompetence as the cause first.