I am probably doing this completely wrong…
The playbook is for a system running OpenVZ.
It tries to clone an OpenVZ container from an existing template, then set up SSH inside it using the chroot connection plugin.
The container is then booted.
Finally, it attempts to SSH into the container to make sure that everything has worked.
My vars file (vars/setup.yml):
vzhome: /vz
veprivate: $vzhome/private/$veid
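With veid=248 from the hosts file below, veprivate should expand to /vz/private/248. For what it's worth, under the newer Jinja2 syntax (Ansible 1.2+ — I'm on the legacy $var style, so this is just the equivalent, untested), the same vars would read:

```yaml
# Equivalent vars in Jinja2 syntax (assumes Ansible 1.2+)
vzhome: /vz
veprivate: "{{ vzhome }}/private/{{ veid }}"
```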
My hosts file:
[container_deploy]
scratch248.example.com veid=248 suite=wheezy arch=amd64
The relevant part of my playbook:
- hosts: container_deploy
  gather_facts: no
  connection: local
  vars_files:
    - vars/setup.yml
  …
  - include: tasks/clone.yml

  - name: Clean existing known_hosts entry
    command: ssh-keygen -R $inventory_hostname

  - include: tasks/initialconfig.yml ansible_connection=chroot inventory_hostname=$veprivate

  - include: tasks/verify.yml ansible_connection=ssh
The variable inventory_hostname is not passed through to the included tasks/initialconfig.yml file the way ansible_connection is. I assume it is a bit special. I get the following error:
TASK: [Create Openvz configuration] *******************
changed: [scratch248.example.com]
TASK: [Rebuild locales] *******************************************************
fatal: [scratch248.example.com] => scratch248.example.com is not a directory
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/var/tmp/ansible/deploy.retry
scratch248.example.com : ok=6 changed=4 unreachable=1 failed=0
Any suggestions for how to do this in a more elegant manner, or at least one that will work?
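One alternative I have been wondering about (hypothetical group name, untested): since the chroot connection plugin uses the inventory host name as the chroot directory (which is presumably why it complains that scratch248.example.com is not a directory), the chroot phase could perhaps target a separate inventory group whose "host" is the container's private directory:

```ini
; Hypothetical sketch: list the chroot path itself as the host entry,
; so the chroot connection plugin receives a directory, not a DNS name.
[container_chroot]
/vz/private/248 ansible_connection=chroot veid=248
```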
Thanks.