Same host in multiple host groups

Hi,

I am experiencing something similar to this issue: "Ansible group_vars from inventory don't support repeated hosts" (https://github.com/ansible/ansible/issues/17243).

My main playbook is:

site.yml

```
- name: Configure acceptance server
  hosts: acceptance
  roles:
    - common
  tags: acceptance

- name: Configure production server
  hosts: production
  roles:
    - common
  tags: production
```

hosts

```
[acceptance]
127.0.0.1 ansible_connection=local

[production]
127.0.0.2 ansible_connection=local
```

In the group_vars I have the following files:

acceptance

```
# Application
app_name: snow_proxy
cert_name: vcpe-le-snow-proxy-acc

# Web Server
## deploy_ssl true for HTTPS, false for only HTTP
deploy_ssl: true
## local directory SSL certificates should be located here
ssl_certs_dir: /home/mid/certs
```

production

```
# Application
app_name: snow_proxy
cert_name: vcpe-le-snow-proxy-acc

# Web Server
## deploy_ssl true for HTTPS, false for only HTTP
deploy_ssl: true
## local directory SSL certificates should be located here
ssl_certs_dir: /home/mid/prod/certs
```
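
Those files live in a `group_vars/` directory named after the groups; a sketch of the layout I am assuming (Ansible picks up `group_vars/` next to the inventory file or next to the playbook):

```
.
├── site.yml
├── hosts
└── group_vars/
    ├── acceptance
    └── production
```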

When running the playbook I expect the correct set of variables to be loaded depending on the tag, but in both cases it seems the group_vars override each other because the same host is repeated in both groups.

```
ansible-playbook -i hosts site.yml --tags acceptance

Unable to find '/home/mid/prod/certs/vcpe-le-snow-proxy-acc.crt' in expected paths.
```

```
ansible-playbook -i hosts site.yml --tags production

Unable to find '/home/mid/prod/certs/vcpe-le-snow-proxy-acc.crt' in expected paths.
```

What is the play that uses ssl_certs_dir?

From Ansible's point of view these are two separate hosts, and those hosts have separate variables, just like any other hosts in the Ansible inventory.
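
A quick way to confirm which values each group resolves (a minimal sketch, not something from the original exchange; the file name debug_vars.yml is made up) is a pair of debug plays:

```
# debug_vars.yml -- hypothetical helper playbook, not from the thread.
# Each play prints the certificate path its group resolves, so you can see
# whether group_vars/acceptance and group_vars/production are both honoured.
- name: Show variables resolved for the acceptance group
  hosts: acceptance
  gather_facts: false
  tasks:
    - name: Print the certificate path built from group_vars
      debug:
        msg: "{{ ssl_certs_dir }}/{{ cert_name }}.crt"

- name: Show variables resolved for the production group
  hosts: production
  gather_facts: false
  tasks:
    - name: Print the certificate path built from group_vars
      debug:
        msg: "{{ ssl_certs_dir }}/{{ cert_name }}.crt"
```

Run it with `ansible-playbook -i hosts debug_vars.yml`; each play should print its own path if the group_vars files are being picked up separately.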


This works on Ansible 2.6.3 and most if not all other versions; if it didn't, Ansible would have failed in many situations.

Are you sure you haven't hard coded the value in the common role?
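
For what it's worth, one way a value can end up "hard coded" (a hypothetical sketch, not something shown in the thread): if the role sets the variable in roles/common/vars/main.yml, role vars rank above inventory group_vars in Ansible's precedence order, so both environments would see the same path.

```
# roles/common/vars/main.yml -- hypothetical example of an accidental override.
# Role vars take precedence over inventory group_vars, so this single value
# would win for both the acceptance and the production play.
ssl_certs_dir: /home/mid/prod/certs
```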

I just checked the Ansible version; it was quite old, version 2.2.

I updated, and now it is version 2.6.
The earlier problem is resolved, though I have another problem now.
When running the playbook below, it throws an error. I am running in Vagrant.

```
- name: ensure Nginx is installed via the system package
  apt: name=nginx state=present update_cache=yes
  sudo: yes


ansible-playbook -i hosts site.yml --tags acceptance

TASK [common : ensure Nginx is installed via the system package] ***********************************************************************************************
fatal: [195.121.71.148]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
```

Does this have to do with the user?

In my site.yml I have the following:

```
- name: Configure acceptance server
  hosts: acceptance
  user: vagrant
  roles:
    - common
  tags: acceptance
```

If you check the top of the output you'll see something about ignoring sudo or something like that.
That's because sudo is deprecated in favor of become and was partially removed in Ansible 2.6.

Change "sudo: yes" to "become: yes" and it should work.