I'd do this without delegation, but with roles. I'd have one role that's parameterized with "instance name", and have it applied three times to one host in testing, and once per host (parameterized with host vars) in production.
Or even better, parameterize the role with a list of mongo instances - then you'd be able to use the same play for tests and production.
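A minimal sketch of that idea - the role name `mongo` and the group name `mongo_servers` are illustrative, not from the original thread:

```yaml
# playbook.yml -- the same play works for both topologies, because
# mongo_instances comes from each host's variables.
- hosts: mongo_servers
  roles:
    - mongo

# roles/mongo/tasks/main.yml -- start whatever instances this host defines
- name: start the mongod instances defined for this host
  service: name=mongod-{{ item.name }} state=started
  with_items: mongo_instances
```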
Mike Trienis <mike.trienis@orcsol.com> wrote:
Hi Michael, thank you for taking the time to respond!
It's a bit of an unusual use case, so I am trying to understand if there is a better way to solve the problem.
I am setting up mongo replication on two types of environments:
1. Single-instance with replication for testing purposes
2. Multi-instance with replication for production purposes
In the single-instance case there are three init scripts on one machine:
- /etc/init.d/mongod-mongo1
- /etc/init.d/mongod-mongo2
- /etc/init.d/mongod-mongo3
Starting all three services on instance A is a simple task for the single-instance topology:
- name: start all three mongodb services
  shell: creates=/var/lock/subsys/mongod-{{ item.name }} /etc/init.d/mongod-{{ item.name }} start
  with_items:
    - { name: mongo1, host: mongo1.example.com }
    - { name: mongo2, host: mongo2.example.com }
    - { name: mongo3, host: mongo3.example.com }
This would be just:

- service: name=mongod-{{ item.name }} state=started
  with_items: mongo_instances
With something like this in the test host's variables:

mongo_instances:
  - name: mongo1
  - name: mongo2
  - name: mongo3
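Concretely, that list could live in a host_vars file for the test machine (the filename here is illustrative):

```yaml
# host_vars/test.example.com -- hypothetical test-host filename
mongo_instances:
  - name: mongo1
  - name: mongo2
  - name: mongo3
```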
However, if we're dealing with a multi-instance topology, I would want to start */etc/init.d/mongod-mongo1* on instance A, */etc/init.d/mongod-mongo2* on instance B, and */etc/init.d/mongod-mongo3* on instance C. Here is a crappy solution to help illustrate the problem.
- name: start all three mongodb services
  shell: ssh root@"{{ item.host }}" creates=/var/lock/subsys/mongod-{{ item.name }} /etc/init.d/mongod-{{ item.name }} start
  with_items:
    - { name: mongo1, host: mongo1.example.com }
    - { name: mongo2, host: mongo2.example.com }
    - { name: mongo3, host: mongo3.example.com }
For this you'd have to set the right instances in each host's variables - the playbook could stay the same.
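For example, reusing the production hostnames from the question, each production host's vars would list just its own instance (file layout is illustrative):

```yaml
# host_vars/mongo1.example.com
mongo_instances:
  - name: mongo1

# host_vars/mongo2.example.com
mongo_instances:
  - name: mongo2

# host_vars/mongo3.example.com
mongo_instances:
  - name: mongo3
```

The test host would instead list all three instances, and the same play would start the right services in both environments.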