MongoDB Ansible Deployment

Hello,

So I am trying to convert a bunch of Ansible playbooks that were used to deploy MongoDB into a more generalized group of roles for the deployment. I have loosely based it on the Ansible Example MongoDB Deployment Repository. The thing I am having trouble with now is the mongod role and updating the startup and configuration files on the arbiter/replica servers. In the original deployment, I have a play that looks like the one below.

- hosts:
  - rs1
  remote_user: root

  tasks:
    - name: Create the data directory for the replica sets
      file: path=/data/db state=directory

    - name: Copy the daemon configuration file over to the replica sets
      copy: src=/ansible/MongoDB/roles/mongod/templates/mongod_rs1.conf dest=/etc/mongod.conf

    - name: Copy the rc.local file over to the replica servers to start the mongod services at boot
      copy: src=/ansible/MongoDB/roles/mongod/templates/rc_rep.local dest=/etc/rc.d/rc.local mode=0755

The problem I am having stems from the hosts: there are 20 replica sets, and each set has 2 replication servers. Now, it is easy to create the data directories and the startup file on each replication server, since they all get the same path and startup file, so I can simply run the same task for every server in those groups. The place where I struggle, though, is the configuration file. The 2 servers in each replica set get that set's own configuration file, which is different from all the other sets' configuration files. My thought on how to get this to work, without writing a different play for each host, was to use the inventory_hostname variable as I ran through the playbook. The way I had it set up was like this:

- name: Create the mongodb configuration file for each set of replica servers
  template: src=ansible/MongoDB/roles/mongod/templates/mongod_{{ inventory_hostname }}.conf dest=/etc/mongod.conf

I thought, then, that this would drop the correct mongod.conf file onto each server, because I assumed the variable would expand to the group name, like "rs1", and it would all just work. Writing it out made me realize, though, that inventory_hostname expands to the individual host names, "ghmrep1" and "ghmrep21" for example, instead of the group name "rs1". Then I thought about just renaming the mongod configuration files in the templates folder to something like "mongod_ghmrep1" so that they would match the hostnames. That seems unnecessary, though, and it would also double the number of files I have to maintain. Is there any simpler way to go about this play where I could create/copy all the files over in one fell swoop?
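For reference, my inventory groups are laid out roughly like this (the host names and pairings below are only an illustration of the pattern, not the real file):

# Illustrative inventory only - the real file has 20 rs* groups,
# each containing 2 replication servers.
[rs1]
ghmrep1
ghmrep21

[rs2]
ghmrep2
ghmrep22

# ... and so on up to rs20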

Hello,

> I thought, then, that this would drop the correct mongod.conf file onto each server, because I assumed the variable would expand to the group name, like "rs1", and it would all just work.

If I understand correctly and you need to use the group name, then make it a variable, like:

- hosts: "{{ hosts }}"

and then set the extra variable at run time with -e "hosts=rs1", and use the same variable for the conf file name as well: mongod_{{ hosts }}.conf
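A rough, untested sketch of how that could look (paths and names carried over from the play in your post; the playbook file name in the run command is just an example):

- hosts: "{{ hosts }}"
  remote_user: root

  tasks:
    - name: Copy the per-replica-set daemon configuration file
      template: src=/ansible/MongoDB/roles/mongod/templates/mongod_{{ hosts }}.conf dest=/etc/mongod.conf

Then run it once per group, e.g.:

ansible-playbook mongod.yml -e "hosts=rs1"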

Alternatively, if your groups are named like rs1, rs2, and so on, then you can use the integer sequence loop to generate the group names dynamically at play time: http://docs.ansible.com/ansible/playbooks_loops.html#looping-over-integer-sequences
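One rough way to use that here (untested sketch; it assumes the config templates are named mongod_rs1.conf through mongod_rs20.conf and live where the template module can find them):

- name: Copy the per-replica-set configuration file by checking each rsN group
  template: src=mongod_rs{{ item }}.conf dest=/etc/mongod.conf
  with_sequence: start=1 end=20
  when: ('rs' + item) in group_names

Each host only matches the one sequence value whose rsN group it belongs to, so the task is skipped for the other values.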

It sounds like you want 2 roles that are almost the same; I'd try explicitly making 2 roles.
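For example (just a guess at what the two roles might be; the role names are made up), one role per server flavor:

roles/
  mongod_replica/
    tasks/main.yml      # data dir, rc.local, per-set mongod.conf
    templates/
  mongod_arbiter/
    tasks/main.yml      # arbiter-specific startup and configuration
    templates/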