Multiple instances of Ansible to deliver to production and non-production environments

Hi all,
First post here; we are looking to implement Ansible in a new cloud environment for multiple legacy systems.

Sorry, but there is a bit of a story first, and this is more of a setup/component-architecture question: we have a security constraint that production and non-production data centers can never talk to each other. That means components such as version control and configuration management each have to have two instances (prod and non-prod), with separate logins etc. and syncing taking place between them.

I think this is mad, and have managed to talk them down to having only one version control instance.

However, the compromise was a prod and a non-prod Ansible instance, which we can sort of justify by our lack of maturity with the tooling, as we are just starting to implement Ansible.

Do others have experience in running two instances of Ansible, and keeping common config in sync?

e.g.:

  1. One source repository for Ansible config, containing both prod and non-prod config
  2. Two source repositories for Ansible config, one prod and one non-prod

Any steers appreciated.

Cheers
Byron

Hi,

I have never used two source repositories for Ansible config.
But with one source repository, we keep a vars file per environment with all the required variables. Example:

vars/project/staging/project.yml
vars/project/production/project.yml

We then have two separate playbooks at the root of the Ansible project, e.g. project-staging.yml and project-production.yml.
That way you keep the variables and settings separated between environments and load the required vars files in each one.
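
For instance, project-staging.yml could look something like this (a
minimal sketch; the 'project' host group and role name are
hypothetical, and only the vars_files path follows the layout above):

---
# project-staging.yml - loads the staging vars and applies the role
- hosts: project
  vars_files:
    - vars/project/staging/project.yml
  roles:
    - project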

Kind regards,
David Negreira.

I would say you really want to use the same plays in both environments and just keep the data different, otherwise you are ‘testing’ plays against production every time.

Hey Byron,

I'm using Ansible with multiple environments daily and it's very easy
once you unlearn what Ansible documentation presents as a best
practice. :)

First, forget about the /etc/ansible/ directory and its contents.
Instead, in your home directory create separate directories, one for
each "environment" - production, staging, testing, etc. For
example:

~/src/projects/example.com/production/
~/src/projects/example.com/staging/

Additionally create a separate directory for all of your playbooks
that will be applied to both environments:

~/src/projects/example.com/playbooks/

Now, cd into one of the project directories (let's say
~/src/projects/example.com/staging/) and assume that you work from
there. The same steps should be performed in the other project
directory.

Create the Ansible inventory and other required directories, for example:

ansible/inventory/hosts
ansible/inventory/group_vars/all/
ansible/inventory/group_vars/group/
ansible/inventory/host_vars/<hostname>/

ansible/roles/
ansible/playbooks/
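
The ansible/inventory/hosts file itself can start out minimal, for
example (the group and hostnames below are just placeholders):

[webservers]
staging-web1.example.com
staging-web2.example.com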

In the project directory, create 'ansible.cfg' file with contents:

[defaults]
inventory = ansible/inventory
roles_path = ansible/roles/

You can add any options from /etc/ansible/ansible.cfg that you want,
just make sure that they point to your local project directory instead
of the /etc/ansible/ directory. Put your playbooks and roles in the
previously created directories. It's easier if you put them in a
shared common directory and symlink them into the project directories.
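
For example, from the staging project directory (a rough sketch,
assuming shared playbooks/ and roles/ directories directly under
~/src/projects/example.com/, and that ansible/playbooks/ and
ansible/roles/ have not already been created as plain directories):

cd ~/src/projects/example.com/staging/
ln -s ../../playbooks ansible/playbooks
ln -s ../../roles ansible/roles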

Now, when you want to work in the staging or the production
environment, all you need to do is cd into it, and Ansible (using the
local ansible.cfg file) will automatically use that environment. So
for example to run your main playbook, run:

ansible-playbook ansible/playbooks/site.yml -l host

And to switch to production environment, run:

cd ../production

These environments should be completely separate and should use
different hosts. You could try to combine different parts of the
inventory between them, but it might get tricky. Regardless, you
should use the same set of playbooks and roles in both environments to
be sure that the staging environment reflects your production
environment. The only thing that should be different is the contents
of the 'ansible/inventory/' directory, if needed.
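
For example, a per-environment vars file could live at
ansible/inventory/group_vars/all/environment.yml (the file name and
the variables here are hypothetical):

---
# applies to all hosts in this environment's inventory
env_name: staging
app_version: '2.1.0'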

Try it out a couple of times and I think that you will like it. :)

Cheers,
Maciej

Thanks very much guys,

We have a couple of systems that we are going to do a POC on, and we do have clear production, staging and test environments to work to.

We would prefer not to have the complication of two separate Ansible instances, but will start with your recommendations, thanks Maciej. It seems there is, as always, more than one way to do it.

Cheers
Byron

We run the same playbooks against 7 (!) various staging environments,
with a different inventory for each. Per-environment config goes into
the inventory under the [all:vars] section, including things like
versions of RPMs etc.
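
That is, each environment's inventory looks roughly like this (the
group, hostnames and variable names are made up for illustration):

[webservers]
web1.stage3.example.com
web2.stage3.example.com

[all:vars]
app_rpm_version=1.2.3-4
db_host=db1.stage3.example.com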

SSH credentials are managed out of band, but there's a central git
repo to hold all this that can
be checked out to a bastion host on the relevant environment.

We've occasionally branched the repo to allow big changes in playbook
structure to be proven on dev environment configs, but we try to merge
those into master ASAP.

As others have said, a playbook per environment is likely to involve
unproven playbook
runs against production.

(as an aside, the restrictions you are being presented with sound like
'policy scar tissue' after someone
got burned by a different CM solution like Puppet or Chef. The
agentless nature of Ansible makes
it a lot less likely a change can 'leak' into prod without some
operator explicitly wanting it).

Thanks Dick,
‘policy scar tissue’ :')

That's great, I am going to use that. It's more that we have had no CM; it has all been manual in the past. I have tried to argue that by implementing Ansible we do not need the complete separation, so we will have to prove it. The agentless nature of Ansible was the key point for me in going with it.