I’ve been using ansible for a few months, and just got around to reorganizing our playbooks to follow the latest best practices. I wanted to share my project structure, hoping that someone can spot if I’m doing something terribly wrong and can suggest a better approach. At the very least I hope that this is an interesting read to someone.
Our environment / use cases:
We have three environments (dev, stage, prod), and while a given app is deployed the same way across all envs, it runs as a different user in each env (e.g. dev-appsvc, stg-appsvc, prd-appsvc).
We use jenkins to run our playbooks. We have jobs named “upgrade environment - staging”, and such.
We need flexibility when working with certain config files. For example: we’re testing an nginx site config in staging, but this change isn’t ready for prod. Meanwhile, we need to update prod with a hotfix! We’d like to update the app in prod, but let it keep its current nginx config (since the config in staging isn’t ready yet).
To address #1 and #3, I used the following folder structure:
Thanks for this! An interesting read for me anyway as I’m similarly geared on my end (3 different environments), with a few different tweaks:
Use of group_vars to set the "environment" var and other environment-specific vars (to avoid passing in --extra-vars="environment=" and/or "-i environment" each time). This keeps vars for an environment like staging or dev in one place, but I do like your approach of coupling the vars to the roles that use them. I believe, though, based on my understanding of variable precedence, that if you wanted to override any of those role-based vars you’d have to use --extra-vars (perhaps my needs are different, as I opted to override vars on a per-host level when needed).
To keep configs as "DRY" as possible, I use one template with logic for handling different environments (such as having output caching disabled in dev, but not on stage or prod). If I need to test a config change I can just run the related tasks against dev/staging first, test, then run against production. What I like about your approach (with separate configs per environment) is that it can keep the configs a bit more readable, without all this conditional environment logic. The trade-off is that updates then have to be made in multiple configs (in my use case).
Would definitely be curious to hear the feedback from others as well!
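To make the two approaches above concrete, here is a minimal sketch (file names, variable names, and values are hypothetical, not taken from either poster's setup): a group_vars file that pins the environment name once, and a single DRY template that branches on it.

```yaml
# group_vars/staging.yml (hypothetical) -- vars shared by every staging host
env: stg
app_user: "{{ env }}-appsvc"   # yields stg-appsvc on staging hosts
output_caching: true           # would be false in group_vars/dev.yml

# templates/app.conf.j2 (hypothetical) -- one template with env logic,
# shown here as comments:
#   user {{ app_user }};
#   {% if output_caching %}
#   cache on;
#   {% endif %}
```

With separate per-environment config files instead, the `{% if %}` logic disappears but the same change must be applied to each environment's copy.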
We've been thinking of deploying Ansible, but we haven't put in the time to create our structure.
Have you considered using version control to keep 3 separate branches and deploying from those?
We have a few places in our configuration files where we specify the environment the application is running in. We could just turn these files into templates, where a simple variable substitution with 'test', 'prod', or 'dev' would be enough.
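For that simple substitution case, a one-variable Jinja2 template would be enough. A sketch (file names and the `env` variable are hypothetical):

```yaml
# templates/app.properties.j2 (hypothetical) -- only the env name varies,
# shown as a comment since the template itself is not YAML:
#   app.environment={{ env }}

# group_vars/prod.yml (hypothetical)
env: prod
# group_vars/test.yml would set env: test, and group_vars/dev.yml env: dev
```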
For me, I do keep separate branches for changes and then just apply those updates from that branch to test. If it all checks out, I’ll pull request the branch and deploy everywhere, but I don’t keep separate long-lived branches where each branch represents an environment…
I’m also in the process of marching towards using Ansible and Vagrant together to better test updates before they go to any stage or production environments (instead of testing updates directly against, say, staging, where other devs/clients are actively using it)…
You wouldn’t want to put /everything/ in branches, because you don’t want to tweak development environment specific values to production values when merging.
I recommend using different inventory locations for QA/Stage/Prod and also putting hosts into different groups, so that you can apply group_vars for different things, like if the development/stage environments had a different branch name or something similar.
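One way to sketch that separation (host names and groups are hypothetical) is an inventory directory per environment, here using Ansible's YAML inventory format:

```yaml
# inventories/staging/hosts.yml (hypothetical) -- a playbook run with
# -i inventories/staging can never touch prod hosts by accident.
all:
  children:
    webservers:
      hosts:
        stg-web01.example.com:
    dbservers:
      hosts:
        stg-db01.example.com:

# inventories/staging/group_vars/all.yml would then carry staging-wide
# vars, such as the environment name or a staging-specific branch name.
```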
(B)
“environment” is a reserved variable in Ansible; I’d name this something like “env”. The reason is that “environment” is the hash table of environment variables to pass to the server, as set by the “environment” keyword.
Do not pass “environment” as a variable and life should be good.
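The clash is with the task-level `environment:` keyword. A sketch of the two uses side by side (task contents and values are hypothetical):

```yaml
# "environment" here is the reserved keyword: a hash of shell environment
# variables set for the task -- not a free-form variable of your own.
- name: run the app's migration script
  command: /opt/app/bin/migrate
  environment:
    HTTP_PROXY: http://proxy.example.com:8080

# So keep your own variable under a different name, e.g. in group_vars:
env: stg
```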
(C)
Branches are otherwise good for reflecting VERSIONS of content, but branches shouldn’t be used to keep environmental differences.
For instance, if I am working on the 1.8.5 release of my app, I’d like to be able to keep the development and prod variables for this in the same branch. Let’s call this branch “devel”. When I want to maintain the release, I can branch to “1.8.5” as a maintenance branch.
Hope this makes sense and would be happy to offer other suggestions/etc.
TLDR: group vars good, and branch for versions, not environmental differences. Also, separate inventory files are VERY good to prevent deploying to a set of servers when you don’t want to!
If you want to deploy off of that “1.8.5 maintenance branch”, would you check it out on your Ansible deploy server?
I can imagine that app "foo" has a playbook to install it. In version 1.8.6 it changes. I want to use the 1.8.6 version to deploy to “dev” but the “1.8.5 maintenance” version to deploy to “production”. Can these all be on the Ansible server at the same time? Or do I need to check out the right version of my Ansible files first and then run the commands?