Hi
As Ansible/AWX has become one of our most important tools over the last few years, it's time to think about how to better organize our projects and our coding practices.
At the moment we host all of our Ansible/AWX projects in a sub-group on a GitLab instance. I've noticed that many of the roles we've developed are common from project to project, as are some vars files (e.g. HashiCorp Vault access, URLs, …).
Here are my open questions:
What are the best practices for streamlining common things like roles, vars, and other shared Ansible resources?
Also, how do you automate role/task testing from tools like GitLab every time something changes (Ansible update, module update, execution environment update, …)?
Put another way, how do you set up projects to make collaboration between teammates easier?
We’re an IT organization in an academic environment. I’m not sure how much that changes these answers; I mention it to give you some context.
We developed a suite of common roles before collections were mature. If we had it to do over again, we’d probably use collections. For things other than task files - think plugins of various sorts - collections are the way to go.
Since Ansible’s implementation of collections matured, we’ve kind of dumped new work in our catch-all “common collection”. This would make more sense as multiple collections if we had more roles or plugins, but it’s only a small handful at this point. There has been no incentive to move our old common roles into collections because they work just fine as they are.
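To make that concrete: any project can pull a shared collection straight out of a git repo with a collections/requirements.yml. The names and URL below are placeholders, not our actual setup, and AWX will also pick that file up during a project update.

```yaml
# collections/requirements.yml -- hypothetical example; repo name and URL are placeholders
collections:
  - name: https://gitlab.example.edu/ansible/common.git
    type: git
    version: main   # a tag or commit is better if you want reproducible runs
```

Then `ansible-galaxy collection install -r collections/requirements.yml` on the command line, or let AWX install it when it syncs the project.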
Our pushes into our gitlab instance and merge requests trigger Jenkins jobs that run ansible-lint. When appropriate, Jenkins also scans our AWX instance looking for projects that need an SCM update but aren't configured to force one before a job. This was all set up before gitlab runners were fashionable, and we've done little with runners since because the old Jenkins jobs work just fine.
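We don't actually use runners, so treat this as a sketch rather than something we run, but the runner equivalent of our lint job would look roughly like this (the image and install step are assumptions; pin whatever you standardize on):

```yaml
# .gitlab-ci.yml -- rough sketch of a lint job on pushes to the default branch and on MRs
stages:
  - lint

ansible-lint:
  stage: lint
  image: python:3.12-slim          # any image with Python works
  before_script:
    - pip install ansible-lint     # pulls in ansible-core as a dependency
  script:
    - ansible-lint
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```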
We don’t have CI/CD test suites. Instead we test manually whatever we’ve been working on before merging. That works for us because we have few changes, and each of our service lines is so niche. I really don’t see how to implement rigorous testing in our existing projects. It sounds like a good idea, though!
We migrated our entire puppet config to Ansible many years ago, and there was a lot of learning and making mistakes along the way. And Ansible itself was undergoing significant change. All of that has settled down considerably. We have a dozen or so common roles - each of which is its own project - plus basically one project per “service line”. We rarely have multiple feature branches in play in any given project, and we have a small, collaborative team (7 people). And we’ve been totally work-from-home since the Spring of 2020, so we practically live in Slack. (Those Jenkins jobs I mentioned before post notices in our Slack channels when appropriate.)
These days, on the rare occasion we need a new project, we run a Jenkins job that creates a gitlab project, clones our “skeleton” project in our gitlab instance, sets up permissions, web hooks, initial tokens, etc. We run it every couple of months just to make sure it all still works, then delete the resulting test project.
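If you wanted the project-creation part driven by Ansible instead of Jenkins, the community.general GitLab modules can handle it. This is only a hypothetical sketch, with the group, names, URLs, and token variable all made up, not what our Jenkins job does:

```yaml
# Hypothetical sketch: create a project in a GitLab group and seed it from a skeleton repo.
- name: Create project from skeleton
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the gitlab project
      community.general.gitlab_project:
        api_url: https://gitlab.example.edu       # placeholder URL
        api_token: "{{ gitlab_token }}"           # placeholder variable
        group: ansible
        name: new-service-line
        visibility: private
        import_url: https://gitlab.example.edu/ansible/skeleton.git
        state: present
```

Permissions, web hooks, and tokens would still need their own steps after that.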
If we were starting from scratch and hadn’t already been using Jenkins, then we’d be using gitlab runners more, and I’d like to think we’d build test suites. But a lot of what we do can’t really be tested without standing up instances of our service lines, so I’m not sure what that would look like.
So maybe we aren’t the best example of best practices, but I hope this gives you some idea of how somebody runs Ansible. Feel free to ask any follow-up questions you might have.