Need help organizing tasks/playbooks for multiple operating systems

I’m stuck. I’m sure this can all be done a better way, but right now, I’m just not seeing it. Can anyone offer suggestions of what else to try here?

Originally I started with one operating system, so writing an Ansible playbook for the core OS configuration was simple, and I ended up with a lot of stanzas like this:

    - hosts: all:!tftp-servers
      sudo: true
      tasks:
        - name: Ensure tftpd server removed
          yum: name=tftp-server state=absent

    - hosts: tftp-servers
      sudo: true
      roles:
        - tftp-server

And I did this for EVERY SERVICE (xinetd, vsftpd, httpd, etc.) on my hosts. In short, if it didn’t have to be on, it had to be explicitly disabled. [If there’s a better form for these types of patterns, PLEASE let me know – it’s so verbose and ugly, especially duplicated for every service I have on my boxes.]

Now I’m adding a second OS to my playbooks, so I created a second parallel playbook with similar code customized for the new OS. But now my problem is: when I’m setting up a new box, how do I have Ansible determine which playbook to run? My head is telling me what I want to do is this: create an Ansible playbook ‘os_core.yml’ which can determine the distro/version of the OS on the target box and then execute the correct OS-specific playbook for that distro/version. But how do I do that? I can’t use a playbook include with variables in the path, and I can’t move all this to roles since I need to be able to differentiate plays based upon host groups. I REALLY do not want to create a giant single playbook with rules for host groups like ‘rhel5-tftp-server’ vs. ‘centos6-tftp-server’. That just doesn’t scale.

I’m open to any and all suggestions here.

Thx

Chris.

I’m sure that there are many better ways…

First, you can detect your OS programmatically and add it to the appropriate group… I start with a playbook that has this…
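The playbook itself didn’t survive the archive, so here is a minimal sketch of the detect-and-group approach being described, using Ansible’s group_by module (the group and task names are illustrative, not the original poster’s):

```yaml
- hosts: all
  tasks:
    # Creates dynamic groups named after each host's distribution fact,
    # e.g. "CentOS" or "RedHat".
    - name: Group hosts by distribution
      group_by: key={{ ansible_distribution }}

# Later plays can then target the dynamically created groups:
- hosts: CentOS
  sudo: true
  tasks:
    - name: Example of a CentOS-only task
      yum: name=tftp-server state=absent
```

With that in place, OS-specific plays key off groups Ansible creates at runtime from gathered facts, instead of hand-maintained inventory groups like ‘centos6-tftp-server’.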

I think this comes from some sort of OCD and you may wish to give this up :)

State what should be on the machines, not what should not.

It would be impossible to define all the things a server could not be.

If you really, really need to ensure that software wasn’t installed by mistake, keep package-list dumps and compare against those; that’s much more efficient than checking package by package.
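That comparison can be sketched with `comm` (a hypothetical illustration: in practice the “current” list would come from `rpm -qa`, and the file names are placeholders):

```shell
# Real source of the current list would be:  rpm -qa --qf '%{NAME}\n' | sort
# Stand-in data is used here so the comparison mechanics are visible.
printf 'bash\ncoreutils\ntftp-server\n' | sort > current-packages.txt
printf 'bash\ncoreutils\n' | sort > approved-packages.txt

# comm -13 suppresses lines unique to file 1 and lines common to both,
# leaving only packages installed beyond the approved baseline.
comm -13 approved-packages.txt current-packages.txt
# -> tftp-server
```

Note that `comm` requires both inputs to be sorted, hence the explicit `sort` on each list.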

I have to say that I agree. I was trying to show a better general solution, but in my case I use an initial OS load that puts the absolute minimum on the server. Then I add to it with Ansible.

It’s not so much OCD as it is DISA STIG. The RHEL6 STIG rules explicitly state, for specific services, that if a service is not needed on a host it must be disabled/uninstalled. Granted, I don’t need to do that for every possible service, but I do have to do it for specific ones. What I may end up doing is have a general ‘base’ OS playbook, for when I’m setting up a host, that only turns things on. Then have a separate STIG playbook that I run occasionally to ensure that only the needed services on a given host are actually enabled and the other STIG-identified services are not.

Thx, all.

Chris.

“It’s not so much OCD as it is DISA STIG. The RHEL6 STIG rules explicitly state for specific services that if it’s not needed on a host it must be disabled/uninstalled.”

I did a small amount of consulting around STIG for a previous systems management app company – so I know what you are talking about. Ultimately, those tools are not great at describing something that isn’t there, and this still holds for Ansible, though having a list of services to remove and doing the following is not heinous evil:

    - yum: name={{ item }} state=absent
      with_items: packages_to_remove

Etc.

(Of course if someone installs “banned_package” in /usr/local/you-are-not-going-to-find-it, that’s not a complete solution)
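For completeness, the `packages_to_remove` list referenced above would typically live in inventory variables; a hypothetical group_vars file (the package names are illustrative, substitute whatever your STIG calls out) might look like:

```yaml
# group_vars/all  -- services to explicitly remove, per STIG requirements
packages_to_remove:
  - tftp-server
  - vsftpd
  - telnet-server
```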

Thanks for clarifying the use case!