Hello folks, I’m looking for good practices on how to deploy a docker-compose setup from GitHub with Ansible.
At the moment I have a repository, https://github.com/Mercado-Social-de-Madrid/ansible-takahe, that contains the playbook and some instructions (install Docker, generate an SSL certificate). In another repository, https://github.com/AlbertoMoreta/takahe-docker, there’s a docker-compose.yml I want to deploy to my target machines.
I am about to create a systemd .service file that will take care of running `docker compose up` in the appropriate directory, but before that I need to upload the compose file to the target servers. There are naturally several ways of doing this, and I’m looking for best practices: should I merge both repositories (or add one as a submodule of the other)? Should I run `git clone` on the target machine? Should I `scp` the contents? And what do I do when the compose definition changes?
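For reference, this is the rough idea for that unit, expressed as the Ansible task I’d add to the playbook (an untested sketch; paths and names are placeholders):

```yaml
# Sketch of the unit I have in mind: a oneshot service that brings the
# compose stack up on start and tears it down on stop. Paths are placeholders.
- name: Install takahe compose systemd unit
  ansible.builtin.copy:
    dest: /etc/systemd/system/takahe.service
    mode: "0644"
    content: |
      [Unit]
      Description=Takahe docker compose stack
      Requires=docker.service
      After=docker.service

      [Service]
      Type=oneshot
      RemainAfterExit=true
      WorkingDirectory=/opt/takahe
      ExecStart=/usr/bin/docker compose up -d
      ExecStop=/usr/bin/docker compose down

      [Install]
      WantedBy=multi-user.target
```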
Any references to public repos or guides doing similar things are more than welcome.
Hi,
Ultimately it depends on the process you envision: you could, as you suggest, create a systemd service driving your app’s deployment and teardown on the target machine, or you could deploy everything directly from Ansible, running your playbook from a CI pipeline, a cron job, or anything really. As for fetching sources, it depends on where you want to deploy from and on whether some of your repo files need to be available on the target machine (though IMO you should probably deploy those files from an Ansible task instead).
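To illustrate that last point, the Ansible-only route can be as small as the sketch below (host group, paths and the compose file location are placeholders, and it assumes the community.docker collection is installed on the controller):

```yaml
# Sketch: push the compose definition to the target, then let docker compose
# reconcile the running stack. All names and paths are placeholders.
- name: Deploy takahe compose stack
  hosts: takahe_servers
  become: true
  tasks:
    - name: Create the deployment directory
      ansible.builtin.file:
        path: /opt/takahe
        state: directory
        mode: "0755"

    - name: Push the compose file kept alongside the playbook
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /opt/takahe/docker-compose.yml
        mode: "0644"

    - name: Bring the stack up / apply changes
      community.docker.docker_compose_v2:
        project_src: /opt/takahe
        state: present
```

When the compose definition changes you simply re-run the playbook: the copy task updates the file and `docker compose` recreates only the services that changed, which also covers the last question in your post.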
I won’t list a bunch of options here, but I can give you my input on how I’d do things (not sure it would count as best practice, though).
Regarding your repo structure, I’d do things a bit differently; I’d suggest splitting the files into separate repositories:
- Each role in its own repository, with the lint part as a pre-commit / pre-receive hook (sketched below, after the repo trees)
- ‘Deployment’ files (requirements.yml, the playbook calling your roles, inventory and variable files, and perhaps CI pipeline files if that’s how you’d like to trigger your deployments) in a separate one
The idea is to standardize your project structure and allow easy reuse of role components across multiple projects. Here is a structure example from a project I manage:
- ‘gbt_psg’ role repo:

/home/ptouron/Infra_GIT/ansible-roles/gbt_psg/
├── .ansible-lint
├── .gitignore
├── .pre-commit-config.yaml
├── .yamllint
├── README.md
├── defaults
│   └── main.yml
├── files
│   ├── cacerts.vault
│   └── dockerFindNextAvailableSubnet.sh
├── meta
│   └── main.yml
├── molecule
│   └── docker
│       ├── converge.yml
│       ├── molecule.yml
│       ├── prepare.yml
│       └── requirements.yml
├── tasks
│   ├── backend_conf_lint.yml
│   ├── build_frontend-es_image.yml
│   ├── copy_backend_cacerts.yml
│   ├── copy_keycloak_realm_exports.yml
│   ├── create_psg_docker_containers.yml
│   ├── create_psg_docker_networks.yml
│   ├── create_psg_docker_volumes.yml
│   └── main.yml
├── templates
│   ├── Dockerfile_es.j2
│   ├── backend_config.json.j2
│   └── realm-export.json.j2
└── vars
    ├── main.yml
    └── secrets.vault
- ‘psg-deploy’ repo:

/home/ptouron/TEMP/gitlab/psg-deploy/
├── .gitignore
├── .gitlab-ci.yml
├── README.md
├── ansible.cfg
├── inventories
│   ├── host_vars
│   │   ├── .yml  # Masked to avoid information leak
│   │   └── .yml  # Masked to avoid information leak
│   ├── hosts_psg_production
│   └── hosts_psg_staging
├── main.yml
└── requirements.yml
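The .pre-commit-config.yaml you can see at the root of the role repo is what carries the lint part mentioned above; a minimal sketch of it (the rev values are placeholders, pin them to current release tags):

```yaml
# Illustrative sketch: run yamllint and ansible-lint before each commit.
repos:
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1  # placeholder, pin to a real release tag
    hooks:
      - id: yamllint
  - repo: https://github.com/ansible/ansible-lint
    rev: v24.2.0  # placeholder, pin to a real release tag
    hooks:
      - id: ansible-lint
```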
Finally, requirements.yml is what ties the two repos together: it lists each role repository so that ansible-galaxy can pull them in before the playbook runs.
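A minimal sketch of what that can look like (the URL and version here are placeholders, not the actual ones):

```yaml
# Illustrative only: pull each role from its own git repository.
roles:
  - name: gbt_psg
    src: git@gitlab.example.com:ansible-roles/gbt_psg.git
    scm: git
    version: main
```

Something like `ansible-galaxy install -r requirements.yml` (from the CI job, for instance) then fetches the roles before `ansible-playbook main.yml` runs.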