Client/server is confusing here, I meant server side modules are
something to explore.
Don't explore modifying anything in library.
Correct me if I’m wrong, but it sounds like we might be approaching this with a different set of assumptions. For any reasonably complex or multi-task playbook, trying to establish the future “true state of the system” would be a fool’s errand. If that’s what you’re objecting to, I totally agree. What we’re interested in doing is, for small, relatively atomic changes, having a switch that says “just tell us what you’re planning to do”. For example, for a templated deploy of a config file, show a diff, but do nothing. For missing or out-of-date packages, just output the package and version that would be installed, etc. That said, the diff example might return wholly inaccurate info, depending on whether it ran before the package install. (which I think was one of your points, as well)
Fortunately, after kicking it around the office for a while, we realized that most of the logic can just go into the playbook, and doesn’t need to be complicated by creating a fork. We’ll probably just do our own “check to see if a package is installed” and “do a diff of what would be pushed” (by deploying a temporary copy and then running diff, great idea, Brian) modules, and build them into our playbooks.
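For anyone curious, the “deploy a temporary copy and then run diff” idea is only a few lines of shell. This is a rough sketch, not a real module; the file contents and paths are invented for illustration:

```shell
# Stage the rendered config alongside the live one, diff them, report,
# and clean up -- without touching the real config.
set -eu

live=$(mktemp)    # stands in for the deployed config on the host
staged=$(mktemp)  # temporary copy of what the playbook would push

printf 'port=80\n'  > "$live"    # current state on the host
printf 'port=443\n' > "$staged"  # intended state from the template

# diff exits non-zero when the files differ; branch on that instead of
# letting `set -e` abort the script
if diff -u "$live" "$staged"; then
    result="no change"
else
    result="would modify"
fi
echo "$result"
rm -f "$staged" "$live"
```

A real module would render the template to the staged path first; the diff-and-report step stays the same.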
-Chris
I’m in support of at least a minimalist approach to a dry run switch/param. I want to at least be able to see what config files managed directly by file/template/copy are going to be modified if not also diff’ed so I can confirm those changes are up to date. When you work in an ideal environment and no one has access to your hosts, this might not be useful, but in a multi-admin/user environment I’d assume the worst.
Case in point with us: developers have access to prod. Someone had to put in a really quick hotfix to a config and forgot to also update ansible/puppet. When release day comes, that fix is reverted and bugs are reintroduced. My sanity check during deployment currently with puppet is to use their --noop param so I can first see that the changes coming in are expected before I blindly apply anything. This has saved my tush quite a number of times.
This doesn’t have to be an all fancy solution or anything, but at least have a dry-run option so you can display results of changes as it already does now, but do not apply anything if requested.
So after a bit of a break adding core things, this actually hits
pretty good timing now.
I'll very likely be adding a --check mode in the coming weeks that will
assuredly support files, templates, and hopefully also
packages/services.
Stay tuned!
until then, use the backup option, it is a life saver for rollbacks
(but now you also need a clean_backups.yml).
Yep, I generally think we should make the backup_local code keep N
backups by default (unless configured otherwise).
(For those that didn't know, backup=yes can be added to the various
file/template options and it saves the file with the timestamp
appended to the name)
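A hypothetical clean_backups step along those lines could prune all but the N newest copies. The backup naming scheme below is made up for illustration and does not match Ansible's actual suffix format:

```shell
# Keep the KEEP newest "name.<timestamp>" backups, delete the rest.
# The naming scheme here is illustrative only.
set -eu
KEEP=3
dir=$(mktemp -d)

# Simulate five dated backups of app.conf
for t in 20120101 20120102 20120103 20120104 20120105; do
    touch "$dir/app.conf.$t"
done

# Newest first (these names sort chronologically), skip the first KEEP,
# remove whatever is left
ls -1 "$dir" | sort -r | tail -n +$((KEEP + 1)) | while read -r f; do
    rm -- "$dir/$f"
done

remaining=$(ls -1 "$dir" | wc -l)
rm -rf "$dir"
echo "kept $remaining of 5 backups"
```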
I like the backup option, but it doesn’t cover the use case where my playbook swaps out the working config and then restarts the service because of a notify handler. At that point you’re wondering why I don’t copy it to a staging area before applying, but then that takes away from the simplicity of configuration management. In the end your implementation of --check will be much cleaner and more simple.
I’ve honestly used it so much that I’ve relied on it just for sanity checks when something goes wrong in production and developers try to point fingers. I just run a puppet --noop and redirect the blame right back. It’s just a great insurance measure.
Cool.
I am going to start on this today, though we'll see if I violate my
own code freeze and merge it for release or not
+1 on violating that code freeze.
Really appreciate your work BTW!
Hi, Michael
I totally agree with you. Today I had an incident which showed how much truth is in your words.
I have a playbook which installs a set of packages and then performs a few tasks to run and enable the services that were installed earlier, in previous tasks.
So, in “dry run” mode the service-enabling task would fail, because ansible will try to check the actual state of the services in the actual, real OS, BUT there are no packages installed yet and no init scripts for the services.
In “run for real” mode the task would not fail, because all the needed stuff will be installed “for real” on the actual system.
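The ordering problem described above can be mimicked in a few lines of shell. The path and service name are invented for illustration; the point is only that the later task inspects real on-disk state that the skipped package task never created:

```shell
# In a dry run the package task is skipped, so the init script that the
# later service task looks for never appears on disk.
set -eu
root=$(mktemp -d)                         # pretend filesystem root
init_script="$root/etc/init.d/myservice"  # invented path and name

# (a real run would have installed the package and created this file)

if [ -x "$init_script" ]; then
    status="present"
else
    status="missing"
fi
echo "init script is $status -- a dry run would fail at this task"
rm -rf "$root"
```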