When setting up a service remotely, in this case OpenVPN, I'd like to install it on two machines that already have keepalived running between them and have it listen on the floating address, so all requests go to the "active" node instead of the standby.
Stouts.openvpn runs on a single host and does all its work on that host.
Fetching /etc/openvpn to the control machine and then copying it to the standby would leave the secrets in plaintext without a vault to hold them, so that's no good.
Copying from the master directly to the standby would need agent forwarding, which I'd like to avoid, or an rsync service, and I don't want to run extra services.
Maybe a script to randomly generate a key, then archive, compress, and encrypt the folder, fetch it to the control machine, copy it to the standby, and decrypt it there. That could go in a handler when run on the active node, with an existence check on all standbys. Seems like a lot of work for such a simple thing.
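Roughly what I have in mind; untested, and active_host plus all the names and paths here are placeholders:

# one random passphrase, generated on the control machine by the password
# lookup and set on the first play host so every node can read it back
- set_fact:
    transfer_pass: "{{ lookup('password', '/dev/null length=32 chars=ascii_letters,digits') }}"
  run_once: true

# openssl's -pbkdf2 flag needs openssl 1.1.1+
- name: archive and encrypt /etc/openvpn on the active node
  shell: >
    tar czf - -C /etc openvpn |
    openssl enc -aes-256-cbc -pbkdf2
    -pass 'pass:{{ hostvars[ansible_play_hosts[0]].transfer_pass }}'
    -out /tmp/openvpn.tar.gz.enc
  when: inventory_hostname == active_host

- name: fetch the encrypted archive to the control machine
  fetch:
    src: /tmp/openvpn.tar.gz.enc
    dest: /tmp/openvpn.tar.gz.enc
    flat: yes
  when: inventory_hostname == active_host

- name: push the archive to each standby
  copy:
    src: /tmp/openvpn.tar.gz.enc
    dest: /tmp/openvpn.tar.gz.enc
  when: inventory_hostname != active_host

- name: decrypt and unpack on each standby
  shell: >
    openssl enc -d -aes-256-cbc -pbkdf2
    -pass 'pass:{{ hostvars[ansible_play_hosts[0]].transfer_pass }}'
    -in /tmp/openvpn.tar.gz.enc | tar xzf - -C /etc
  when: inventory_hostname != active_host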
Any better ideas?
So you’re running a playbook against one host, then want to clone the resulting configuration state to other hosts?
As you already mentioned, that indeed sounds like a lot of work.
Why not simply run the playbook against all the machines?
Dick
Then the servers would each generate a different set of keys and certs, so some way to sync them would still be needed.
Aha. But that means that the playbook isn’t truly idempotent.
I would recommend spending some time changing the role (I assume you’re using https://github.com/Stouts/Stouts.openvpn) to be idempotent.
I had to do similar work on roles that generate key materials on the remote host.
What I ended up doing is generating those materials upfront locally and storing them as (vaulted) vars. The tasks were then changed to first check for the existence of those vars and copy them into place.
It is some work, and running the role then requires work up front, but I preferred it to writing all kinds of parsing and syncing logic, which sounded like rewriting a lot of functionality that's already in Ansible.
With everything stored in vars you also have the advantage of being able to start from scratch. With a cloning/syncing setup you’d still bootstrap the first host with new (i.e. different) key material.
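In a nutshell the pattern looks like this; the variable and path names are just examples, not what the role actually uses:

# group_vars/openvpn/vault.yml, encrypted with ansible-vault:
# vault_openvpn_server_key: |
#   -----BEGIN PRIVATE KEY-----
#   ...

- name: install the pre-generated key when the vaulted var is set
  copy:
    content: "{{ vault_openvpn_server_key }}"
    dest: /etc/openvpn/server.key
    owner: root
    group: root
    mode: "0600"
  when: vault_openvpn_server_key is defined

- name: generate a key on the host only when none was provided
  command: openssl genpkey -algorithm RSA -out /etc/openvpn/server.key
  args:
    creates: /etc/openvpn/server.key
  when: vault_openvpn_server_key is not defined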
Dick
I totally agree with Dick's solution, but for completeness's sake there is also the slurp module [1], which stores the data in memory.
But if the control machine has swap enabled and is swapping, some data could end up on disk.
[1] https://docs.ansible.com/ansible/latest/slurp_module.html
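For example, run against the standby (untested; the host and path names are made up):

- name: read the key from the active node into memory
  slurp:
    src: /etc/openvpn/server.key
  register: openvpn_key
  delegate_to: vpn-active

- name: write it out on the standby; nothing touches the control machine's disk
  copy:
    content: "{{ openvpn_key.content | b64decode }}"
    dest: /etc/openvpn/server.key
    mode: "0600"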
I like this approach better; one less Ansible folder with a vault in it.
And thanks for the paranoid reminder:
sudo swapoff -a
ansible-playbook …
sudo swapon -a
Of course, now I wonder how easy it is for a malicious process to read encrypted swap while it's online.
To be clear, I also agree with Dick's solution, but for a rewritten role or playbook.
But wouldn't that be the case with a vault as well?
Ansible will have to decrypt it at some stage and it does this in memory.
So security-wise it doesn't matter whether you're using vaulted vars or
slurping remote content into memory.
The only relevance of swap here is that slurped content is base64-encoded, so
it's slightly harder to harvest (but still very close to plain text).
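E.g. a registered slurp result (openvpn_key being the registered variable) is one filter away from plain text:

- debug:
    msg: "{{ openvpn_key.content | b64decode }}"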
Dick
Yes, vault will have the same problem, but my comment was not about vault, as
you can see from what I quoted; it was about encrypting on one remote host and
decrypting on another.
If PKI is used, no sensitive data will exist on the control machine. But with
slurp and swap enabled it could, which is why I mentioned it.
Swap is easy to disable, so that's a non-issue.
I was thinking more of a randomly generated symmetric key held in memory.
Till then, I just made a new VM to do this work in and used fetch, as slurp didn't work (see https://groups.google.com/forum/?hl=en#!topic/ansible-project/xb62ClmpAxs). My control machine runs Qubes OS, so the file system is already encrypted and making new VMs is built into the system. But this is not a real solution, as sensitive data now lives on the control machine.
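What I ran from the throwaway VM was basically this (paths illustrative):

- name: pull the key material from the active node into the VM
  fetch:
    src: "{{ item }}"
    dest: fetched/
  with_items:
    - /etc/openvpn/server.key
    - /etc/openvpn/server.crt

# fetch stores the files under fetched/<hostname>/etc/openvpn/,
# from where copy can push them to the standby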
I also thought about making an SSH keypair with a "cat > && delete_this_key" forced command, putting those on their respective hosts for a single use and then deleting them (a rough sketch with the authorized_key module is below). It's not atomic, and in theory something else hiding there would have a time window to take advantage of it. (I realize that in my case I'd already be screwed, but someone else's case could be different.) So we're back to slurp or vaulted data.
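The single-use key sketch, for anyone who wants to pick the idea up; untested, and the user and paths are made up:

- name: install a single-purpose upload key on the standby
  authorized_key:
    user: root
    key: "{{ lookup('file', 'transfer_key.pub') }}"
    key_options: 'command="cat > /etc/openvpn/incoming.tar.gz.enc",no-port-forwarding,no-agent-forwarding,no-pty'

- name: remove it again once the transfer is done
  authorized_key:
    user: root
    key: "{{ lookup('file', 'transfer_key.pub') }}"
    state: absent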