I’m trying to figure out a good way to bootstrap both my EC2 machines and my datacenter hardware machines. The basic bootstrap is:
- create an admin group
- create N users, of which I am one (the last part is relevant since I’m the one running ansible-playbook)
- install updated sudoers files (a rough sketch of the whole play is below).
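Something along these lines, as a sketch only: the user names and the sudoers source file are placeholders, I haven’t run this exact play, and the module argument syntax may differ a bit depending on your Ansible version.

- hosts: all
  sudo: yes
  tasks:
    - name: create the admin group
      group: name=admin state=present

    - name: create the admin users and put them in the admin group
      user: name={{ item }} groups=admin append=yes state=present
      with_items:
        - alice     # placeholder user names
        - bob

    - name: install the updated sudoers file
      copy: src=sudoers dest=/etc/sudoers owner=root group=root mode=0440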
In the Amazon world, my key is installed under ec2-user, which has passwordless sudo access.
On the datacenter machines, I drop a temporary public key into root’s authorized_keys file.
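So today, even just checking connectivity looks different per environment; something like this, with made-up group names:

# EC2: connect as ec2-user and escalate with passwordless sudo
ansible ec2hosts -u ec2-user --sudo -m ping

# datacenter: connect directly as root using the temporary bootstrap key
ansible dchosts -u root -m ping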
I’m having trouble envisioning a workflow where a playbook can dynamically change the user that ansible will ssh into a remote node as, and also flip the sudo parameter. I could have a separate bootstrap playbook that isn’t referenced anywhere else, but I’m trying to reuse tasks and variables as much as possible.
Does anybody have an example pattern to share or should I just get over it and come up with separate bootstrap playbooks?
I was thinking that some combo of ignore_errors and notify might do the trick, but seeing as ssh is a core component of ansible and not so much a task, these might not function at all.
My thought is that in a cloud capacity you would be injecting SSH keys most of the time.
Am I right in gathering that basically you want the same plays to target both cloud and datacenter machines?
Inventory variables like ansible_ssh_user may be exactly what you want in this case (set them in inventory rather than in your playbooks).
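For instance, if your inventory ends up with groups along the lines of “ec2” and “datacenter” (group names here are just placeholders), group_vars files can carry the connection user so the plays themselves stay identical:

# group_vars/ec2
ansible_ssh_user: ec2-user

# group_vars/datacenter
ansible_ssh_user: root

Sudo can then be flipped at the play level or with --sudo on the ansible-playbook command line rather than being baked into each task.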
Is there any way to use the exit status or output of ssh itself to create conditionals?
That is, could I write something that took a user and a host as parameters and returned the exit code of “ssh -l $user $host uptime” …where “something” is a module or playbook or subtask?
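I was picturing something like this, as a very rough and untested sketch (the exact register/conditional syntax presumably depends on the Ansible version):

- hosts: all
  gather_facts: no
  tasks:
    - name: probe whether a direct root login works on this host
      local_action: command ssh -o BatchMode=yes -l root {{ inventory_hostname }} uptime
      register: root_probe
      ignore_errors: yes

    - name: placeholder for the root-based bootstrap path
      debug: msg="root login works here, bootstrap as root"
      when: root_probe.rc == 0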
Yes, that’s exactly it. My intent is to use EC2 to stage all changes and as a platform to build prototypes and prove out the automation that brings those prototypes up. I’m using the ec2.py inventory script in a staging directory and plan to use the cobbler inventory script in a parallel production directory. I’d hoped to put a symlink in both production and staging pointing to a shared site.yml; the only difference between the two environments should be a few variables, which I gather I can switch between using inventory vars.
I’m still trying to grok Ansible, so very little of this is actually done.
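Roughly the directory layout I have in mind (none of it exists yet, and the shared/ name is just illustrative):

staging/
    hosts                  # the ec2.py dynamic inventory script
    group_vars/
    site.yml -> ../shared/site.yml
production/
    hosts                  # the cobbler dynamic inventory script
    group_vars/
    site.yml -> ../shared/site.yml
shared/
    site.yml               # the plays shared by both environments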
You can do things like ansible_ssh_private_key_file as an inventory variable to specify that different hosts or groups of hosts can use different keys.
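For example (the hostnames and key paths here are made up):

[datacenter]
rack1-node01.example.com    ansible_ssh_private_key_file=~/.ssh/bootstrap_key

[ec2_static]
ec2-203-0-113-10.compute-1.amazonaws.com    ansible_ssh_private_key_file=~/.ssh/ec2_key.pem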