Headless Ansible

Hey Charles!

I wasn’t sure where to put the private key on the control node so I put it in /etc/ansible. Was that a bad idea?

Sort of. Keeping your private key outside the home directory of the user who uses it means other non-privileged users might be able to use it as well, or at least read the file. It’s just best practice to limit a private key’s exposure.
That being said, if you want to keep it there, just make sure the file belongs to your user and not root. Also check its permissions: it should be 0600 in most cases, since the ssh client will refuse to use a private key that other users can read.
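
For reference, a quick sketch of locking a key down (demonstrated in a temp directory; on a real control node, substitute the actual path of your key, and the file name here is made up):

```shell
# Demo in a temp dir; on a real control node substitute your key's path
# (e.g. the file you dropped under /etc/ansible).
keydir=$(mktemp -d)
key="$keydir/id_ed25519"            # hypothetical key name
touch "$key"
# If the key currently belongs to root, reclaim it first:
#   sudo chown "$USER": "$key"
chmod 600 "$key"                    # owner read/write only
stat -c '%a' "$key"                 # GNU stat: prints 600
```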

I linked it with ansible_ssh_private_key_file in my hosts file

Note that Ansible relies heavily on your existing SSH configuration, so you could also define all your hosts’ SSH settings in ~/.ssh/config without specifying anything on the Ansible side (be it the hosts file, vars, or environment variables). I find it somewhat easier to manage SSH host configuration this way, since I’m already using it to connect to my servers anyway, though I have set the ANSIBLE_SSH_ARGS environment variable to use a jump box and a few other parameters. I’m also more comfortable with pure SSH config.

See the ssh_config man page (man ssh_config) for more info.
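
For example, a minimal ~/.ssh/config sketch (the host names, user, key file, and jump box below are all hypothetical):

```
Host bastion
    HostName bastion.example.com
    User deploy

Host web-*
    User deploy
    IdentityFile ~/.ssh/id_ed25519_ansible
    ProxyJump bastion
```

With entries like these, Ansible can reach the web-* hosts without any ansible_ssh_private_key_file in your inventory at all.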

so I tried running Ansible with sudo but then it seems all the host validation was lost.

When you run a command with sudo without specifying a target account, it defaults to root. Either way, you are running commands as another user, one who doesn’t have your existing configuration and perhaps can’t access your files. You shouldn’t have to do that. Also see the ownership and permissions warning above.
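
Instead of wrapping ansible in sudo, let Ansible escalate privileges on the remote side. A minimal sketch (hypothetical playbook name and task):

```yaml
# site.yml — run ansible-playbook as your own user;
# privilege escalation happens on the target hosts, not locally.
- hosts: all
  become: true                 # sudo to root on the remote side
  tasks:
    - name: Check connectivity
      ansible.builtin.ping:
```

This way your local ~/.ssh config, agent, and known_hosts all stay in play.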

Is the regular way to do host key checking just using ssh to connect to the server first or is there another better way to do that?

An SSH connection to an unknown node will prompt for host key approval, but you can instead do one of the following:

  • Use ssh-keyscan beforehand to pre-fill your ~/.ssh/known_hosts with host keys; here is an example one-liner that does precisely that for every host listed in your inventory files (under ./inventories):

    for host in $(ansible -i ./inventories/ all --list-hosts | tail -n+2 | sed -e 's/\s\+//g'); do ssh-keygen -F "${host}" >/dev/null || ssh-keyscan -t rsa "${host}" >> ~/.ssh/known_hosts; done
    
  • Manually add host key hashes to ~/.ssh/known_hosts

  • Disable host key checking in the Ansible configuration (only sensible for throwaway environments, since it removes protection against man-in-the-middle attacks)
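
If you go the last route, the setting lives in ansible.cfg:

```ini
# ansible.cfg — lab / throwaway machines only
[defaults]
host_key_checking = False
```

The same thing can be done per-run with the ANSIBLE_HOST_KEY_CHECKING=False environment variable.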

Where should I put my inventory, hosts, key, etc.?

It depends on how you plan to use them. You should obviously version your config files (never your private keys!), but there are few hard rules for how to organize them on the filesystem. A couple of rules do exist, though: for example, the host_vars and group_vars folders have to sit next to either your inventories or your playbook. Also consider where you put your optional ansible.cfg file.
Have a look at the best practices for useful suggestions.
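
For illustration, one common layout loosely following that page (all names are just examples):

```
inventories/
  production/
    hosts          # inventory file for this environment
    group_vars/
    host_vars/
roles/
site.yml           # top-level playbook
ansible.cfg        # optional; picked up from the current directory
```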

Should I bother encrypting the ssh private key?

It’s up to you. I’d say that if you plan to leave your key on a server’s filesystem, you should. If you instead choose to store it in a secrets/credentials store (which usually encrypts files and strings), then it’s less necessary. Of course, it also depends on how you access your key. If unsure, do it! (Though I agree using an ssh agent can be cumbersome in some cases.)
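
Adding (or changing) a passphrase on an existing key is a one-liner with ssh-keygen -p. The sketch below generates a throwaway key in a temp dir purely to demonstrate (the paths and passphrase are made up):

```shell
d=$(mktemp -d)
# Generate a demo key already protected by a passphrase:
ssh-keygen -q -t ed25519 -N 'demo passphrase' -f "$d/key"
# On a real, existing key you would instead run:
#   ssh-keygen -p -f ~/.ssh/your_key
# Show the passphrase works by re-deriving the public key from it:
ssh-keygen -y -f "$d/key" -P 'demo passphrase' > "$d/derived.pub"
```
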
Here is how I usually manage encrypted keys in Gitlab-CI pipelines:

    - mkdir -p -m 700 ~/.ssh
    - eval $(ssh-agent)
    - chmod 400 "${SSH_PRIVATE_KEY_GIT}"
    - echo "echo ${SSH_PASSPHRASE_GIT@Q}" > ~/.ssh/.ssh_askpass_git.sh
    - chmod ug+x ~/.ssh/.ssh_askpass_git.sh
    - SSH_ASKPASS_REQUIRE=force SSH_ASKPASS=~/.ssh/.ssh_askpass_git.sh ssh-add "${SSH_PRIVATE_KEY_GIT}"

SSH_PRIVATE_KEY_GIT and SSH_PASSPHRASE_GIT are CI variables defined on my projects.

Which is somewhat needlessly complex, I’ll give you that :confused:
