How to manage ssh known hosts

When I ssh into a server for the first time, ssh always asks me if the ECDSA key fingerprint is correct. When I say yes, it adds that host’s key to my ~/.ssh/known_hosts file.

I’d like to use ansible to create a known_hosts file that accurately represents all my servers. Then I can set that in /etc/ssh/ssh_known_hosts, and safely tell my servers to ignore each user’s individual known_hosts file. That will let me run ssh operations between my servers without having to accept an ECDSA fingerprint every time I log into a new server. And I won’t have to turn off StrictHostKeyChecking.
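If I’m reading the client options right, that would boil down to something like this in /etc/ssh/ssh_config on each server (untested on my part; /etc/ssh/ssh_known_hosts is apparently already the default global location):

    # /etc/ssh/ssh_config on every server (the client side of the connections)
    Host *
        # the file I'd have ansible manage (this is already the default location)
        GlobalKnownHostsFile /etc/ssh/ssh_known_hosts
        # ignore per-user known_hosts files entirely
        UserKnownHostsFile /dev/null
        # and keep refusing unknown or changed keys
        StrictHostKeyChecking yes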

I could delete my current known_hosts file, then ssh into each server one at a time to build an accurate known_hosts file. But that’s rather time consuming, and keeping it accurate would be painful.

So, how can I do that with ansible?

Currently I’m researching how fingerprinting actually works in an effort to figure everything out on my own. Any answers to these questions would be greatly appreciated:

How does ssh generate the fingerprint?

Why is the fingerprint shown to the user logging in of the form xx:xx:xx:…:xx, while the line in the known_hosts file looks more like a full public key?

Why does ssh ask to confirm the fingerprint again when you use a hostname instead of an IP address, after accepting the fingerprint for the IP address? Both fingerprints are the same. For example:

host ⮀ ssh 192.168.88.4
The authenticity of host '192.168.88.4 (192.168.88.4)' can't be established.
ECDSA key fingerprint is 68:06:f9:4e:7a:c5:cf:1d:70:a2:6a:6f:12:eb:d4:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.88.4' (ECDSA) to the list of known hosts.
Connection closed by 192.168.88.4
host ⮀ ssh vm.beta.lab
The authenticity of host 'vm.beta.lab (192.168.88.4)' can't be established.
ECDSA key fingerprint is 68:06:f9:4e:7a:c5:cf:1d:70:a2:6a:6f:12:eb:d4:55.
Are you sure you want to continue connecting (yes/no)?
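Partially answering my own second question after some poking around: the base64 blob in known_hosts is the actual public key, and the xx:xx:…:xx string looks like it’s just an MD5 hash of that blob (same hex digits, minus the colons), at least with the fingerprint format my OpenSSH prints:

    # print the fingerprint the "official" way
    ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub
    # ...and reproduce it by hand: MD5 of the decoded key blob
    awk '{print $2}' /etc/ssh/ssh_host_ecdsa_key.pub | base64 -d | md5sum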

If you trust your machines in their current state, there's
ssh-keyscan. No ansible needed.

ssh-keygen -lf /etc/ssh/ssh_host_X_key.pub will print the key
fingerprint on the local machine.

Run an ansible job for that command, write it to a file, pull that
file back to you, concatenate. (Although I'm sure there's a more
elegant way to do it.)
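Roughly something like this, untested, assuming ECDSA host keys in the usual location and an inventory group called "servers" (the hostkeys/ directory and output file name are just examples):

    # grab each host's public key file; fetch drops them under hostkeys/<hostname>/...
    ansible servers -m fetch -a 'src=/etc/ssh/ssh_host_ecdsa_key.pub dest=hostkeys/'

    # then locally: prefix each key with its hostname and concatenate into one file
    for d in hostkeys/*/; do
        host=$(basename "$d")
        printf '%s %s\n' "$host" "$(cut -d' ' -f1,2 "${d}etc/ssh/ssh_host_ecdsa_key.pub")"
    done > ssh_known_hosts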

==ml

When I ran ssh-keyscan on my little vagrant cluster, it returned public keys that don’t look the same as what’s in my known_hosts file.

ssh-keyscan vm.master.lab vm.alpha.lab vm.beta.lab

Can I just copy that output into a known_hosts file and have it work?

Guess I just need to test it. :slight_smile:
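Something like this should settle it quickly, I think (pointing ssh at a throwaway known_hosts file so my real one stays out of the picture):

    # scan the keys into a throwaway file and force ssh to use only that file
    ssh-keyscan -t ecdsa vm.master.lab vm.alpha.lab vm.beta.lab > /tmp/test_known_hosts
    ssh -o UserKnownHostsFile=/tmp/test_known_hosts -o StrictHostKeyChecking=yes vm.beta.lab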

–David Reagan

It should work. If it doesn't, it's an OpenSSH question... they have
their own mailing list. :wink:

Mind you, I would find a playbook that logged into a host, computed
the key fingerprint locally, and generated a known_hosts file from the
results, very nice. If anyone's looking for a project, this would be a
good one...

==ml

> Mind you, I would find a playbook that logged into a host, computed
> the key fingerprint locally, and generated a known_hosts file from the
> results, very nice. If anyone's looking for a project, this would be a
> good one...

You might be interested in [1]. A bit off-topic, because it's about
SSHFP in DNS and collecting those fingerprints (and avoiding the whole
known_hosts mess :-), but it may get you started on the right track :wink:
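For the curious, the moving parts are roughly these (whether your OpenSSH emits SSHFP records for ECDSA keys depends on the version; RSA works the same way):

    # on the server (or wherever the public key lives): print the SSHFP records to publish in DNS
    ssh-keygen -r vm.beta.lab -f /etc/ssh/ssh_host_ecdsa_key.pub

    # on the client: let ssh consult those DNS records when deciding whether to trust the key
    ssh -o VerifyHostKeyDNS=yes vm.beta.lab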

        -JP

[1]: http://jpmens.net/2012/11/03/an-action-plugin-for-ansible-to-handle-ssh-host-keys/

@Jan-Piet Mens That looks like a potential way to get the fingerprint via Ansible. I’ll have to look into it a bit more than the brief skim I just did. Thanks!

Currently, I made a text file that lists all my nodes and their aliases. Then I used ssh-keyscan to grab all the host keys. I’m probably going to add updating those files to my workflow whenever I add or delete a node. Kind of a hassle for it to be mandatory, but unless the article Jan-Piet linked to points out a better, automated way, I think it’ll just have to do.
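For reference, regenerating and pushing the file currently looks roughly like this (nodes.txt is just my own file name, one host or "addrlist namelist" pair per line, and "servers" is just my inventory group):

    # rebuild the global file from the node list
    ssh-keyscan -t ecdsa -f nodes.txt > ssh_known_hosts
    # push it out to every server (add whatever sudo/become flag you need)
    ansible servers -m copy -a 'src=ssh_known_hosts dest=/etc/ssh/ssh_known_hosts'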

Thanks everyone for your help!

Another pretty way of doing it is using SSH certificates. This is a not very well known feature of OpenSSH that lets you have all your ssh host keys signed by a certificate authority. Once that’s done, clients only need the CA’s public key to authenticate all your present and future servers, without maintaining any kind of database:
http://justanothergeek.chdir.org/2011/07/howto-authenticate-ssh-server-through/
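Roughly, from memory, so check the man pages; the names, domain and paths below are just examples:

    # 1. create a CA key pair (keep the private half somewhere safe, not on the servers)
    ssh-keygen -f host_ca

    # 2. sign each server's host key with it (-h marks it as a *host* certificate)
    ssh-keygen -s host_ca -h -I vm.beta.lab -n vm.beta.lab,192.168.88.4 \
        /etc/ssh/ssh_host_ecdsa_key.pub
    # this produces /etc/ssh/ssh_host_ecdsa_key-cert.pub; point sshd at it with
    # "HostCertificate /etc/ssh/ssh_host_ecdsa_key-cert.pub" in sshd_config

    # 3. on every client, a single known_hosts line trusts the CA for the whole domain:
    # @cert-authority *.lab <contents of host_ca.pub>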

Is there some config in the playbook that I can add to skip the key fingerprint check entirely so that it doesn’t add to my known_hosts file? I’m using ansible with vagrant and don’t want the transient vm keys added to my known_hosts. I tried export ANSIBLE_HOST_KEY_CHECKING=False but it is still asking me on first connect.

I don’t know why it would be asking you but it might be a function of where you decided to do the export (i.e. before running vagrant, etc).

Works fine for me without vagrant anyway.
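For what it’s worth, the export has to happen in the same shell that ends up running ansible, e.g. (host_key_checking = False under [defaults] in an ansible.cfg next to the playbook is the other place to set it):

    # same shell, before the provisioner runs
    export ANSIBLE_HOST_KEY_CHECKING=False
    vagrant provision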

I was setting ansible.inventory_path in my Vagrantfile. I removed that and set the playbook to use hosts = [hostname of vm box], and now it’s not asking me anymore. Thanks for the quick reply!