I’ve run a playbook that creates an EC2 instance. The AMI it uses runs CentOS and installs an SSH key so that I can ssh in as the user “centos” without a password. That works fine. But now I’ve written a subsequent playbook that configures the new server, and one of the things it does is create two new users:
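(Roughly like this; a sketch rather than the exact tasks, so the usernames and flags below are placeholders.)

```yaml
- name: Create the new users
  user:
    name: "{{ item }}"
    group: developer
    home: /home/centos     # re-use the existing centos home directory
    create_home: no        # the directory already exists
    shell: /bin/bash
  loop:
    - alice
    - bob
```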
Notice that it’s creating the new users with /home/centos as the home directory. I thought that would just set it as the home directory for the new users, and that I would then be able to ssh in as either of them using the same keys. However, after running this I am unable to log in as anything: not the new users, and not “centos” either.
I can kill the instance and create it anew; that’s not a problem. But does anyone have a guess as to what happened, and why I can’t log in now?
The public keys are already there, since the new users re-use the /home/centos directory. But SSH isn’t using those keys when I log in as a new user, even though they are (theoretically) sitting in the new user’s home directory.
ssh enforces that the owner is the only one with access by checking the mode; it doesn’t care about the owner ID or group ID. But the filesystem does care: it won’t let user1 use user2’s files/directories if they’re mode 0400/0600/0700.
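For example, with StrictModes on (the default), sshd wants the key material to look roughly like this before it will use it; a sketch assuming the shared /home/centos layout:

```yaml
- name: .ssh should be owned by the login user and not group/world writable
  file:
    path: /home/centos/.ssh
    state: directory
    owner: centos
    group: centos
    mode: "0700"

- name: authorized_keys should be private to its owner as well
  file:
    path: /home/centos/.ssh/authorized_keys
    owner: centos
    group: centos
    mode: "0600"
```

In the shared-home setup, the new users also need the filesystem to let them read that file, which is where the two checks pull in opposite directions.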
Okay, I get that. I actually hadn’t thought about that aspect of it.
But there are still a couple of things that don’t make sense to me. One is that the authorized_keys file is still read-writable by the “developer” group, which the existing user and the new users are all in. And that file is the only thing that really matters, right? I have the private key here on my laptop; SSH verifies the public key entry against my private key, so it should be good. Also, why would the existing user, “centos”, who could log in before, no longer be able to?
Ooooh… wait… after my ‘user’ play I ran a play that recursively changed permissions on /home/centos to 2775. I bet that caused this. facepalm
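For anyone hitting the same thing, the offending play was roughly this (reconstructed, not the exact task); the recursive mode change is what rewrites .ssh and authorized_keys out from under sshd:

```yaml
- name: Loosen up the shared home directory
  file:
    path: /home/centos
    mode: "02775"
    recurse: yes   # also changes .ssh and authorized_keys,
                   # which makes sshd refuse the keys
```

Re-applying the stricter modes on .ssh and authorized_keys afterwards (as in the sketch above) should get logins working again.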