We have more than one “test environment” (copies of our production environment) installed on one giant machine. Simply put, the program is installed multiple times, under different user accounts in different directories.

The thing is, we are trying to manage these different environments using Ansible.

For instance, stopping and starting these giant environments takes about 10 minutes each, so going over each environment one by one takes hours. We therefore thought of making Ansible believe each environment is actually a separate host, while simply pointing to the same “actual” server, like so:
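Something along these lines (a minimal sketch; the group name, aliases, hostname, and user accounts are made up for illustration):

    [testenvs]
    env1 ansible_ssh_host=bigserver.example.com ansible_ssh_user=envuser1
    env2 ansible_ssh_host=bigserver.example.com ansible_ssh_user=envuser2
    env3 ansible_ssh_host=bigserver.example.com ansible_ssh_user=envuser3

With an inventory like this, Ansible treats env1..env3 as separate hosts and runs against them in parallel, even though every connection lands on the same physical machine.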
This may be related to the MaxAuthTries setting in your sshd_config file (coincidentally, I just learned about that setting myself from Jesse Keating's AnsibleFest presentation). Try upping it from the default (6 on my CentOS system) and see if that helps.
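For reference, the change would look something like this (the value 20 is arbitrary, and the restart command assumes a CentOS-style init):

    # /etc/ssh/sshd_config
    MaxAuthTries 20

    # apply the change
    service sshd restart

The idea is that an SSH agent offering several keys can exceed the per-connection limit on authentication attempts, which bites harder when many plays open connections to the same box at once.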
Hi James,

Unfortunately, that did not work… ;-( We have a support subscription; could I use a support ticket for this, but keep the answers here for the world to read?
Mark Maas, Binckbank
Well, when I add my public key to the root user of the server and change my playbook a little, like so:
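Roughly like this (a minimal sketch in the pre-2.0 syntax used elsewhere in this thread; the group name and test task are made up):

    ---
    - hosts: testenvs
      user: root        # connect directly as root instead of our own accounts
      sudo: false       # no privilege escalation needed when already root
      tasks:
        - name: verify the connection works
          command: whoami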
It then works, leading me to think this is a sudo issue we need to track down?
Normally we connect to all servers under our own names and keys, and use sudo: true everywhere.

We think/assume that is the most logical and secure way of handling this?
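For comparison, the normal pattern described above would be (a sketch; the user name and command are placeholders):

    ---
    - hosts: testenvs
      user: mmaas       # hypothetical personal account
      sudo: true        # escalate to root via sudo for privileged tasks
      tasks:
        - name: restart the environment
          command: /path/to/stop_start.sh   # hypothetical control script

If connecting as root works while this pattern fails, that points at the sudo configuration rather than SSH.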