No handlers could be found for logger "paramiko.transport"
No handlers could be found for logger "paramiko.transport"
fatal: [betapi00.sx2] => FAILED: Incompatible ssh peer (no acceptable kex algorithm)
No handlers could be found for logger "paramiko.transport"
fatal: [clientapi00.sx2] => FAILED: Incompatible ssh peer (no acceptable kex algorithm)
fatal: [clientapi01.sx2] => FAILED: Incompatible ssh peer (no acceptable kex algorithm)
No handlers could be found for logger "paramiko.transport"
No handlers could be found for logger "paramiko.transport"
fatal: [eti200.sx2] => FAILED: Incompatible ssh peer (no acceptable kex algorithm)
fatal: [eti2mock00.sx2] => FAILED: Incompatible ssh peer (no acceptable kex algorithm)
This sounds like a bug in the YAML inventory parser (inventory_parse_yaml.py), where it fails to set ansible_ssh_port as a variable.
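To illustrate the suspected bug, here is a hypothetical sketch (not the actual parser code; `split_host_port` and the entry string are made up) of what a host entry carrying a port should produce - the port split off and recorded as ansible_ssh_port:

```python
# Hypothetical sketch of the intended behaviour: a "host:port" inventory
# entry should yield the bare hostname plus an ansible_ssh_port variable.
def split_host_port(entry, default_port=22):
    host, sep, port = entry.partition(":")
    return (host, int(port)) if sep else (entry, default_port)

host, port = split_host_port("betapi00.sx2:5555")
host_vars = {"ansible_ssh_port": port}  # the variable the parser fails to set
```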
As I indicated previously, I *don't* have bandwidth to bug fix master releases, but will gladly do a point release if I get a patch for this. There's already another fix on the master branch.
I will open a ticket to make sure this is not a problem in 0.5
It's becoming clear to me that the way ansible communicates with the managed hosts is far more fragile than I first believed, and with the python-simplejson dependency on the managed hosts I think I'll have to look elsewhere for my needs.
I realize that might have sounded snarky. The thing is, I have major problems getting ansible to connect to my RHEL5u4 machines with python2.4 and I'm beginning to think I'm spending my time on the wrong thing. Ansible sure has many nice features, but it's not living up to its potential as a dependency-light tool. I think it will be great come 2.0 though.
If you are having problems with the json aspect, make sure that
python-json isn't installed. Just install python-simplejson. That
caused me some grief on my RHEL5 nodes.
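For what it's worth, the usual way code copes with python2.4 hosts (where the stdlib json module does not exist yet) is the simplejson fallback import - a minimal sketch:

```python
# Fallback pattern for python2.4 nodes: the stdlib json module only
# exists from python2.6 on, so fall back to the simplejson package.
try:
    import json
except ImportError:
    import simplejson as json

data = json.loads('{"ping": "pong"}')
```

On python2.4 with python-json (rather than python-simplejson) installed, the fallback import picks up the wrong module, which matches the grief described above.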
An update on this (not asking for more work, just providing additional
context):
It turns out that I was misreading the log. I had a different host at
the beginning of the group for which I had mistyped the IP address.
However, there is a different bug now which does sound more like the
YAML inventory parser issue you're talking about - having multiple
entries for one host (e.g. 111.222.333.444:55, 111.222.333.444:56,
111.222.333.444:57) only runs one of these entries. I have only started
digging through the code (the time I have to do so is not much, but
better than doing nothing right?) so I can't tell for sure where the
error lies, but it appears to be touching only the first entry, after
which it assumes it is done (I've come to this conclusion after running
"ansible -D -o mygroup -m setup" with a number of different group
configurations). I suspect that when it hits the second host entry it
assumes that it already finished because the IP of the host entry is the
same. If that is the case (and I'll spend some time with the code
trying to confirm or deny it) then I would propose that hosts could be
identified uniquely not by their IP address, but by their setup
variables (or perhaps by default by their IP unless specified otherwise
via a variable).
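The overwrite behaviour described above can be demonstrated in a few lines of plain Python (a sketch of the suspected mechanism, not Ansible's actual code):

```python
# Keying the host map by hostname alone: later host:port entries
# overwrite earlier ones, so only one entry survives.
entries = [("111.222.333.444", 55),
           ("111.222.333.444", 56),
           ("111.222.333.444", 57)]

by_host = {}
for host, port in entries:
    by_host[host] = port          # silently replaces the previous entry
assert len(by_host) == 1          # two of the three entries are lost

# Keying by (hostname, port) keeps all three entries distinct.
by_host_port = dict(((h, p), None) for h, p in entries)
assert len(by_host_port) == 3
```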
PS: This is still using version 0.4 from last week - Haven't done a git
pull for the new code yet.
Alright, so the issue is most definitely with the way host lists are
stored - the maps are indexed by hostname alone, whereas hostname+port
should be used at a minimum. There are multiple places where this would
need to be addressed so I won't be able to post a fix for a while (me
being more familiar with Ruby and Bash than with Python doesn't help,
either).
In the meantime, I do have a workaround - Multiple DNS entries. I
ended up creating multiple hostnames for the host with multiple ports,
and I've thus been able to get Ansible to handle the scenario.
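For concreteness, the workaround looks roughly like this (the hostnames are made up; the idea is one DNS name per name:port pair pointing at the same proxied address):

```
# /etc/hosts -- several made-up names for the one proxied address
111.222.333.444  vmhost-a vmhost-b vmhost-c

# inventory -- one entry per name:port pair
vmhost-a:55
vmhost-b:56
vmhost-c:57
```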
I don't think this is the right fix, because you are left with the confusing situation of remembering which host
is which.
The better fix is to have aliases for each host: if a variable like ansible_host (or something similar) is defined,
use its value when connecting to that host, and let the inventory name serve as the alias.
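As a sketch of that idea (the variable name follows the "ansible_host or something similar" suggestion above, and the hosts are hypothetical), an inventory could read:

```
# made-up inventory entries: the first token is the alias, the
# variables say where to actually connect
tunnel-a  ansible_host=111.222.333.444  ansible_ssh_port=55
tunnel-b  ansible_host=111.222.333.444  ansible_ssh_port=56
```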
I would be happy to have the option of user-defined aliases. However,
for the sake of addressing the current issue they could lead to
unexpected internal behaviour - Right now the use of hostnames to
create the list of hosts has the (possibly unintended) side effect of
de-duplicating host entries. Using an external alias internally would
mean that more code would need to be added to handle de-duplication of
identical hosts. The proposed internal use of a host+port string as
the key for the hosts maps is in effect the equivalent of an alias,
and it is the minimum amount of data that is always available in the
Ansible code (port 22 is made implicit, but it could be made explicit
instead). The result would be that less special casing is necessary,
since aliases are not always available.
Another issue may be that from a user perspective host aliases would
seem to provide little benefit - Users can already define per-host
groups and include groups in other groups. The only additional
functionality I can think of is that aliases would affect output.
Perhaps a middle ground would be to implement aliases, and to default
them to host:port if an alias is not defined for a host. This would
allow the alias to be also used for forcing duplication of tasks per
host (which may be of limited use in regular scenarios but could be
useful in certain situations).
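That middle ground is tiny in sketch form (`host_key` and the example names are illustrative, not Ansible code):

```python
def host_key(hostname, port=22, alias=None):
    # Default the key to "host:port" when no alias is defined;
    # port 22 is made explicit rather than implicit.
    if alias is not None:
        return alias
    return "%s:%s" % (hostname, port)
```

Indexing the host maps by this key would de-duplicate identical host:port entries by default, while a user-defined alias would force separate entries (or a nicer display name) when wanted.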
In the meantime, I will continue using DNS entries (come to think of
it, this simply externalizes the aliases). Once I have a little more
time I'll look into writing a proof of concept alias feature.
Regards....
PS: In the case I was testing I did use tunnels specifically, but the
issue would occur with any type of proxy host, such as virtualization
hosts where the VMs are NATed.
> I would be happy to have the option of user-defined aliases. However,
> for the sake of addressing the current issue they could lead to
> unexpected internal behaviour - Right now the use of hostnames to
> create the list of hosts has the (possibly unintended) side effect of
> de-duplicating host entries. Using an external alias internally would
> mean that more code would need to be added to handle de-duplication of
> identical hosts.
I have trouble telling what this is talking about, unfortunately, because it's talking about code (which is somewhat like dancing about architecture) and not use cases, I think. My previous fix is crazy simple -- most people would be doing this for tunneling, so aliases are all well and good and should be fine. That is what I'm going to do.