Hi guys,
I’m hoping I’ve just overlooked an option, but here’s the situation:
After updating from Ansible 1.6.2 to 1.8.1, I noticed our Ansible runs take much longer to actually make contact with the servers. Here is a simple comparison:
https://gist.github.com/brentley/7c644614e5dc3aae045d
The first run is on 1.8.1, the second is the same system on 1.6.2.
This is using the ec2.py dynamic inventory, and both runs had precached inventory.
When I straced the process, I noticed the 1.8.1 run was doing this:
lstat("/etc/ansible/host_vars/10.10.7.20", 0x7fff243bbe60) = -1 ENOENT (No such file or directory)
lstat("/etc/ansible/host_vars/10.10.7.20.yml", 0x7fff243bbe60) = -1 ENOENT (No such file or directory)
lstat("/etc/ansible/host_vars/10.10.7.20.yaml", 0x7fff243bbe60) = -1 ENOENT (No such file or directory)
lstat("/etc/ansible/host_vars/10.10.7.20.json", 0x7fff243bbe60) = -1 ENOENT (No such file or directory)
but it was iterating over our entire inventory this way. There was an additional block of lstat calls for /etc/ansible/group_vars/<inventory_name>.* as well.
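For illustration, here is a minimal sketch of the lookup pattern that would produce those lstat calls. This is an approximation, not Ansible's actual code: for each host it probes host_vars/<host> plus each supported extension, so a dynamic inventory of N hosts triggers roughly 4*N stat calls even when no var files exist. The extension order is taken from the strace output above.

```python
import os

# Assumption: extension probe order as seen in the strace output.
VAR_EXTENSIONS = ["", ".yml", ".yaml", ".json"]

def candidate_var_files(base_dir, host):
    """Return the paths probed for one host's variable files."""
    return [os.path.join(base_dir, host + ext) for ext in VAR_EXTENSIONS]

def load_host_vars(base_dir, host):
    """Stat each candidate path (one lstat per candidate, hit or miss)
    and return the ones that actually exist."""
    return [p for p in candidate_var_files(base_dir, host)
            if os.path.exists(p)]

for path in candidate_var_files("/etc/ansible/host_vars", "10.10.7.20"):
    print(path)
```

With a large inventory, those mostly-failing stat calls add up, which is one plausible source of the startup delay being discussed.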
Is there a configuration option or other way to disable this behavior?
Inventory gets processed a bit more at startup now – that's good for various reasons, since it would be done later anyway and that data can be needed – but could you provide some detail on what "much longer" means? For example, before-and-after timings and the number of hosts in your inventory?
Check the gist. I left an example of the timing difference.
Brent,
in your gist the real time seems to have actually decreased from 46 to 44 seconds from 1.6 to 1.8. I see values like this for ping as well when connecting via DSL (10 Mbit) and VPN instead of standard Ethernet in the office: pinging 50 hosts may take 30 seconds via DSL versus 2 seconds via Ethernet.
Oops,
I really should learn to read digits: so it's actually going up from less than 1 second to 44 seconds. Do you run both from the same machine?
Yeah please see my other questions too.
Since they probably detail your infrastructure with more information than you might care to share here, feel free to share off list.
(We’re probably going to want to see if we can get a copy of your inventory files).
Thanks!
Following up on this, it looks like that server had a pretty old version of ec2.py as the inventory script. When I replaced it with the currently linked one from GitHub, it behaved much better:
time ansible tag_environment_development -m ping --list-hosts
10.10.5.157
10.10.4.221
10.10.7.202
10.10.5.10
10.10.4.92
10.10.4.170
10.10.4.153
10.10.6.32
10.10.6.226
10.10.5.91
10.10.5.149
10.10.5.29
10.10.16.137
10.10.7.174
10.10.4.145
10.10.5.118
10.10.4.45
10.10.19.217
10.10.7.36
10.10.0.138
10.10.6.250
10.10.4.190
real 0m0.802s
user 0m0.662s
sys 0m0.141s
I still have the old copy of ec2.py if you are interested in digging in further, but this is probably a non-issue now.
It may be that the old ec2.py didn't do the "_meta" trick (see the developer docs on inventory plugins).
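For anyone else hitting this: the "_meta" trick means the script's --list output includes a top-level "_meta" key containing "hostvars" for every host, so Ansible reads all host variables in one pass instead of invoking the script with --host once per host. Here is a hedged sketch of what that output shape looks like; the group name and IPs are reused from the run above for illustration, and the ec2_instance_type variable is just a made-up example, not necessarily what your ec2.py emits.

```python
import json

def list_inventory():
    """Return --list output with the _meta/hostvars block so Ansible
    skips the per-host --host calls (the overhead an old ec2.py
    without _meta would incur)."""
    return {
        "tag_environment_development": {
            "hosts": ["10.10.5.157", "10.10.4.221"],
        },
        "_meta": {
            "hostvars": {
                # Example variable only; real ec2.py emits many ec2_* vars.
                "10.10.5.157": {"ec2_instance_type": "m3.medium"},
                "10.10.4.221": {"ec2_instance_type": "m3.medium"},
            },
        },
    }

print(json.dumps(list_inventory(), indent=2))
```

If the old script lacked _meta, Ansible had to shell out once per host at startup, which would line up with the slowdown described earlier in the thread.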
Thanks!