Ansible performing poorly on large-scale setups

Hi,

We are using Ansible playbooks to build traffic tools and have around 500 hosts.

For example:
We build a stream file to map the interfaces of all the hosts and then trigger traffic on the respective participating hosts. This stream file is around 100 MB.

So just this line in one of the playbooks:

vars_files:
  - "{{ playbook_dir }}/{{ streams_file_name }}"

is taking almost 1-2 hours to run.

I don't think Ansible was ever designed to perform at scale.
Chef or Puppet are better at that because they are agent-based.
If you have already tried Mitogen and SSH pipelining, and even increased the forks, I suggest you take a look at the new paradigm of execution environments that comes with Ansible Automation Platform 2.
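
For reference, the pipelining/forks tuning mentioned above usually lives in ansible.cfg. A minimal sketch, with illustrative values only (not tuned for your environment):

[defaults]
# Run more hosts in parallel (the default is 5).
forks = 50
# Cache facts between runs instead of re-gathering them in every play.
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts

[ssh_connection]
# Pipelining cuts the number of SSH round-trips per task.
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s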
Regards.

I believe there are other vars plugins you could use; they may work better than loading everything through vars_files. Or write your own:
https://docs.ansible.com/ansible/latest/plugins/vars.html

Alternatively, instead of using vars plugins, you may be able to use lookup plugins to fetch data on demand, backed by a faster store that supports random access, instead of loading it all into memory at the beginning; something like the sketch below.
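
For instance, here is a hypothetical sketch that assumes the stream data could be exported to a CSV file (streams.csv, keyed by inventory hostname, value in column 1) rather than one 100 MB YAML vars file; each host then reads only its own row via the built-in csvfile lookup. The group name, file name, and variable names are made up for illustration:

- hosts: traffic_hosts
  gather_facts: false
  tasks:
    # Fetch only this host's stream entry instead of parsing the whole file.
    - name: Look up this host's stream entry on demand
      ansible.builtin.set_fact:
        my_stream: "{{ lookup('ansible.builtin.csvfile',
                      inventory_hostname ~ ' file=' ~ playbook_dir ~ '/streams.csv delimiter=, col=1') }}"

    # Placeholder for whatever actually triggers the traffic run.
    - name: Trigger traffic using just that entry
      ansible.builtin.debug:
        msg: "Stream for {{ inventory_hostname }}: {{ my_stream }}"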

- Sandip