I have a requirement to run many playbooks in parallel on localhost.
When I run 100 playbooks in parallel (a simple playbook that executes the shell command date), I see that
they consume a lot of resources (memory ~7 GiB and CPU ~200%, i.e. roughly 70 MiB per playbook run).
Is this expected?
Here is what I am trying to run:
--------------------main.yml-----------------------------------------
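# (Hypothetical reconstruction: the original playbook body was not preserved
#  in this thread. Based on the description above, it was a single shell task
#  running date against localhost, so it likely looked something like this.)
- hosts: localhost
  tasks:
    - name: Run the date command
      shell: date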
When I fire off 100 copies of date, I hardly see any change in memory usage.
I even fired off 100 copies of the following script, but I do not see any change in memory usage even when 100 such processes are running in parallel.
That script does not run 100 copies of date. It runs 100 copies of an empty loop, which may well be optimised out of existence.
I’m not saying that Ansible is not the problem, but you do need to start with a fair comparison. Also, your playbook is running 100 times on one system; you may find it is hardly a problem if you run those 100 scripts on 100 remote systems.
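A fair side-by-side test might look something like this (a sketch; it assumes the playbook above is saved as main.yml):

# 100 bare date processes...
for i in $(seq 100); do date & done; wait
# ...versus 100 full ansible-playbook runs; watch memory with top or free -m while each runs:
for i in $(seq 100); do ansible-playbook main.yml & done; wait

The first loop finishes almost instantly and barely registers; each run in the second loop starts a complete Python interpreter that loads Ansible, which is where the difference comes from.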
A few things:
(1) As mentioned in my previous reply, I ran the date command 100 times and saw no change in memory usage. In addition, I also tried an empty loop to keep the process running.
(2) My requirement is to run the playbook on localhost in response to a request to execute a job (there could be multiple such requests).
The requirement is to be able to launch a playbook on receiving a request (i.e. on demand). So, if 100 requests are received, 100 playbooks will be executed in parallel.
You mentioned that Ansible uses about 50 MB of memory per playbook run. Is there something that can be done to optimize that?
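One per-run knob that can trim this a little (a sketch; it does not change the fixed cost of loading Ansible itself) is turning off fact gathering, which skips the setup module run at the start of each play:

- hosts: localhost
  gather_facts: false   # skip fact gathering; a bare date task does not need facts
  tasks:
    - name: Run the date command
      shell: date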
Got it. Thanks for your time.
My requirement is to be able to sequence tasks together (to create a workflow), and after some prototyping, I found Ansible playbooks to meet that requirement.
But scalability is turning out to be an issue. If we could daemonize the Ansible process so that it receives requests and executes each playbook in a thread, that would scale.
But that is not how Ansible is supposed to work.
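To make the gap concrete, even a daemonized dispatcher along these lines (a shell sketch; the FIFO path and request format are hypothetical) still forks a complete ansible-playbook process per request, so the per-run memory cost stays the same:

# Hypothetical on-demand dispatcher: block on a FIFO, launch a playbook per request.
mkfifo /tmp/ansible-requests
while true; do
    read request < /tmp/ansible-requests
    ansible-playbook main.yml &    # one full Ansible process per request
done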