High CPU when running playbook to disable virtual servers

I recently ran an Ansible playbook to disable several virtual servers on the F5 LTM. While the playbook was running, CPU on the F5 climbed to about 97% and it made the GUI almost unusable. Any suggestions on how to make this run more efficiently? Also, if I have a list of 25 VIPs, is the Ansible host logging into the F5 25 times to make the changes, or is it logging in once and then running through all the virtual servers?

It's hard to tell without knowing the resources of the Ansible controller, the number of changes/hosts, and the forks setting, but networking tasks are more CPU-intensive because they run on localhost and send API calls.
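To the second part of the question: with the F5 modules, each task (and each loop iteration) is a separate module invocation that opens its own connection to the device, so a list of 25 VIPs generally means 25 separate logins. A hedged sketch of what such a playbook might look like (the module name, provider keys, hostname, and `vip_list` variable are illustrative placeholders, not taken from this thread):

```yaml
# Sketch: disabling a list of VIPs with the F5 modules.
# Each loop iteration is a separate module run, i.e. a separate login.
- hosts: localhost
  connection: local
  vars:
    vip_list:
      - vip-app-01
      - vip-app-02
  tasks:
    - name: Disable virtual servers
      bigip_virtual_server:
        name: "{{ item }}"
        state: disabled
        provider:
          server: lb.example.com
          user: admin
          password: "{{ f5_password }}"
          validate_certs: no
      delegate_to: localhost
      loop: "{{ vip_list }}"
```

If concurrent connections are the concern, lowering `forks` in ansible.cfg (or using `serial`/`throttle` on the play or task) limits how many of these run at once, which can ease load on the device.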

Jonathan Lozada De La Matta on mobile

I'm not really following your comment. The high CPU is on the F5 load balancer, not the Ansible controller. Are you thinking that the Ansible controller is overrunning the F5 load balancer?

Hey Sharon,

I'm one of the original authors of these modules, so let's see if I can
offer any guidance.

It looks like you're using the SSH provider to connect to the F5, so I
would be curious to know what process is running on the box that is
consuming your CPU. Running a `top` command for a brief moment should
be informative. If it's the restjavad service, that would be curious
because the cli provider does not use the rest daemon to do its work.
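For example, from the BIG-IP bash prompt, a one-shot batch snapshot avoids keeping an interactive `top` session open while the playbook runs (these are plain Linux commands, nothing F5-specific):

```shell
# One-shot snapshot of the busiest processes, sorted by CPU
top -b -n 1 | head -n 15

# Roughly equivalent view via ps, if you prefer
ps aux --sort=-%cpu | head -n 5
```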

When we were doing performance testing of the different methods of
controlling the BIGIP, we noticed that ssh is both the slowest as well
as the most prone to disconnects. If I remember correctly, on a loaded
device it would be reliable ~50% of the time. SOAP (not supported in
the F5 ansible modules) was reliable ~95% of the time, and REST was
reliable ~98% of the time.

SSH is, additionally, one of the services that is chosen for
termination first when the device is under load.

Other processes that I would guess might appear at the top of the `top`
output are mcpd or (depending on your activated modules) mysql or
postgres. Additionally, you might want to tail the /var/log/ltm file
when you run ansible, or one of the /var/log/restjava.* files, to see
if either of them is raising lots of messages.
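A sketch of that log-watching step (the restjava glob is copied from the paragraph above; adjust to whatever files actually exist under /var/log on your unit, and on the device itself you would normally use `tail -f` to follow them live):

```shell
# Show the most recent entries from the relevant logs.
# Guarded so the commands are no-ops on a machine without these files.
for f in /var/log/ltm /var/log/restjava*; do
  if [ -f "$f" ]; then
    echo "==> $f <=="
    tail -n 20 "$f"
  fi
done
```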

Finally, out of curiosity, if you have the option of trying similar
modules using the rest transport, that might help pinpoint the
culprit.
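For instance, the older-style F5 modules accepted a `transport` key in the provider dict, so switching an existing task might look something like this (module name, hostname, and credentials are placeholders; check your module's documentation for whether it accepts `transport`):

```yaml
- name: Disable a virtual server over the REST transport
  bigip_virtual_server:
    name: vip-app-01
    state: disabled
    provider:
      server: lb.example.com
      user: admin
      password: "{{ f5_password }}"
      transport: rest   # instead of the cli/ssh transport
  delegate_to: localhost
```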

F5 has a Slack channel for support of these things too if you're
interested: f5cloudsolutions.slack.com. There is an ansible channel in
there where I and others lurk and may be able to assist further.

-tim