Possible to set serial per task (needed when using delegate_to)?

Hi

In my playbook for web servers, I need to set firewall rules so that the database accepts connections:

- name: FW rule - accept input 3306 from web server to DB server
  lineinfile: dest=/etc/sysconfig/iptables
              regexp="^-A INPUT -p tcp -m state --state NEW -m tcp -s {{ ansible_eth0['ipv4']['address'] }} --dport 3306 -j ACCEPT$"
              line="-A INPUT -p tcp -m state --state NEW -m tcp -s {{ ansible_eth0['ipv4']['address'] }} --dport 3306 -j ACCEPT"
              state=present
              insertbefore="^-A INPUT -j REJECT --reject-with icmp-host-prohibited.*$"
  delegate_to: "{{ groups.dbservers.0 }}"
  notify:
    - Restart iptables on DB server
  tags: fwrules

However, since I have multiple web servers, the lineinfile action will run in parallel on the DB server, causing unpredictable results (multiple processes trying to change the same file at the same time)…
Any thoughts about adding support for "serial: 1" in a task context?
I found this thread on the topic: https://groups.google.com/forum/#!topic/ansible-project/CNxrMIyKx58
but no solution yet…

In one attempt to work around this problem, I tried to set the FW rules in the playbook for the database server instead, by looping over groups['webservers']…
However, I still need the IP of each web server, and that is problematic. It should be possible to get the IPs using a magic variable:

{{ hostvars['test.example.com']['ansible_distribution'] }}

Since I am looping over groups['webservers'], I have the name of the web server in {{ item }}. How do I inject that variable name into the expression?
The following does not work (substituting lineinfile with shell to illustrate the variable problem):
- name: FW rule - accept input 3306 from web server to DB server
  shell: /bin/true {{ hostvars.item.ansible_eth0["ipv4"]["address"] }} {{ hostvars.[{{ 'item' }}].ansible_eth0["ipv4"]["address"] }}
  with_items:  groups['webservers']
  notify:
    - Restart iptables on DB server
  tags: fwrules  
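
For reference, the form that should work here indexes hostvars with the bare loop variable (no quotes or nested braces); a quick sketch, untested against this exact setup:

- name: FW rule - accept input 3306 from web server to DB server
  shell: /bin/true {{ hostvars[item]['ansible_eth0']['ipv4']['address'] }}
  with_items: groups['webservers']
  tags: fwrules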

Btw, when using Roles ( http://docs.ansible.com/playbooks_roles.html#roles ), in which file may I specify serial?
Neither tasks/main.yml, handlers/main.yml, nor vars/main.yml seems to work…

Best regards,
Vidar

Serial needs to be set per play.

But you can have multiple plays per file, so start a new play for the section that you want to run in serial mode.
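
Something along these lines, for instance (the group, role, and task names here are just illustrative):

- hosts: webservers
  roles:
    - webservers

- hosts: webservers
  serial: 1
  tasks:
    - name: something that must not run concurrently on the delegate
      command: /bin/true
      delegate_to: "{{ groups.dbservers.0 }}"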


But how is that done when using roles?
I have for instance roles/webservers/tasks/main.yml…
AFAIK, I can only include task lists from main.yml:

- include: firewall-rules.yml

But firewall-rules.yml may only contain tasks, right? Not "serial:" statements…
And putting "serial: 1" in roles/webservers/vars/firewall-rules.yml does not work either.

Best regards,
Vidar

I also have a setup where multiple tasks run in parallel against the same system. The way these tasks are set up, this is usually OK in my environment. However, for tasks where this wasn’t, I ended up moving the task functionality into a custom module that utilizes file locking (which essentially forces serial=1 within the same physical system). It would have been helpful for me (and it sounds like for you) if tasks had the ability to acquire a file-based lock on the system for this purpose, something like “lock_file: true” or possibly providing a name/path for the lock.

Roles are just abstractions around tasks.

Plays map roles to hosts.

You do it in the play, and the play has the role assignments.

You can put more than one play in a playbook.

Hi Garron.

Your approach sounds interesting. Would it be possible for you to share
this custom module with me and the rest of the world?

Best regards,
Vidar

Sorry for the delay in getting back to you. Essentially, my custom Ansible module uses fcntl.flock(). This has the effect that the lock is automatically released when the process exits. Here is some sample code:

import fcntl

LOCK_FILE_PATH = '/var/lock/my_module.lock'  # example path; use whatever suits your module

def main():

    # ... normal Ansible module initialization ...

    lock_file = open(LOCK_FILE_PATH, 'w')
    fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX)

    # Put code that needs to be run serially per system here.
    # The lock is released when lock_file is closed (or the process exits).
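
To make that more concrete, a complete module built around the same pattern might look roughly like the following. This is only a sketch, not Garron's actual module; the 'command' parameter and the lock path are made up for illustration, around the standard AnsibleModule boilerplate:

#!/usr/bin/python
# Sketch: run an arbitrary shell command under an exclusive flock,
# so only one copy of this module runs per machine at a time.
import fcntl
import subprocess

from ansible.module_utils.basic import AnsibleModule

LOCK_FILE_PATH = '/var/lock/ansible-serialize.lock'  # illustrative path

def main():
    module = AnsibleModule(argument_spec=dict(
        command=dict(required=True, type='str'),
    ))

    # Block here until no other copy of this module holds the lock on this machine.
    lock_file = open(LOCK_FILE_PATH, 'w')
    fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX)

    rc = subprocess.call(module.params['command'], shell=True)
    if rc != 0:
        module.fail_json(msg='command failed', rc=rc)
    module.exit_json(changed=True, rc=rc)

if __name__ == '__main__':
    main()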


I have also reached a situation where I need 'serial' to be defined at the task (handler) level. I am not sure I understand how your suggestion of having more plays could work with a role-based deployment. My use case is this: I am deploying a database cluster node role to a group of nodes, so I have a single play which applies that role to that group. The role includes a handler that restarts the database service on configuration changes, but I want this handler to be executed serially, one node at a time. I do not want to set serial=1 for the whole play, because that would significantly slow down the deployment process as the number of nodes grows.

Seems like it would be better to try to add the serial keyword to the task itself and see what that may imply.

Nothing wrong with the flock; it just should be more native IMHO, and pretty soon you're going to want serial: N, and then you'll have to create a mutex and all that funness, when most likely we could handle it in application logic…

I agree the serial keyword on each task is likely a better option for most people and is easier to use and understand.

I have multiple inventory entries that point at the same machine. In my particular situation, I wanted the tasks to run in parallel as much as possible with the restriction that it isn’t OK to have multiple in parallel on the same physical box. I realize this is probably an uncommon use case. Serial tasks would have solved my problem as well, just with longer run time in some situations.

Garron

Here is another example:

- name: Fetch public ssh key
  command: cat /root/.ssh/id_rsa.pub
  register: root_pub_key

- name: Add public ssh key to backup account
  delegate_to: "{{ backup_server }}"
  authorized_key: >
    user={{ hostvars[backup_server]['backup_user'] }}
    key="{{root_pub_key.stdout}}"

This second task cannot be executed in parallel, because the
authorized_key module is not thread safe.

Problem is, this task is in the middle of a role, so I cannot just
split my role in two parts to have 3 plays :
- role (part one)
- task with serial:1
- role (part two)

It would work, but it is really ugly.
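
Spelled out, that split would look something like this (the group and role names are placeholders; root_pub_key is registered inside the first role):

- hosts: appservers
  roles:
    - myrole_part_one

- hosts: appservers
  serial: 1
  tasks:
    - name: Add public ssh key to backup account
      delegate_to: "{{ backup_server }}"
      authorized_key: >
        user={{ hostvars[backup_server]['backup_user'] }}
        key="{{ root_pub_key.stdout }}"

- hosts: appservers
  roles:
    - myrole_part_two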


There’s currently no way to add “serial” to a task, nor is that the proper keyword for this.

I think this would be proposing an override for “forks” as a task attribute.

Just as a quick update, this has actually nothing to do with thread safety.

Ansible, in fact, even locally does not use threads - it uses forks.

Remotely, it’s more of an issue of “X is not able to be used concurrently”, which is the same thing you’d get if you were running from 2 different Ansible machines against the same infrastructure at once, as well.

delegate_to usually only makes sense in a “serial: 1” play, or at least a play with a small serial value, because if you have 500 hosts and delegate things all to one host in a host loop, you’re going to spawn 500 Python processes, and probably hit the SSH connection limit well before that :)

Hello,

Not sure if this still interests anyone, but the way I found to make it work is just like Michael explained. This is an example for future reference:

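In outline it looks like this, using the firewall task from earlier in the thread (handlers trimmed; group and role names are just placeholders):

- hosts: webservers
  roles:
    - webservers

- hosts: webservers
  serial: 1
  tags: fwrules
  tasks:
    - name: FW rule - accept input 3306 from web server to DB server
      lineinfile: dest=/etc/sysconfig/iptables
                  line="-A INPUT -p tcp -m state --state NEW -m tcp -s {{ ansible_eth0['ipv4']['address'] }} --dport 3306 -j ACCEPT"
                  insertbefore="^-A INPUT -j REJECT --reject-with icmp-host-prohibited.*$"
      delegate_to: "{{ groups.dbservers.0 }}"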

As this post is still near the top of Google search results, let me describe one more way to handle a single task that should not be run simultaneously on multiple servers:

- hosts: all
  tasks:

    - name: set fact
      set_fact:
        marker: marker

    - name: group by marker
      group_by: key=marker
      changed_when: no

    - name: target task
      debug: msg="Performing task on {{ inventory_hostname }}, item is {{ item }}"
      with_items: groups['marker']
      when: "hostvars[item].inventory_hostname == inventory_hostname"

Michael’s recommendation of putting multiple plays in a playbook worked great for me.

I ran a test and saw that all variables carry over between plays, so that if you do a lot of work to set up state, you don’t lose that when you switch to a new play in the middle of a playbook.

Here is an example showing the general layout:
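
A rough sketch of what I mean (the task content is made up; the point is that the fact from the first play is still available in the second):

- hosts: webservers
  tasks:
    - name: gather something once, with full parallelism
      set_fact:
        web_ip: "{{ ansible_eth0['ipv4']['address'] }}"

- hosts: webservers
  serial: 1
  tasks:
    - name: use the fact from the previous play, one host at a time
      debug: msg="{{ inventory_hostname }} still knows web_ip={{ web_ip }}"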

Combining some ideas here, I wrote a small action plugin – very lightly tested.

import fcntl

class ActionModule(object):

    def __init__(self, runner):
        self.runner = runner

    def run(self, conn, tmp, module_name, module_args, inject, complex_args=None, **kwargs):
        # Take an exclusive lock before running the wrapped module; a second
        # task using this plugin blocks here until the first one finishes.
        lock_file = open('/tmp/serialize.lock', 'w')
        fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX)

        # The first word of the arguments is the real module to run; the rest
        # is passed through to it unchanged.
        module_name, module_args = module_args.split(' ', 1)
        return self.runner._execute_module(conn, tmp, module_name, module_args,
                                           inject=inject, complex_args=complex_args, **kwargs)

Dropping this in action_plugins/serialize.py and touching an empty library/serialize.py, you can invoke it in your playbook:

- name: restart foo
  serialize: command supervisorctl -c /etc/supervisord.conf signal HUP foo
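
Worth noting: the lock file lives on the control machine, where action plugins execute, so this should serialize the wrapped module across Ansible's forked workers regardless of which target hosts the task runs against.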

m

Very neat solution, thanks! Works for me too, where I pull from and push to a git repository. Obviously this cannot be done in parallel due to git conflicts.

Separate plays sure will work but that’s butt-ugly.

I guess now, with Ansible 2, strategy plugins might be a clean solution for this: a strategy plugin which changes behavior based on the task context.

When I tried to use it, I got this error:

FAILED! => {"failed": true, "reason": "ERROR! this task 'serialize' has extra params, which is only allowed in the following modules: command, shell, script, include, include_vars, add_host, group_by, set_fact, raw, meta

The error appears to have been in '/home/ansible/slave/workspace/pr_test_prov2/ansible/roles/bootstrap_nodes/tasks/cobbler_netboot.yaml': line 4, column 3, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: set cobbler netboot flag
  ^ here
"}

my task definition is: