Aggregating hosts by group_by and groups

Hi there,
So my task is quite simple: run a play against a group of hosts, then, in the next play, report which hosts completed the task successfully and which didn't. I am using the group_by keyword for that, but without much success.

In more detail:
I am running a deploy playbook against a large group of hosts (called bid) in one play, where I also group each host by a boolean variable.
In the next play (same playbook) I try to print all hosts from each group to the screen.

This is how it is laid out:

play1:

  - name: group by already_failed
    group_by: key={{ already_failed | ternary('failed_hosts', 'success_hosts') }}
    tags:
      - release

play2:

  - name: set success and failed groups
    set_fact:
      failed_hosts: "{{ groups['failed_hosts'] | default('None') }}"
      success_hosts: "{{ groups['success_hosts'] | default('None') }}"
    run_once: True
    tags:
      - notification
      - release

  - name: log grouped hosts to screen
    debug: msg="Failed releases on - {{ failed_hosts }}. Success releases on - {{ groups['success_hosts'] }}"
    run_once: True
    tags:
      - release
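
Put together, the skeleton around those tasks is roughly the following (trimmed down; the hosts lines and the serial value are reconstructed from the description above, and the actual deploy tasks that set already_failed are omitted):

---
# Play 1: deploy against the whole "bid" group and sort each host into a
# result group. The serial value here is what changes the behaviour below.
- hosts: bid
  serial: 3
  tasks:
    # ... deploy tasks that set already_failed per host ...
    - name: group by already_failed
      group_by: key={{ already_failed | default(False) | ternary('failed_hosts', 'success_hosts') }}

# Play 2: report once which hosts ended up in which group.
- hosts: bid
  tasks:
    - name: log grouped hosts to screen
      debug: msg="Failed releases on - {{ groups['failed_hosts'] | default('None') }}. Success releases on - {{ groups['success_hosts'] | default('None') }}"
      run_once: True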

When I run this against 3 hosts with serial set to 3 or more, it succeeds:

TASK: [set success and failed groups] *****************************************
ok: [54.yyy.32.xxx] => {"ansible_facts": {"failed_hosts": "None", "success_hosts": "['54.yyy.32.xxx', '54.yyy.32.xxx', '54.yyy.32.xxx']"}}

But when I run it with a serial value lower than the number of hosts, the output only contains the first hosts that were added to that group, e.g.:

TASK: [set success and failed groups] *****************************************
ok: [54.yyy.32.xxx] => {"ansible_facts": {"failed_hosts": "None", "success_hosts": "['54.yyy.32.xxx', '54.yyy.32.xxx']"}}

So it looks like the newly created group, when accessed via groups['success_hosts'], is only aware of the first hosts that were added to it.

To make things more confusing, when I ran the second play against the newly created group, ALL hosts actually ran it, meaning the group really does contain all the hosts, just not when accessed through the groups variable.

TASK: [log grouped hosts to screen] *******************************************
ok: [54.yyy.32.xxx] => {
    "msg": "Failed releases on - None. Success releases on - ['54.yyy.32.xxx', '54.yyy.32.xxx']"
}
ok: [54.yyy.32.xxx] => {
    "msg": "Failed releases on - None. Success releases on - ['54.yyy.32.xxx', '54.yyy.32.xxx']"
}
ok: [54.yyy.32.xxx] => {
    "msg": "Failed releases on - None. Success releases on - ['54.yyy.32.xxx', '54.yyy.32.xxx']"
}
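
For reference, that run simply used the new group as the play target, roughly like this (a sketch of the diagnostic, not the exact play):

# The freshly created group as the play target: every grouped host runs the
# task, even though groups['success_hosts'] templated out short above.
- hosts: success_hosts
  tasks:
    - name: show which hosts are really in the group
      debug: msg="{{ inventory_hostname }} ran as part of success_hosts, groups view is {{ groups['success_hosts'] }}"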

I have a feeling this is a bug, but I would appreciate a second pair of eyes.

Any help will be much appreciated,
Thanks,
Offer

How about this:
debug: msg="Failed releases on - {{ groups[‘bids:!success_hosts’ }}. Success releases on - {{ groups[‘success_hosts’] }} "

Not sure about the syntax right now but basically:

  • You have a group of all hosts "bid"
  • You record all the hosts that are good (success_hosts)
  • You output the inventory of "bid:!success_hosts", that is, all hosts in the bid group except the ones that were successful (see the sketch below)
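
Something along these lines, perhaps (untested sketch; the second variant is the same subtraction done with the difference filter, in case the pattern form isn't accepted inside groups[...], which is a plain dict):

---
# Option 1: let the host pattern do the subtraction on the play's hosts: line.
- hosts: bid:!success_hosts
  tasks:
    - name: runs only on hosts that never made it into success_hosts
      debug: msg="Failed release on {{ inventory_hostname }}"

# Option 2: subtract the two host lists explicitly inside the template.
- hosts: bid
  tasks:
    - name: log grouped hosts to screen
      debug: msg="Failed releases on - {{ groups['bid'] | difference(groups['success_hosts'] | default([])) }}. Success releases on - {{ groups['success_hosts'] | default([]) }}"
      run_once: True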

Why is that necessary?

AFAIK it's necessary because once a host has failed, Ansible removes it from the play (or even the whole playbook?). But you can still access the group it originated from and just exclude all the hosts that were good. This way you also only need one group instead of two :slight_smile:

Someone with a greater understanding of Ansible will probably cringe by now and point out everything I stated incorrectly :slight_smile:

HTH,
Martin

Thanks, Martin!

But the issue still remains: the success_hosts group does not contain all the hosts that should be in it (I suspect this is a bug and will open an issue in the right place), so computing bid minus success_hosts will not give the right answer either.
Also, I control for failure with a variable, so hosts that failed are still in the game. That way, even if all hosts fail, the playbook does not abort, and all of them should end up reported in the failed group.
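
A rough sketch of that pattern, with a placeholder standing in for the real deploy task (only the already_failed fact matters for the grouping):

# Hypothetical deploy step: ignore_errors keeps a failing host in the play,
# and already_failed records the outcome for the later group_by.
- name: run the release
  command: /usr/local/bin/do_release   # placeholder, not the real deploy command
  register: release_result
  ignore_errors: True

- name: remember whether this host failed
  set_fact:
    # old-style "failed" filter; on newer Ansible this would be "release_result is failed"
    already_failed: "{{ release_result | failed }}"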