apt module doesn't install packages with a 'when' condition

Hi,

I'm running Ansible 1.7.2 on Ubuntu 14.04 amd64.

I made a playbook to upgrade only the packages that I added to a list.
( https://github.com/johan-chassaing/linux/blob/master/ansible/playbook/package_upgrade_list.yml )

So, I retrieve the list of every package that needs to be upgraded on my server, and the playbook upgrades a package when it matches an entry in my list.
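
The relevant part looks roughly like this (simplified from the linked playbook; the shell command is just one way I list the upgradable packages):

- hosts: all
  vars:
    upgrade_list: [apache2-mpm-worker, apache2.2-bin, apache2.2-common]
  tasks:
    - name: List upgradable packages
      shell: apt-get -s dist-upgrade | awk '/^Inst/ {print $2}'
      register: upgradable
    - name: Upgrade
      apt: name={{ item }} state=latest
      with_items: upgradable.stdout_lines
      when: item in upgrade_list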

For my test I have 4 packages that can be upgraded: apache2-mpm-worker, apache2.2-bin, apache2.2-common, and bash.

If I do not insert the when condition, all packages are installed.

TASK: [Upgrade] ***************************************************************
<127.0.0.1> REMOTE_MODULE apt name=apache2-mpm-worker,apache2.2-bin,apache2.2-common,bash state=latest
changed: [127.0.0.1] => (item=apache2-mpm-worker,apache2.2-bin,apache2.2-common,bash) => {"changed": true, "

If I add the when condition, install is skipped.

TASK: [Upgrade] ***************************************************************
skipping: [127.0.0.1] => (item=apache2-mpm-worker,apache2.2-bin,apache2.2-common)
PLAY RECAP
127.0.0.1 : ok=3 changed=1 unreachable=0 failed=0

If I replace the apt module with a debug message, only the bash package is skipped.
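
That is, the Upgrade task becomes something like:

- name: Upgrade
  debug: msg="Package to update:{{ item }}"
  with_items: upgradable.stdout_lines
  when: item in upgrade_list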

TASK: [Upgrade] ***************************************************************
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
ok: [127.0.0.1] => (item=apache2-mpm-worker) => {
    "item": "apache2-mpm-worker",
    "msg": "Package to update:apache2-mpm-worker"
}
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
ok: [127.0.0.1] => (item=apache2.2-bin) => {
    "item": "apache2.2-bin",
    "msg": "Package to update:apache2.2-bin"
}
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
ok: [127.0.0.1] => (item=apache2.2-common) => {
    "item": "apache2.2-common",
    "msg": "Package to update:apache2.2-common"
}
skipping: [127.0.0.1] => (item=bash)

Do you have any information about that?
Or any tips on how I can debug it?
Thank you

Johan

If I understand your intent, it is to only update Apache if it is already installed, so that systems without Apache on them won't accidentally have it installed. A better way to go about this would be to have an “apache” or “webserver” group, and assign that group only to the servers that are supposed to have Apache installed on them.

You didn’t post your inventory, but let’s say it looks like this now:

---- start inventory ----
host1
host2
host3
host4
---- end inventory ----

Let's make it more descriptive:

---- start inventory ----
[webservers]
host1
host2

[databases]
host3

[proxy-servers]
host4
---- end inventory ----

You can now have a playbook along these lines (a sketch, untested):

---- start playbook ----
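- hosts: webservers
  tasks:
    - name: Upgrade the apache packages
      apt: name={{ item }} state=latest
      with_items:
        - apache2-mpm-worker
        - apache2.2-bin
        - apache2.2-common
---- end playbook ----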

Thanks for your reply :)
In my test case I used apache indeed, but I would like this to work with any other packages.
I have a very long list of servers, and it is not up to date with all the services running on them.
My intent is to not upgrade packages that can interrupt my production (like apache, mysql…). So I want to keep a list of packages that can safely be upgraded, and update them only if they are present.
If a specific server has htop, I want to upgrade it without installing it on the 600 others.

If you have any ideas.
Johan

Sounds like you need to go group by group, server by server, and add them in slowly instead of jumping to the end and trying to come up with a playbook that handles all scenarios for your entire 600 machines from the start. You could have a single inventory file with all 600 hosts in it, but I think it is easier and safer to have multiple inventories (one per group-environment pair). This lets you run plays on just one environment at a time instead of running a huge “do everything to every machine” playbook; see the sketch below.
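For example, with a layout along these lines (paths are illustrative):

inventories/webhr-prod/hosts
inventories/webhr-test/hosts

you would target one environment at a time:

ansible-playbook -i inventories/webhr-prod site.yml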

Let’s take an example. Pick one set of machines that are alike: say you have a load-balanced pair of web servers that your HR department uses to store files on, so you call the instance “webhr”. You have 2 production “webhr” servers and 2 test systems, so you would have “webhr-prod” and “webhr-test” instance-environment groupings.

Your webhr-prod inventory could look like this:

[webhr-prod]
hrweb-1243
hrweb-9432

[webservers:children]
webhr-prod

I have a gist up at https://gist.github.com/BradGunnerSGT/ba1cea6c6629a702f9eb with a lot more detail that describes how we handle this sort of thing. Basically it lets us “touch” only the machines that need to be touched during that run, and if someone slips and runs ansible pointing at the complete inventory directory, then ansible only runs the parts of the master playbook pertinent to each individual machine and skips the rest. A system in “webhr-prod” would never have the “tomcat” or “mysql-server” roles run, and vice versa (unless the system has both “mysql-server” and “webserver” defined).
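
The master playbook itself is just a series of plays keyed on those groups, roughly like this (a sketch; the gist has the full details):

---- start master playbook ----
- hosts: webservers
  roles:
    - webserver

- hosts: databases
  roles:
    - mysql-server
---- end master playbook ----

A host that is not in a given group simply has that play skipped, which is why pointing at the complete inventory is harmless.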

Hi Johan,

Regardless of whether there’s a cleaner way to implement what you’re doing, this does appear to be a bug. Could you please open an issue on GitHub for this, so that we can keep track of it?

Thanks!

Are you sure it is a bug? Looking at the output, the “magic” that consolidates module runs by joining the with_items items updates ‘item’ to be a comma-separated string of all items.

I haven't tested, but I would think a when statement would have to do something such as using .split(',') on item and using difference/intersection in its logic, along the lines of the sketch below. Maybe I am just thinking within the confines of how this works now, but without updating ‘item’ to be a joined string, I am not sure things will end up working correctly.
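
For instance, assuming ‘safe_packages’ is the allow-list (an illustrative name, untested), the condition would have to look something like:

when: item.split(',') | intersect(safe_packages) | length > 0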

Yes, here’s a simple reproducer:

- hosts: localhost
  gather_facts: no
  vars:
    test: ['a', 'b']
  tasks:
  - shell: echo -e 'a\nb\nc'
    register: result
  - debug: var=result
  - name: do it
    yum: name="{{item}}"
    with_items: result.stdout_lines
    when: item in test

The output of the “do it” task is:

TASK: [do it] *****************************************************************
skipping: [127.0.0.1] => (item=a,b)

So you can see that it correctly pruned item "c"; however, it still skipped the task overall, which is incorrect.

This seems fishy to me. Wouldn't you expect an item=a and an item=b line? Does item=a,b imply that it's checking to see if the string "a,b" is in test (which it isn't)?

Indeed, if you replace 'yum: name="{{item}}"' with 'debug: var=item', you get different results:

  TASK: [do it] *****************************************************************
  ok: [localhost] => (item=a) => {
      "item": "a"
  }
  ok: [localhost] => (item=b) => {
      "item": "b"
  }
  skipping: [localhost] => (item=c)

Is it surprising that debug treats the items as list elements, but yum treats them as a comma-separated string of elements?

                                      -Josh (jbs@care.com)


Nothing fishy at all - there is an optimization for certain modules (primarily apt and yum) where items are combined into a single execution. This makes these modules way more efficient, since the underlying package management systems are able to handle a list of package names at once just as easily.

But yes, the bug appears to be that some later conditional check is incorrectly being evaluated to make the task be skipped rather than run. As Matt mentioned above, there is a per-item check with the conditional to remove individual items, which I believe should be the only conditional check when the items have been merged into a list like this.

This is a total hack, but a cute one:

As a workaround try:

with_items: result.stdout_lines|intersect(test)

The filter will only return items common to both lists, which seems to be what you want.
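
Applied to the reproducer above, the task would look something like this (the when is then unnecessary):

- name: do it
  yum: name="{{item}}"
  with_items: result.stdout_lines|intersect(test)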

Brian Coca