ansible week and thoughts

So this week, among other things, I've been working on ansible
playbooks and how to structure a repository for a bunch of systems. For
our purposes let's say a bunch is not a massive number but a
middle-range number - 100-150ish servers of various kinds on multiple
distros - nothing dramatically different.

I've also been working on a couple of scripts using the ansible api to
give me a simple programmatic interface to ssh.

I talked to some collaborators and coworkers about ansible and fleshed
out some areas where it needs improvement or changes. I wanted to
articulate those thoughts.

So let's start with the positive items:

1. the api and the async modules work pretty well for letting me
communicate with a bunch of boxes (or a bunch of boxes about different
things). I think I can use this effectively for a number of projects,
not the least of them being post-kickstart provisioning and sanely
scripting a large-ish number of processes where doing them via puppet
is not convenient or just not possible and doing them manually is
error-prone.
The api is a great addition on top of paramiko - it is what I think
people want paramiko to be, and I like that a lot. Even if you just
used the simplest module (command) and added async to it, it gives you
A LOT of power in a sensible mechanism for communication.
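
To make that concrete, here's the kind of thing I mean - a sketch
against the api as it looks to me right now (kwarg names from memory,
so treat them as illustrative):

    import ansible.runner

    # run one command across a pattern of hosts, forked; results come
    # back as a dict with 'contacted' (reached) and 'dark' (unreached)
    results = ansible.runner.Runner(
        pattern='webservers',
        module_name='command',
        module_args='/usr/bin/uptime',
        forks=10,
    ).run()

    for host, res in results['contacted'].items():
        print host, res.get('stdout', '')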

2. the modules - most of them work well - some need some love, but b/c
of how they run, fixing and testing them is trivially easy. That's good.
In comparison to some other tools the ansible modules have a way to go
- and that's tricky. On the one hand I think moving that distance is
not hard - otoh I think it would be worthwhile to discuss
a common module basis with other python-based projects so we could
stop duplicating code. I've looked at a bunch of the salt modules and
the func modules and some of how bcfg2 runs things, and I do think all
of these tools could gain ground with a common way of editing
fstab, a common set of service/chkconfig callers, a common yum/apt
module, etc, etc.

3. the inventory - the host inventory is a nice and straightforward
mechanism. I like that. I also like the idea of being able to combine
it with my existing inventory system easily. I think there is room for
growth there.
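
For context, the inventory I'm talking about is just the plain hosts
file - something like this (hostnames made up, obviously):

    [webservers]
    web1.example.com
    web2.example.com

    [dbservers]
    db1.example.com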

Things I'm concerned about:
1. playbooks/dsl - the playbooks and the yaml-ishness are tricky.
There are still a fair number of ways to make the playbook/yaml parser
traceback, and sometimes figuring out what the syntax issues are can be
tricky. I also worked through a number of "does this variable exist
here or not" issues this week (and some of them have been fixed,
rapidly, so I acknowledge it is improving), but I'm concerned about how
far it needs to go and whether there are growing pains that might be
onerous. The general issue is that I think there is some resistance
to calling this a language - but, ultimately, it is one, and that
resistance makes it harder to admit. This could be one reason to keep
the playbooks around but write a module to use some other project's DSL
on the system, or it could be a reason to bite the bullet and have it
be a language. I think everyone is concerned about the long term
implications of writing and maintaining all the modules AND a language.
Coming from func I know how problematic maintaining the modules can
become.

2. performance scale. One of the reasons I like ansible is the
push-mechanism being ssh. Now - for "I just kickstarted 5 systems - go
provision them with this playbook" it is more or less fine. Running that
same playbook on all my systems - even forked - is gonna take a while,
though. I did a preliminary test, and a bunch of the modules and
multiple-item execution are going to need a fair bit of work to keep the
ssh connection time from eating us alive. To be clear - I don't think
ansible was intended for the 1000+ machine scale - but if I'm going to
be learning a dsl I'd rather learn the same dsl that I can use for
c&c/post-provisioning mechanisms AND for my
maintenance-mode-run-every-30-minutes mgmt tooling. Right now, if I have
150 systems and fork off 25 forks on my master system to run the
playbooks, it will take 6 cycles of 25 to cover them all. The partial
playbook I've implemented took 47s to provision one system - and I
suspect I wrote about 1/5th of what we normally do in puppet, so I'm
guessing about 5 minutes per server in the end. If we figure 5 minutes
per server and 6 cycles to cover all 150, we're talking half an hour to
get it all done. That's a concern.

Things I'm intrigued by (as answers to what I'm concerned with :)):

Ansible Pull:
So the ansible pull mode that Stephen Fromm has worked on seems
great - but it needs the dsl and the modules to be improved to really
be more useful. I am concerned about making my whole git repo available
to each node, though. I'm more inclined to want to say "take this
playbook, collect all the files, modules and templates that it will
need to run for each node and put that into a tarball or a git repo or
whatever PER NODE and shove just that at each node" - my reasoning is
simple:
  - my git repo(s) will eventually contain certs/keys/passwords -
    all sorts of random stuff - and I definitely do not want all of
    that, for all my systems, on every system.
  - I also do not want a node to be able to get at anything other than
    what it has been explicitly given.
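
The shape of what I mean, as a hypothetical sketch (bundle_for_host and
the step that resolves which files a host needs are made up - nothing
like this exists in ansible today):

    import tarfile

    def bundle_for_host(host, needed_files, outdir='/var/tmp/bundles'):
        # pack only the files this one host's playbook run needs, so
        # the node never sees the rest of the repo
        path = '%s/%s.tar.gz' % (outdir, host)
        tar = tarfile.open(path, 'w:gz')
        for f in needed_files:
            tar.add(f)
        tar.close()
        return path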

I think that set of changes should actually be quite simple to achieve.
If that's the case then many of my playbook performance concerns just
go away.

Ansible API:
As I said before, I like the power of the api. If the python api is
stabilizing - and I can see a handful of other items people may want,
but it looks stable-ish to me - then I think I would want to start
writing other non-playbook-based programs with it. I have the start of
a couple already:
http://fedorapeople.org/gitweb?p=skvidal/public_git/scripts.git;a=tree;f=ansible;hb=HEAD

and I have 2 or 3 more that I think will be ports of things I wrote
for func.

Is there any reason to think it's not stabilizing at this point?

So those are a collection of my (and other folks') thoughts after
spending most of a week trying to figure out how to make this all
function together and what portions of the infrastructure I help
maintain can benefit from this.

I'm fairly hopeful about the possibilities here. Hope that comes across
in my mail.

thanks,
-sv

Replies inline.

  1. the api and the async modules work pretty well for letting me
    communicate with a bunch of boxes (or a bunch of boxes about different
    things). I think I can use this effectively for a number of projects,
    not the least of them being post-kickstart provisioning and sanely
    scripting a large-ish number of processes where doing them via puppet
    is not convenient or just not possible and doing them manually is
    error-prone.
    The api is a great addition on top of paramiko - it is what I think
    people want paramiko to be, and I like that a lot. Even if you just
    used the simplest module (command) and added async to it, it gives
    you A LOT of power in a sensible mechanism for communication.

Cool.

  2. the modules - most of them work well - some need some love, but b/c
    of how they run, fixing and testing them is trivially easy. That's good.
    In comparison to some other tools the ansible modules have a way to go
    - and that's tricky. On the one hand I think moving that distance is
    not hard - otoh I think it would be worthwhile to discuss
    a common module basis with other python-based projects so we could
    stop duplicating code. I've looked at a bunch of the salt modules and
    the func modules and some of how bcfg2 runs things, and I do think all
    of these tools could gain ground with a common way of editing
    fstab, a common set of service/chkconfig callers, a common yum/apt
    module, etc, etc.

I'm not really interested in the overhead of coordinating with other projects - this adds bloat and organizational headaches. Further, Ansible is all about keeping complexity and dependencies low.
Let’s say I’m totally not a believer in the direction any of the other projects are going. This is why this project exists.

So yeah, this isn’t going to happen. It would be a huge distraction, I’m afraid.

So far the number of people contributing to module support and upgrades is STELLAR and I have absolutely no lack of faith in that continuing to be the case. We've got a huge user base already so it's pretty easy
to stay up to date and scale things out.

  3. the inventory - the host inventory is a nice and straightforward
    mechanism. I like that. I also like the idea of being able to combine
    it with my existing inventory system easily. I think there is room
    for growth there.

Things I’m concerned about:

  1. playbooks/dsl - the playbooks and the yaml-ishness are tricky.
    There are still a fair number of ways to make the playbook/yaml parser
    traceback, and sometimes figuring out what the syntax issues are can
    be tricky. I also worked through a number of "does this variable exist
    here or not" issues this week (and some of them have been fixed,
    rapidly, so I acknowledge it is improving), but I'm concerned about
    how far it needs to go and whether there are growing pains that might
    be onerous. The general issue is that I think there is some resistance
    to calling this a language - but, ultimately, it is one, and that
    resistance makes it harder to admit. This could be one reason to keep
    the playbooks around but write a module to use some other project's
    DSL on the system, or it could be a reason to bite the bullet and
    have it be a language. I think everyone is concerned about the long
    term implications of writing and maintaining all the modules AND a
    language. Coming from func I know how problematic maintaining the
    modules can become.

I agree and disagree.

The operating theory is that there aren’t that many things to do when configuring systems. Puppet has taught people complex idioms, and Ansible is about getting back to basics. Thinking in terms of the idioms of another project is going to lead to pain. Think differently, and it will be ok.

I think our modules have been pretty simple and we avoid bitrot (like Func acquired) by not supporting everything under the sun in core, and choosing to accept “language” features only when they apply to a very large use case.

It’s also all about minimizing code paths and complexity.

Supporting another project’s “DSL” is also never going to happen.

If you want a programming language, this is NOT going to be the project for you.

Ansible’s going to be simple and light, even if that’s not for everyone, because those were the goals I set out when creating the project – and that’s why people are here now. They recoiled at the notion of the other tools.

I will continue to resist that – perpetually. It is why Ansible is different.

As for variable issues and intermediate bugs - sure, we have bugs and we fix them. It's software. I think bugs are getting fixed blazingly fast,
and I am hugely impressed with everyone. Most of the things fixed were very niche corner cases, well out of the way of most people's basic
usage of playbooks.

There are definitely tiers of different classes of users and environments - people trying to port Puppet over to Ansible are some of those
having the most complexity, because they are trying more, whereas people in green-field scenarios are doing basic stuff and finding it works out
fine.

We’ll continue to improve all of this over time.

Anyway, I’d really like to speak in terms of concrete examples about the future – as that’s the only way we can really frame discussions around
features. What language features/etc do you want, etc.

  2. performance scale. One of the reasons I like ansible is the
    push-mechanism being ssh. Now - for "I just kickstarted 5 systems - go
    provision them with this playbook" it is more or less fine. Running
    that same playbook on all my systems - even forked - is gonna take a
    while, though. I did a preliminary test, and a bunch of the modules
    and multiple-item execution are going to need a fair bit of work to
    keep the ssh connection time from eating us alive. To be clear - I
    don't think ansible was intended for the 1000+ machine scale - but if
    I'm going to be learning a dsl I'd rather learn the same dsl that I
    can use for c&c/post-provisioning mechanisms AND for my
    maintenance-mode-run-every-30-minutes mgmt tooling. Right now, if I
    have 150 systems and fork off 25 forks on my master system to run the
    playbooks, it will take 6 cycles of 25 to cover them all. The partial
    playbook I've implemented took 47s to provision one system - and I
    suspect I wrote about 1/5th of what we normally do in puppet, so I'm
    guessing about 5 minutes per server in the end. If we figure 5 minutes
    per server and 6 cycles to cover all 150, we're talking half an hour
    to get it all done. That's a concern.

This is entirely why ansible-pull exists. Deployment cases, initial provisioning setup, and ad hoc needs over SSH all work for a moderate number of hosts.

If you need more, you can still use the pull script which is just the local mode we’ve already had for a while.

When I wrote Ansible, it was written clearly with small shops in mind (particularly a past employer with 50-100 boxes), knowing it wouldn’t be appropriate for everyone – the pull based model is what you need in a larger environment.

I still think the push model is absolutely vital to multi-tier release processes in complex web app environments, and it's going to remain very appealing there (and also obviously for ad hoc).

Config, it depends. For repeated configuration you are probably better off with pull; if it's just initial setup and you are NOT so concerned with repeated "whack my system back in line" Puppet-style semantics, it's less of an issue.

From blending in some outside industry experience so far, not everyone is actually concerned about that as much as DevOps circles might state. That being said, our modules are still happy and idempotent, and if you want to run them
repeatedly, you totally can.

Things I’m intrigued by (as answers to what I’m concerned with :)):

Ansible Pull:
So the ansible pull mode that Stephen Fromm has worked on seems
great - but it needs the dsl and the modules to be improved to really
be more useful. I am concerned about making my whole git repo available
to each node, though. I’m more inclined to want to say “take this
playbook, collect all the files, modules and templates that it will
need to run for each node and put that into a tarball or a git repo or
whatever PER NODE and shove just that at each node” - my reasoning is
simple:

Ultimately it's a minimum viable product, sure. However, this is also exactly how the best of them scale Puppet up to large numbers. As I said in the email yesterday, things like NFS are also possible.

Please speak in terms of concrete examples where you are talking about modules, it’s not helpful to say they need to be improved but not indicate HOW they can be improved.

  • my git repo(s) will eventually contain certs/keys/passwords -
    all sorts of random stuff - and I definitely do not want all of
    that, for all my systems, on every system.
  • I also do not want a node to be able to get at anything other than
    what it has been explicitly given.

Sure, I don’t expect it to be for everybody… it’s not trying to solve all the world’s problems.

If you need to push password info, pull is not intended for that.

The reason ansible is so freaking small is that it cuts out the server-side implementations that provide nodes just what they need - no fileserver, etc, etc.

I will continually resist adding that, because at that point it becomes just like the other config tools… and that's not why people are here.

I think that set of changes should actually be quite simple to achieve.
If that’s the case then many of my playbook performance concerns just
go away.

Ansible API:
As I said before, I like the power of the api. If the python api is
stabilizing - and I can see a handful of other items people may want,
but it looks stable-ish to me - then I think I would want to start
writing other non-playbook-based programs with it. I have the start of
a couple already:
http://fedorapeople.org/gitweb?p=skvidal/public_git/scripts.git;a=tree;f=ansible;hb=HEAD

and I have 2 or 3 more that I think will be ports of things I wrote
for func.

Is there any reason to think it's not stabilizing at this point?

Nope :)

So those are a collection of my (and other folks') thoughts after
spending most of a week trying to figure out how to make this all
function together and what portions of the infrastructure I help
maintain can benefit from this.

Good info, thanks Seth… I guess my point is I’d really like to hear discussion about WHAT in terms of modules and language you need, and how we can balance that
with complexity tradeoffs first and foremost.

The comments about Ansible pull and good ways to lock up passwords are a definite question, but I don't anticipate everybody has all of these problems per se. I am interested in hearing
creative attempts at solutions.

Right now this sort of reads like you’re frustrated with some things not being at the level you want … and I sympathize … but I also know a lot of people ARE really happy too.

So I’d prefer to frame discussions in terms of a list of what you want, and we can see what we might want to add… and whether it makes sense for the projects.

I think I need to see more concrete examples about what you might want in terms of playbook capabilities that we haven’t talked about already, and what you want to see in terms of module capabilities they don’t have…

We should have another discussion on ansible-pull and private data, sure, which is why I brought up the post yesterday. Then again, if we don't solve it, I am still very happy we've created a tool that solves a huge
slew of use-case problems for people who were unhappy with the complexity of other tooling.

If we miss a few use cases in the name of keeping it simple, I'm kinda ok with that too.

Anyway, contribution volume lately has been AMAZING - I have no lack of faith in our present state, our ability to address concerns, or our ability to move things forward where we need to. I also think our ultimate goal
is a day when Ansible inventory/runner/playbook code stops growing altogether - I want this to start happening very very soon - and almost all activity revolves around modules and the occasional subtle bugfix.

So yeah, I don’t mean to be discounting opinions – but I also want to make it clear – the simplicity Ansible has means occasionally it will be “no” to some features and capabilities, and the same way it makes it not
for some people, it makes it exactly right for other people too. I think it’s everyone’s job to figure out what is best for them, because I’m not going to try to please everyone – the original design goals will always
be a huge driving factor.

–Michael

Replies inline.

me too!

including this one!

I'm not really interested in the overhead of coordinating with other
projects -- this adds bloat and organizational headaches. Further,
Ansible is all about keeping complexity and dependencies low. Let's
say I'm totally not a believer in the direction any of the other
projects are going. This is why this project exists.

So yeah, this isn't going to happen. It would be a huge
distraction, I'm afraid.

So far the number of people contributing to module support and
upgrades is STELLAR and I have absolutely no lack of faith in that
continuing to be the case. We've got a huge user base already so
it's pretty easy to stay up to date and scale things out.

Fair enough. I'm probably going to grab some code from other projects -
the 'should we coordinate' thing is a question I always ask before I do
something like that.

In particular salt's fstab/mount module looks ripe for the pickings. :)
There are so many ways I love open source software - this is one of
them. :)

I agree and disagree.

The operating theory is that there aren't that many things to do when
configuring systems. Puppet has taught people complex idioms, and
Ansible is about getting back to basics. Thinking in terms of the
idioms of another project is going to lead to pain. Think
differently, and it will be ok.

Differently is do-able up to a point. Right now I'm thinking less in
terms of code and more in terms of repository structure: how do I
describe these tasks/handlers in such a way that I can use them to help
me both provision and maintain the systems going forward - and also so
I can share tasks/handlers with others.

Anyway, I'd really like to speak in terms of concrete examples about
the future -- as that's the only way we can really frame discussions
around features. What language features/etc do you want, etc.

I think multiple-level task and handler inclusion is going to be
necessary.

I think getting the variable replacement scoping defined really well is
also important.

Those are two specific issues.

Config, it depends. For repeated configuration you are probably better
off with pull; if it's just initial setup and you are NOT so concerned
with repeated "whack my system back in line" Puppet-style semantics,
it's less of an issue.

From blending in some outside industry experience so far, not everyone
is actually concerned about that as much as DevOps circles might state.
That being said, our modules are still happy and idempotent, and if you
want to run them repeatedly, you totally can.

Please speak in terms of concrete examples where you are talking
about modules, it's not helpful to say they need to be improved but
not indicate HOW they can be improved.

The specific thing I am thinking about is multiple-item calls to the
following modules:
  - service, yum, apt, copy, file and virt will be expensive.

Beyond that I'm pretty sure that we have and we will find exciting
cases like the service module - when we try to cover more than rh-ish
and debian-ish boxes.

-sv

<snip>

Please speak in terms of concrete examples where you are talking
about modules, it's not helpful to say they need to be improved but
not indicate HOW they can be improved.

The specific thing I am thinking about is multiple-item calls to the
following modules:
- service, yum, apt, copy, file and virt will be expensive.

I have one comment on the yum side of things. With RH systems
contacting rhn rather than an in-house satellite/spacewalk system, you
can easily get your hosts blacklisted with repeated yum actions in a
given period of time. This is something we see with the way puppet
handles '<package> => latest' and the like, since it doesn't aggregate
all the yum calls into one (which would be quite a task) and each
'<package> => action' requires a yum query. This can be ameliorated if
you think about it when crafting your manifests, but it's an annoying
hoop to jump through. I don't know that there's a good way to solve
this really, but if you have a system that has a lot of package
checks/updates running on a regular basis, it's easy to get it
blacklisted from rhn. IIRC, it's something like 1000 calls in a 24hr
period.

Fair enough. I’m probably going to grab some code from other projects -
the ‘should we coordinate’ thing is a question I always ask before I do
something like that.

In particular salt's fstab/mount module looks ripe for the pickings. :)
There are so many ways I love open source software - this is one of
them. :)

Might be better to reimplement it - for clarity purposes I don't want stuff being under different licenses.

But ideas, probably, sure.

Anyway, I’d really like to speak in terms of concrete examples about
the future – as that’s the only way we can really frame discussions
around features. What language features/etc do you want, etc.

I think multiple-level task and handler inclusion is going to be
necessary.

yeah, we need to do this.

I think getting the variable replacement scoping defined really well is
also important.

A bit unclear what this means… details?

The specific thing I am thinking about is multiple-item calls to the
following modules:

  • service, yum, apt, copy, file and virt will be expensive.

Excessive over SSH via repeated ops, yes.

Using with_items, I think only yum is going to be bad because of the repeated calculations?

apt-cache is pretty well optimized, I think.

That being said, I’m not opposed to them taking lists, we just need to be consistent.

I have one comment on the yum side of things. With RH systems
contacting rhn rather than an in-house satellite/spacewalk system, you
can easily get your hosts blacklisted with repeated yum actions in a
given period of time. This is something we see with the way puppet
handles '<package> => latest' and the like, since it doesn't aggregate
all the yum calls into one (which would be quite a task) and each
'<package> => action' requires a yum query. This can be ameliorated if
you think about it when crafting your manifests, but it's an annoying
hoop to jump through. I don't know that there's a good way to solve
this really, but if you have a system that has a lot of package
checks/updates running on a regular basis, it's easy to get it
blacklisted from rhn. IIRC, it's something like 1000 calls in a 24hr
period.

Hmm.

Most people I know have set up basic mirrors with yum reposync. This is what I’ve always encouraged
everyone to do with Cobbler.

It seems crazy to have all those systems hitting RHN, with RHN not guaranteed to be up
or reliable, and with all the dependencies changing under you because your repo isn't snapshotted.

That being said, it does make sense for apt and yum to take lists. If only that, at first. It needs to be consistent so
if yum changes so does apt.

The other modules I’m less concerned about right now.

I did also suffer a bit from this in the beginning.

There are variables coming into play from all directions.
We have:
- inventory variables
- system facts
- playbook variables (internal and external)
- include variables

Where can you use each of those? Runner, task, include, with_items?

For example: should you be able to use a system fact to include a
playbook? (no!) But you can use them to include variable files in a
playbook... Are include variables seen in runner? (yes!) But it's not
immediately obvious.

The non-obviousness is in: which variables are available in a playbook
and which ones in runner? If you know the code, you know that the
action line gets passed to runner and is templated there, so host
variables are available - same for only_if. But with_items is handled
strictly in the playbook. So you can have two lines right next to one
another where one accepts jinja2 templating with system-specific
variables and the other doesn't.
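
Something like this, side by side (illustrative snippet - the variable
names are placeholders, not real facts):

    - name: two adjacent lines, two templating rules
      action: command /bin/echo $ansible_hostname  # templated in runner: host vars/facts work
      with_items: $some_fact_list                  # playbook-side: facts do not work here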

This is obvious to us, but for someone new to ansible, this can be confusing.

Same story for which syntax to use: $ or {{ }}. I'm inclined to say:
only support $ and use jinja2 for real templates only. Make your new
no_engine the default.

It's getting more consistent over time, but a list of which variables
are available in which statement in a playbook would be a good
starting point to discuss.

Jeroen

This is obvious to us, but for someone new to ansible, this can be confusing.

Same story for which syntax to use: $ or {{ }}. I'm inclined to say:
only support $ and use jinja2 for real templates only. Make your new
no_engine the default.

Agreed here.

We need to support something like ${foo.hashkey.hashkey} for accessing the nested fact data though, before we can do that.

It’s getting more consistent over time, but a list of which variables
are available in which statement in a playbook would be a good
starting point to discuss.

Yep.

include and with_items can’t make use of fact variables today, nor can inventory, because the order of execution is basically like this:

  • read inventory file
  • playbook parsing and construction of task lists
  • get system facts
  • do vars_files imports
  • run line items
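
To illustrate ($some_fact here stands in for any setup-module fact; the
snippet is hypothetical):

    # parsed at step 2, before facts exist - cannot work:
    #   - include: tasks/$some_fact.yml
    # evaluated at step 5, after facts are in - works:
    #   - action: command /bin/echo $some_fact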

I think with more usage of with_engine and moving more evaluation into runner (with_items is runner code, not a playbook thing, even though
it will never be surfaced in /usr/bin/ansible), we can probably eliminate those quirks.

The initial implementation meant it was a lot easier to think of Playbooks/Runner as separate tools, so with with_items and includes being
playbook constructs, their evaluation is not as per-host as other things.

Fixable I’m pretty sure … may be slightly confusing to implement, but doable.

This probably could do with a brief explanation in the docs, but it would be better to make it work as much as one would expect first.

–Michael

We need to support something like ${foo.hashkey.hashkey} for accessing the
nested fact data though, before we can do that.

I wrote some code to flatten the facts data structure for use in a
script that used the ansible API. I'll see if I can re-use that.
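
Roughly this kind of thing (a minimal sketch of the flattening I mean,
not the exact code):

    def flatten_facts(d, parent='', out=None):
        # turn {'eth0': {'ipv4': {'address': x}}} into
        # {'eth0.ipv4.address': x} so dotted lookups can reach it
        if out is None:
            out = {}
        for k, v in d.items():
            key = parent + '.' + k if parent else k
            if isinstance(v, dict):
                flatten_facts(v, key, out)
            else:
                out[key] = v
        return out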

I think with more usage of with_engine and moving more evaluation into
runner (with_items is runner code, not a playbook thing, even though it
will never be surfaced in /usr/bin/ansible), we can probably eliminate
those quirks.

This is also the most straightforward way to solve the yum-needs-a-list
problem. But then we'd need to special-case the yum and apt modules -
and whatever package manager modules we support in the future - in
Runner... I don't particularly like all this special casing, but your
explanation the other day was reasonable. We're almost in cdist
territory if we add too much complexity (
http://www.nico.schottelius.org/software/cdist/man/latest/man7/cdist-stages.html
).

The initial implementation meant it was a lot easier to think of
Playbooks/Runner as separate tools, so with with_items and includes
being playbook constructs, their evaluation is not as per-host as other
things.

Agreed. It's useful for auditing to ask Playbook: now get me the
complete playbook with all includes in place for host X. Add anything
host-variable based to it and you pretty much lose that without
accessing the host (obviously).

Jeroen

This is also the most straightforward way to solve the yum-needs-a-list
problem. But then we'd need to special-case the yum and apt modules -
and whatever package manager modules we support in the future - in
Runner… I don't particularly like all this special casing, but your
explanation the other day was reasonable. We're almost in cdist
territory if we add too much complexity (

We will not be special casing yum in runner. This is a slippery slope
and really isn’t required.

I really like the idea of making with_items smarter, like I mentioned earlier:

action: yum name=$items state=latest
with_items:
  - a
  - b
  - c

renders to:

action: yum name=a,b,c state=latest

Whereas:

action: user name=$item state=present
with_items:
  - a
  - b
  - c

Makes three tasks.

Further, this also needs to work:

with_items: $list_variable

Obviously the yum module needs to be upgraded to support taking a comma separated list too.
And (dead horse, I know) if we do this, we must also do apt for me to accept either patch.
That's to eliminate questions about why one of them is different :)

http://www.nico.schottelius.org/software/cdist/man/latest/man7/cdist-stages.html
).

Yikes :)

Yes, I think if we have to explain something like that, we have gone off the rails.

I think I misunderstood this actually. We don't want a "yum_command" method on par with "copy"
in runner, but if with_items has a list of modules it knows support lists, that seems like a much
more elegant solution than $item vs $items.

Great!