how to share custom plugins with the community

Hello,

I’ve written a simple action plugin to import CSV file data on the management machine into a variable in a playbook.

I’d like to be able to share this with the community but it seems that Ansible Galaxy is only for roles/modules which execute on the remote managed nodes.

[ Note this is an action plugin not a role/module because the csv data is on the management machine, not the managed nodes. ]

So, I have a few questions:

  1. Am I correct that Ansible Galaxy cannot be used to share action plugins?

  2. If so, are there any plans for Ansible Galaxy to encompass other types of plugins besides roles/modules?

  3. If not, what are the chances of this being included as an Ansible feature? (I’d be happy to explain the use case and work on a fork / pull request.)

Regards,
John

You can put your plugins in a role (inside the plugin-specific dir) and share that on Galaxy. To use the plugin, you just need to reference the role in a play.
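For instance, a sketch of what such a role could look like (names here are placeholders; `action_plugins/` is one of the plugin-specific directories Ansible searches inside a role):

```
myrole/
├── action_plugins/
│   └── myplugin.py      # the plugin you want to share
└── meta/
    └── main.yml         # galaxy_info so the role can be published to Galaxy
```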

OK, I’ll try that. Thanks.

I’ve tried doing as you suggest but I’m getting the error:

ERROR! module is missing interpreter line

My action plugin is a single Python script called load_csv.py, which I previously stored in ./action_plugins, relative to the directory in which my playbook is stored.
This worked.

Now I’ve moved it to /etc/ansible/roles/load_csv/action_plugins and I’m getting the above error.

(/etc/ansible/roles/ is where ansible-galaxy installs roles on my system)

Looking at the ansible source around the error, it looks like it’s expecting to find some module code to go with the role (with a shebang on line 1).
But surely this would run on the target node, which is not what I’m after.

Any ideas ?
I’m new to the ansible source code so I’m finding it difficult to debug.

I have found that if you use an action_plugin, you still need a file of the same name in the library dir, even if that file is empty.

Correct, Mike.

John: Look at the modules for debug and assert. You’ll see they are just documentation. All the functionality of those modules happens in their associated action plugins.

https://github.com/ansible/ansible-modules-core/blob/c86a0ef84a46133814bf6f240237640139e09fad/utilities/logic/assert.py

https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/assert.py

The one caveat of Brian’s suggestion is that the community will have to “import” your plugin by declaring it as a role: one that runs no tasks and just puts the plugin in Ansible’s search path for that play.
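In other words, a consumer’s play would look something like this (a sketch, assuming the role is shared under the name `load_csv`):

```yaml
- hosts: webservers
  roles:
    - load_csv                 # runs no tasks; just puts the role's action plugin on the search path
  tasks:
    - name: use the shared plugin
      load_csv: file=data.csv
      register: result
```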

Mike, Timothy & Brian,

OK, got it now thanks.

Here’s my structure…

```
/etc/ansible/roles
└── load_csv
    ├── action_plugins
    │   └── load_csv.py   # action plugin code
    └── library
        └── load_csv      # empty
```

Called as follows from a playbook:
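A minimal sketch of such a call (assuming `csvfile` is defined elsewhere and points at a CSV on the management machine):

```yaml
- hosts: all
  roles:
    - load_csv                      # makes the role's action plugin available to this play
  tasks:
    - name: load csv file
      load_csv: file={{csvfile}}    # runs on the management machine
      register: filesystems
```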

I think, generally, action_plugins are used if you want to do local pre-processing on the localhost before pushing changes to the remote server.

The other reason to use an action_plugin is if your task needs access to task vars (different from task_args) and other data that is not provided to the library modules. Remember that library modules don’t need to be Python (I wrote one in Tcl for something that required it).

For tasks that you only want to run once, Ansible provides a ‘run_once’ attribute you can assign to the task. Personally, I don’t like ‘run_once’ because any changes are associated with the first server in your list (i.e. the one the task was actually run for). So if it’s something that I just want run on the local server (e.g. open a new CM ticket to signify changes are starting), I usually create a play that runs only on localhost and run that before the main one. In the CM example, I would also create a third play at the end to close the ticket from localhost.
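A sketch of that three-play layout (the ticket commands and the `deploy` role are placeholders, not real modules or roles):

```yaml
- hosts: localhost
  tasks:
    - name: open CM ticket to signify changes are starting
      command: /usr/local/bin/open_cm_ticket    # hypothetical helper script

- hosts: appservers
  roles:
    - deploy                                    # the actual changes, on the managed nodes

- hosts: localhost
  tasks:
    - name: close the CM ticket
      command: /usr/local/bin/close_cm_ticket   # hypothetical helper script
```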

I agree that the library module vs. action_plugin split is confusing and counterintuitive, and hopefully this will be clarified in a future release (e.g. by eliminating the need for a library module when you already have an action_plugin).

It might help to browse through many of the core modules, and also through any corresponding action_plugins, to better illustrate the use cases.

run_once is not the most elegant thing; however, you can get some additional mileage out of it when used with local_action.

(I didn’t have John’s CSV module to test with, so I used the stat module to demonstrate the idea.)
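Something along these lines (a sketch; the path and variable names are just for illustration):

```yaml
- hosts: webservers
  tasks:
    - name: stat a file on the management machine, once for the whole play
      local_action: stat path=/etc/ansible/hosts
      run_once: true
      register: hostsfile      # with run_once the result should be visible to every host in the play

    - name: all hosts can now use the result
      debug: var=hostsfile.stat.exists
```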

Action plugins need modules, as these are then used to house your documentation.
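So the file in `library/` can be nothing more than documentation. A minimal sketch of such a stub for load_csv (the field values here are assumptions):

```python
#!/usr/bin/python
# library/load_csv -- stub module; all real work happens in action_plugins/load_csv.py

DOCUMENTATION = '''
---
module: load_csv
short_description: Load a CSV file on the management machine into a variable
options:
  file:
    description: Path to the CSV file on the management machine
    required: true
'''

EXAMPLES = '''
- load_csv: file=/path/to/data.csv
  register: data
'''
```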

Hi Timothy,

I thought include_vars only understood YAML and JSON? That’s what the docs seem to suggest. If it supports CSV, then it would almost certainly be of use to me.

(The ultimate win I suppose would be for Ansible to support pluggable data sources.)

My CSV file contains filesystem data as a proof of concept:

```
vg,lv,fstype,mount_point,size,owner,perms
myvg,jb,ext3,/mnt/jb,20M,root,755
myvg,jb1,ext3,/mnt/jb/jb1,12M,root,755
myvg,jb2,ext3,/mnt/jb/jb2,32M,root,755
```

I have a module which loads the above data into a dict and iterates over it to create logical volumes, filesystems and mounts.

The general idea behind this…

  • My team carry out many system deployments.

  • We are fed information from other teams as spreadsheets, e.g. lists of users/groups, software, filesystems, etc.

  • If Ansible can consume data in CSV format, we can formalise the spreadsheets supplied by other teams so that they export CSV, taking a lot of the manual effort out of our deployments.

The organisation is currently using Puppet and could use Hiera to achieve a similar result. But they are open to suggestions, and I think Ansible might be a better fit for their way of working.

The role tasks which use the CSV data are as follows:

roles/filesystems/tasks/main.yml

```yaml
# tasks file for the filesystems role

- name: load csv file
  load_csv: file={{csvfile}}
  register: filesystems

- name: create lvs
  lvol: lv="{{item.lv}}" vg="{{item.vg}}" size="{{item.size}}"
  with_items: "{{filesystems.data}}"

- name: create filesystems
  filesystem: dev="/dev/{{item.vg}}/{{item.lv}}" fstype="{{item.fstype}}"
  with_items: "{{filesystems.data}}"

- name: create mounts
  mount: src="/dev/{{item.vg}}/{{item.lv}}" name="{{item.mount_point}}" fstype="{{item.fstype}}" state=mounted
  with_items: "{{filesystems.data}}"
```

The action plugin uses the header in line 1 of the CSV to create a dictionary.

action_plugins/load_csv.py

```python
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.action import ActionBase

import csv
import os


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        if task_vars is None:
            task_vars = dict()

        super(ActionModule, self).run(tmp, task_vars)

        # 'file' is the path to the CSV on the management machine
        file = self._task.args.get('file')

        if file is None:
            return dict(failed=True, msg="file is required")

        try:
            file = os.path.expanduser(file)
            fobj = open(file, 'rb')
            reader = csv.DictReader(fobj)    # keys come from the header row
            data = [i for i in reader]
        except Exception as err:
            return dict(failed=True, msg="failed: %s" % err)

        return dict(failed=False, changed=False, data=data)
```
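With the sample CSV above, the registered `filesystems.data` comes back as a list of dicts keyed by the header row, roughly:

```yaml
filesystems.data:
  - { vg: myvg, lv: jb,  fstype: ext3, mount_point: /mnt/jb,     size: 20M, owner: root, perms: '755' }
  - { vg: myvg, lv: jb1, fstype: ext3, mount_point: /mnt/jb/jb1, size: 12M, owner: root, perms: '755' }
  - { vg: myvg, lv: jb2, fstype: ext3, mount_point: /mnt/jb/jb2, size: 32M, owner: root, perms: '755' }
```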

The top level playbook looks like this:

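A minimal sketch of what such a top-level playbook could look like (the host group and the csvfile path are assumptions):

```yaml
- hosts: all
  vars:
    csvfile: /path/to/filesystems.csv   # CSV lives on the management machine
  roles:
    - load_csv                          # ships the action plugin
    - filesystems                       # the tasks shown above
```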

Sorry if I wasn’t clear. I was referring to the include_vars module as an alternative model for what you are doing, in that it avoids the extra step of having to register the result to a variable. Instead, the CSV data could be returned and stored as facts. Given include_vars’ behaviour, this seems more intuitive to me as well.
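If the plugin took that route, the only change would be to hand the parsed rows back as facts rather than as a plain result, e.g. (a sketch):

```python
# instead of data=data plus a register in the play, return the rows as facts
# so later tasks can reference {{ filesystems }} directly
return dict(changed=False, ansible_facts=dict(filesystems=data))
```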

I believe a lookup plugin would be a better fit for loading a CSV file than a module or action plugin. It’s much easier to write, and you can get and use the data in a single task rather than having to register a result and use it later.
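A sketch of what that could look like (the plugin name `csv_rows` and the 2.x lookup API are assumptions to check against your Ansible version):

```python
# lookup_plugins/csv_rows.py (hypothetical)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import csv

from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase


class LookupModule(LookupBase):

    def run(self, terms, variables=None, **kwargs):
        # each term is a path to a CSV file on the management machine;
        # return one dict per row, keyed by the header line
        rows = []
        for term in terms:
            try:
                with open(term, 'r') as fobj:
                    rows.extend(list(csv.DictReader(fobj)))
            except Exception as err:
                raise AnsibleError("csv_rows lookup failed: %s" % err)
        return rows
```

A task could then iterate the rows directly with `with_csv_rows: "{{csvfile}}"`, with no separate load-and-register step.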

The csvfile lookup already exists.

Ahh, well, there you go.

Hi,

The csvfile lookup is only useful if you are looking up a known key, so it’s no use in my situation.

I need to read the whole file and iterate through the results using with_items.