Unable to parse /runner/inventory/hosts as an inventory source

Let me lay this all out in detail. I’m so close to figuring this out, but I’ve hit a roadblock. At this point I’m simply trying to automate any VMware process by way of AWX.

I have installed the following:

pip install pyvmomi
ansible-galaxy collection install community.vmware

Running AWX 23.6.0

My “inventory” for hosts is sourced from GitHub, and that sync works fine, pointing to /(project root).

Within my “project root” directory on GitHub, I have the following files:

inventory.ini:

[vcsa]
10.0.0.3

[esxi]
10.0.0.0
10.0.0.1

ansible.cfg:

[inventory]
enable_plugins = community.vmware.vmware_vm_inventory



vmware_vm_inventory.vmware.yml:

plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.0.0.3
username: <removed username here>
validate_certs: False
with_tags: False
with_folders: True

Now, within AWX, my Inventory and Projects items sync just fine.

I’m running AWX on Fedora, but I wouldn’t think that would be a big concern. If I perform a Dynamic Inventory for VCSA, it works fine.

For the life of me, I can’t get any playbooks to run, because the inventory file never gets parsed.

Here is the playbook:

---
- name: Add a distributed switch to vCenter datacenter
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Add distributed switch
      community.vmware.vmware_dvs_host:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        esxi_hostname: "{{ esxi_hostname }}"
        switch_name: "{{ switch_name }}"
        datacenter_name: "HOME-1"
        state: present

I feel like I’m 99% there on getting this to work. If not, I will just go back to using Python and PowerCLI, which I have no problem getting to work, but I really want to get AWX working.

@marshit can you screenshot or copy paste the error you are experiencing?

It’s not completely clear to me what you expect the behavior to be.

What I expect is the playbook to run against my vCenter host to create a distributed switch.

Here is the screenshot:

From the synopsis:

Uses any file which ends with vmware.yml, vmware.yaml, vmware_vm_inventory.yml, or vmware_vm_inventory.yaml as a YAML configuration file.

Your inventory isn’t named that way. I don’t know if that’s the issue, but that’s what jumps out at me.

EDIT: Looks like that’s it.

    def verify_file(self, path):
        """
        Verify plugin configuration file and mark this plugin active
        Args:
            path: Path of configuration YAML file
        Returns: True if everything is correct, else False
        """
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith(('vmware.yaml', 'vmware.yml', 'vmware_vm_inventory.yaml', 'vmware_vm_inventory.yml')):
                valid = True

        return valid

@utoddl Hello Todd, just to clarify: you’re saying I may have things backwards.

My “vmware_vm_inventory.vmware.yml” should look like this:

[vcsa]
10.0.0.3

[esxi]
10.0.0.0
10.0.0.1

I can create another file maybe called vars.yml, and add the following:

plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.0.0.3
username:
validate_certs: False
with_tags: False
with_folders: True

Then, based on that, I don’t need the “inventory.ini” file at all, correct?

All I can say for certain is we’re both confused.

The first bit you showed above looks like an INI-style inventory, not valid YAML, so I’m pretty sure that’s wrong; at least, it’s not the .yml file the plugin wants.

The second bit, starting with plugin:, looks like Parameters, which that page says is what’s supposed to be in vmware_vm_inventory.yml (or one of those similar names). You configure the location of your inventory file (which is really the config file for the vmware_vm_inventory plugin) in your ansible.cfg, probably right after enable_plugins = ….

You can have multiple inventory sources, so the inventory.ini you started with should be fine, but you need to configure them both in your [inventory] section, or with -i on the command line (multiple times if needed) or through job templates in AWX.

I changed my inventory file back to “inventory.yml”.

all:
  children:
    vcsa:
      hosts:
        vcsa.mars12.local:
          ansible_host: 10.0.0.3
    esxi:
      hosts:
        10.0.0.1:
        10.0.0.2:
    vms:
      hosts:
        10.0.0.4:
        10.0.0.5:
        # ... and so on for each host in the 10.0.0.* range

Then I left my “vmware_vm_inventory.vmware.yml” as is:

plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.0.0.3
username:
validate_certs: False
with_tags: False
with_folders: True

Within AWX, my Inventories item syncs fine with GitHub, which is what I expected.

The problem still is the same when I then go and run a Template job to create a distributed switch.

Something is still causing AWX to be unable to parse the inventory file located on GitHub, even though my inventory sync within AWX works fine.

[WARNING]: Unable to parse /runner/inventory/hosts as an inventory source
ERROR! No inventory was parsed, please check your configuration and options.

My “ansible.cfg” file still looks the same as before.

ansible.cfg:

[inventory]
enable_plugins = community.vmware.vmware_vm_inventory

Hope that explains a little better where I’m stuck. I feel like it’s something small I need to change, but heck if I know at this point. I have tried many things thus far.

Your [inventory] section still doesn’t point to an inventory.

[inventory]
enable_plugins = community.vmware.vmware_vm_inventory
inventory = path/to/vmware_vm_inventory.vmware.yml

Ironically, that ansible configuration page shows an “inventory =” example in the first Note: on that page, but it never defines “inventory” as a valid keyword! I’m not sure how you would set it for multiple paths, but I suspect you can use a comma-delimited list. That would let you include both your .ini file and your vmware.yml.
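For what it’s worth, I believe the `inventory` keyword is actually documented under the `[defaults]` section of ansible.cfg (it maps to DEFAULT_HOST_LIST), and it does accept a comma-separated list. Assuming both files sit in the project root, a combined config might look something like this:

```ini
[defaults]
# Comma-separated list of inventory sources, relative to the project root.
inventory = inventory.ini,vmware_vm_inventory.vmware.yml

[inventory]
# Keep the builtin ini plugin enabled alongside the vmware plugin;
# if enable_plugins lists only the vmware plugin, the .ini file
# can no longer be parsed at all.
enable_plugins = ini, community.vmware.vmware_vm_inventory
```

Note the second point: listing only `community.vmware.vmware_vm_inventory` in `enable_plugins` disables every other inventory plugin, including the ini parser.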

Let me add that line and see if it works.

Is that what it should say verbatim, “path/to/”, since all the files are in the root of the GitHub repository?

Ironically, the Inventory item in AWX looks like it’s working and says it’s successfully pulled from GitHub. But when I look at the logs, it still says it can’t parse anything.

I fear we’re mixing things up. If you’re running a playbook from the CLI with ansible-playbook, then your “inventory sources” (possibly multiple files) are combined into an “inventory” — a logical level we mostly ignore because in the simple case inventories and inventory sources map 1:1.

In AWX, you define an inventory as a separate “thing”, with one or more inventory sources. Some of those inventory sources may be files pulled in from projects, or they can be scripts, or blobs of config like for your vmware case.

Then (still in AWX) you define a project (from gitlab, github, or other places). Within that project you define a job template. The job template gets specific about, among other things, what inventory it’s supposed to use.

Now here’s where it all gets guessworthy! AWX spins up a separate execution environment for each job. That ee gets customized before it starts running. One of the things that happens is running ansible-galaxy to satisfy any ./roles/requirements.yml and ./collections/requirements.yml that aren’t already satisfied. The other major bit is to populate /runner/project/inventory.yml with whatever your job template has defined as the inventory it’s supposed to use.

!!! At least, that (↑) is what my mental model of AWX job setup looks like. It may even be somewhat accurate. !!!

So until you get your AWX inventory defined with its inventory sources, and associate that inventory with a job template, it doesn’t matter what files you have in your SCM project (the Source Controlled/Managed(??) github thingy). It won’t affect what ends up in the ee’s /runner/project/inventory.yml. That’s built from all those other bits I just listed.

Well, looks like I’m wrong. (Not the first time. Today even.)

/runner/project seems to be the root of your SCM project, and it took your top level ansible.cfg as one would hope.

So maybe you don’t have to define a separate inventory and inventory sources if they all come from the same project.

Live and learn. Thanks for showing me that.

In my experience, we started out with what I’d call a “fully articulated” AWX organization – i.e. separate inventories for loosely related groups of SCM projects. I’ve never seen one where everything is rolled into one like that. Interesting.

Well, thanks again for the feedback. It would seem it’s a simple resolution, but darn it, I just can’t figure out what direction to go at this point. I’m starting to feel like I will forget about Ansible and go back to the good old PowerCLI scripts I know work.

I think: why use Ansible playbooks, even on a controller node, if I can just use PowerCLI? But I’m pushing myself to do this and learn with AWX.

But since most people, i.e. companies, love AWX and use VMware, I would like to get this to work. You may have something with that execution environment statement, hmmm…

Okay. I think I understand a lot more about where you are now, so maybe we can make better guesses. :slight_smile:

Here’s what I’d try:

  • Lose the ansible.cfg in the top level of your SCM project. I don’t think it’s contributing anything.
  • Set your inventory in your job template to that “VMware (Git)” inventory you showed in the screenshot above.
  • Launch the job and see if you get past the inventory issue. I think it should.

Hmmm, now we look to be moving along. Now it seems I need to clean up my playbook. But that’s the easy part…

:grinning:

PLAY [Add a distributed switch to vCenter datacenter] **************************
TASK [Add distributed switch] **************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (community.vmware.vmware_dvs_host) module: datacenter_name. Supported parameters include: esxi_hostname, hostname, lag_uplinks, password, port, proxy_host, proxy_port, state, switch_name, username, validate_certs, vendor_specific_config, vmnics (admin, pass, pwd, user)."}
PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
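Judging from that error, the fix is likely to drop `datacenter_name` (vmware_dvs_host attaches a host to an existing distributed switch, so the datacenter isn’t part of its parameters) and, if the host’s physical uplinks should be attached, list them under `vmnics`. A sketch, where `vmnic1` is a hypothetical uplink name you’d adjust for your hosts:

```yaml
- name: Add host to distributed switch
  community.vmware.vmware_dvs_host:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    esxi_hostname: "{{ esxi_hostname }}"
    switch_name: "{{ switch_name }}"
    vmnics:
      - vmnic1   # hypothetical physical uplink; adjust to your host
    state: present
```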

@utoddl Funny, now I’m back to the same problem as before, where new playbooks don’t always show up in AWX. It’s not a problem I run into every time, but occasionally it happens.

I would say your last response of removing the “ansible.cfg” file solved my problem. Now I have another problem with new files not showing up in AWX. I will deal with that in another forum topic.

Thanks for your help!

Glad it worked out. I learned a bit, too.

As for new playbooks not showing up, there are several ways to address that.

You can always go to the relevant AWX project page and click on the “sync project” icon in the “Actions” column. Check the “revision” column first, though, to see if it’s really necessary.

Another thing to consider is to check the box labeled “Update Revision on Launch” in your AWX project configuration. That will cause AWX to ensure it has fetched the latest content you’ve pushed to your SCM (git) before every launch of a job from one of that project’s templates. If you make way more updates than you run jobs, this might make sense. But if you run jobs much more frequently than you update your repo, maybe not. This won’t solve the problem of brand new playbooks not being selectable in a job template edit or survey – you’ll need to cause an SCM update manually for that – but it will address the “missing updates to content” problem.

We got tired of the whole problem and solved it another way. Our gitlab webhooks fire off a job which uses the AWX api and git to go through all our AWX projects. If one of them either references the updated repo directly, or the project it does reference has a requirements.yml that references the updated repo, then that job will cause AWX to run an SCM update for that AWX project. That works remarkably well. Since we started doing that, we haven’t had to bother thinking about the problem. I remember how frustrating it was though, to make urgent changes, commit, and run a job in a hurry only to find that AWX hadn’t gotten the memo about the change. Glad those days are behind us.
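As a rough sketch of the kind of API call those scripts end up making (the host and project id here are placeholders, and the real scripts do more discovery and filtering with jq first):

```shell
#!/bin/sh
# Sketch: trigger an SCM update for one AWX project via the REST API.
# AWX_HOST and the project id (42) are placeholders for illustration.
AWX_HOST="awx.example.com"
PROJECT_ID=42
URL="https://${AWX_HOST}/api/v2/projects/${PROJECT_ID}/update/"
echo "$URL"
# The real call needs a valid OAuth2 token in AWX_TOKEN:
# curl -s -X POST -H "Authorization: Bearer ${AWX_TOKEN}" "$URL"
```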

Yes, I always sync over and over again. As a rule of thumb, I always have “Update Revision on Launch” checked.

Now, for your last paragraph about using the API, is this what you mean? 22. Working with Webhooks — Ansible Tower User Guide v3.8.6

If so, I can set that up.

That’s very interesting; I’d never seen webhook credentials in AWX. Thanks for pointing that out.

We actually have our gitlab webhooks triggering Jenkins jobs (bash scripts) which do the AWX api work with curl and jq. They also run ansible-lint, and do a few other tricks, although not as many lately since ansible-lint has become so capable.

We’re looking at reducing our Jenkins footprint, though, and at doing more with gitlab runners. There are lots of ways to approach these problems now. Interesting times.