However, when I run my kubernetes scenario, it wants to use the ansible_inventory.yml from its own ephemeral directory, which does not contain the populated inventory file listed above.
Before this, I used the inventory directory extensions/molecule/inventory. This is accessible from all scenarios, so the shared-state feature does work. However, this inventory does not get updated with the dynamic IP addresses of my instances. I am currently using libvirt, but this could also change to vSphere or AWS.
I had hoped that Molecule somehow combines the static inventory (with the test instances' hostnames and variables) with the dynamic inventory from the ephemeral directory, or somehow reuses the IP addresses from instance_config.yml to update the static inventory in memory before running the playbooks.
So how would I need to set up an Ansible-Native configuration with shared state and dynamic IP addresses?
For the create phase, you implement a playbook that does all the provisioning of instances and/or containers and generates my_inv.yml.
The other phases read the generated inventory to run the tests against it.
In theory, your my_inv.yml can be a static file that uses an appropriate inventory plugin to dynamically inspect the parameters of your newly provisioned instances. This can work for AWS, for example:
Your create.yml playbook provisions the instances using the ec2_instance module. No additional work is needed.
Once your converge.yml playbook runs, Ansible will use an inventory based on the aws_ec2 inventory plugin to dynamically retrieve instance information. There is no need to persist, transfer, or share any inventory-related information between these phases.
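For illustration, such an inventory file could look like the sketch below. The region and tag filter are assumptions (they presume create.yml tags each instance with molecule=true), and the aws_ec2 plugin expects the file name to end in aws_ec2.yml:

```yaml
# my_inv.aws_ec2.yml -- minimal sketch, not a complete configuration.
plugin: amazon.aws.aws_ec2
regions:
  - eu-central-1
filters:
  tag:molecule: "true"              # only pick up instances provisioned for testing
compose:
  ansible_host: public_ip_address   # connect via the dynamically assigned IP
```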
I believe I did not describe my problem well. My problem is not sharing an inventory between the phases (e.g. create, converge, destroy) but between scenarios (e.g. default, kubernetes, etc.).
Multiple scenarios will create multiple ephemeral directories in the Ansible temporary directory:
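For illustration, the layout looks roughly like this (the directory names are placeholders; the real ones are scenario-specific):

```text
<ansible-tmp>/
├── <dir-1>/state.yml                        # shared state: resource stack created
├── <dir-2>/inventory/ansible_inventory.yml  # default scenario (create/destroy)
└── <dir-3>/inventory/ansible_inventory.yml  # kubernetes scenario (converge)
```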
There you can see that the first directory holds a state.yml file, which tells Molecule that the “resource stack” (e.g. the virtual machines) has been created.
The second directory tracks the inventory of the default scenario, which is used to create and destroy the virtual machines. This scenario has only the create and destroy stages, as described in the documentation.
The third directory is used to run the actual converge stage of the kubernetes scenario.
What works
I can create virtual machines with libvirt. They get dynamic IP addresses, and I can do molecule login. I can also destroy those machines. The ephemeral directory also contains the generated inventory with all necessary variables.
What doesn't work
The ephemeral directory of the kubernetes scenario contains another inventory. This inventory does not have the variables required to connect to the virtual machines (e.g. IP addresses, ansible_user).
My question
How can the kubernetes scenario reliably connect to the virtual machines to run the converge phase? Or how can the kubernetes scenario reuse the inventory of the default scenario?
According to the Molecule documentation, this should be possible somehow. I fear it is just not explained in a way I can understand.
Yes, I am mixing both the old and the new way. If this is not recommended, how would I then spawn test resources? In my understanding, the whole point of Molecule is to spawn test resources and run automated playbooks against them.
As far as I understand, the “old way” with provisioners is deprecated and will be removed in future versions. Hence the benefit of Molecule would be gone if provisioning were no longer part of it. Not blaming here, I just want to know whether I understand this correctly.
I don’t see your driver section, so I don’t know how you configured Molecule to use libvirt.
Anyway, you cannot mix the traditional and the Ansible-Native approach. As soon as Molecule sees a platforms section in your config, it assumes you are using the traditional approach.
I think your example is one of those that showed the weakness of traditional Molecule. Many things, including how drivers worked, were hard-coded and could not be worked around.
So with the Ansible-Native approach, instead of using a driver (libvirt in your example), you effectively implement your own by writing a create.yml playbook that provisions your test instances. The Molecule drivers were Ansible playbooks anyway, but with the Ansible-Native approach you now have complete control over the provisioning process.
So what I initially said still holds. With the Ansible-Native approach you have to:
Implement your own provisioning process in a create.yml playbook. In your case you would use the community.libvirt.virt module to get your instances up and running. You can use the create.yml playbook from the existing libvirt Molecule driver as inspiration:
Just a warning: it is overly complex because it tries to cover a lot of different environments and requirements. You can probably make yours much simpler.
In the same create.yml playbook, implement a few tasks that generate your inventory (write a file) at a location that is accessible to, and referenced by, all scenarios; see the sketch after this list. Alternatively, you can use the community.libvirt.libvirt dynamic inventory plugin to dynamically discover your instances.
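As a rough sketch of both points together (not the actual driver playbook; the image template, VM name, user, and output path are all assumptions):

```yaml
# create.yml -- minimal Ansible-Native create playbook for libvirt.
# All names and paths are illustrative.
- name: Create
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Define the test VM from a domain XML template
      community.libvirt.virt:
        command: define
        xml: "{{ lookup('template', 'vm.xml.j2') }}"

    - name: Start the test VM
      community.libvirt.virt:
        name: molecule-test-vm
        state: running

    # ... wait for the VM to boot and discover its IP address here,
    # e.g. by polling the libvirt DHCP leases, then register it as vm_ip ...

    - name: Write a shared inventory that all scenarios can reference
      ansible.builtin.copy:
        dest: "{{ playbook_dir }}/../inventory/generated_inventory.yml"
        content: |
          all:
            hosts:
              molecule-test-vm:
                ansible_host: "{{ vm_ip }}"
                ansible_user: molecule
```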
If you want to stick to the traditional way, try creating a per-scenario molecule.yml that contains the following section:
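A sketch of such a section, using the traditional provisioner's inventory links feature (the path is an assumption):

```yaml
# molecule/<scenario>/molecule.yml -- traditional-style provisioner section
# linking every scenario to one shared inventory file (path assumed).
provisioner:
  name: ansible
  inventory:
    links:
      hosts: ../../extensions/molecule/inventory/hosts.yml
```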
I think this documentation page illustrates the Ansible-Native approach to inventory handling well:
It uses containers as test instances, but the only thing you have to change is which module is used for instance provisioning, e.g. community.libvirt.virt instead of containers.podman.podman_container as shown in the example. Of course, there is additional work to do, but you can use it as a skeleton for your create.yml playbook.
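The swap would look something like this (task parameters are illustrative):

```yaml
# Provisioning task in the style of the documentation example (podman):
- name: Create test instance
  containers.podman.podman_container:
    name: molecule-instance
    image: registry.example.com/test-image:latest   # illustrative image
    state: started

# The libvirt equivalent (assumes the domain is already defined):
- name: Start test instance
  community.libvirt.virt:
    name: molecule-instance
    state: running
```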
I did not configure the driver since I don’t want to use the original libvirt driver, so it is probably the default. I have used both the new and the old way in a project inside my company, and it works without problems. The only difference here is the shared_state option, which is true for this project.
I could have shared this earlier; I am posting it now.
Okay, what I understood is that I need to create a second inventory inside my create playbook, which will then be used by the other scenarios. Reading the documentation, I was unsure whether this is the intended way.
I will go this route and implement it to see how it actually works. Thanks for your input so far.
Will report back when I have closure.
So it seems that shared_state does not work correctly in general, as the inventory is not shared. This is unlikely to be fixed because it is specific to the traditional approach; with the Ansible-Native approach you must handle this yourself anyway.
Now that I see your create.yml, it makes sense how you ended up in a situation where you don’t use any driver (it defaults to delegated) but still have an instance_config.yml created, which is specific to driver usage. Your create.yml explicitly creates it (the last two tasks). It looks to me as if they were copy-pasted from the libvirt driver (AI-generated?). instance_config.yml is part of the communication protocol between Molecule and a driver, but it is meaningless with the Ansible-Native approach. You make your own protocol, including how inventory is shared between scenarios.
I personally use shared_state for testing, but with systemd-enabled Docker images, so things are much simpler and no fiddling with inventory is necessary.
I’ve solved my problem with the unshared inventory. Instead of instance_config.yml, I now write a second inventory file into extensions/molecule/inventory/, generated from the created instances. This forces me to add a .gitignore entry for the generated inventory, but for now it solves the problem of keeping the inventory identical across scenarios.
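The generated file ends up looking roughly like this (host name, IP, and user are illustrative), with a matching entry for it in .gitignore:

```yaml
# extensions/molecule/inventory/generated_inventory.yml
# Written by create.yml during the default scenario; values are illustrative.
all:
  hosts:
    k8s-node-1:
      ansible_host: 192.168.122.45   # dynamic IP discovered during create
      ansible_user: molecule
```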
The ticket you mentioned is the reason I registered on this forum, since there are no answers on the ticket that helped me. I would like this issue to be solved so that I can remove my workaround.
Unfortunately, without the platforms key in molecule.yml I can no longer use molecule login, since that method does not read the inventory and is thus not supported with the Ansible-Native way. However, this is a small price to pay for the freedom of having a working libvirt setup that I can maintain easily.
I’ve pushed the code to the repository mentioned above if you want to take a look. I will probably write a blog post in the next few days to document my findings. Again, thanks for your help.