I am one of the small-time contributors on the repo, adding some features where I need them.
Currently my testing methodology is very crude. I have a test playbook that exercises the module in various ways, and I hope it covers all the possible combinations. Once I am happy with the changes they get committed and the test playbook is deleted. How do you generally exercise your modules yourself?
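To give an idea, the kind of throwaway check my playbook does could just as well live in a small unit test next to the module, so the same assertions survive the commit. A minimal sketch, assuming the module exposes a pure helper (build_state below is made up for the example):

    # Minimal sketch: build_state() is a hypothetical stand-in for
    # logic inside the module under test.
    import unittest

    def build_state(present):
        # Hypothetical helper: map a boolean onto a state string.
        return 'present' if present else 'absent'

    class TestBuildState(unittest.TestCase):
        def test_present(self):
            self.assertEqual(build_state(True), 'present')

        def test_absent(self):
            self.assertEqual(build_state(False), 'absent')

    if __name__ == '__main__':
        unittest.main()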
The more I do this, the more it dawns on me that regressions are bound to happen. Given the high rate of change (around 20 commits per day) and the fact that testing is probably also done manually by the reviewers, I am afraid of the result. We're talking about changes that are potentially critical to an infrastructure.
Has there been any talk of introducing runnable tests for Ansible modules?
We set up Travis to run the unit tests automatically on merge. It is
still not at 100%: we sometimes break it when adding new tests, and
the v2 switch also broke some tests, some of which are simply invalid
while others fail because the underlying implementation work is not
finished yet.

Hopefully this will become more stable soon and you'll be able to rely
on those results going forward. Still, not all modules have tests, nor
do those that do have full coverage. We also have integration tests we
run internally, most dealing with clouds which require a paid account.
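As a sketch of how such cloud-dependent tests can coexist with a public CI run, tests that need paid credentials can skip themselves when none are configured (the environment variable below is only an example, not how our internal setup actually works):

    # Minimal sketch, assuming credentials arrive via an environment
    # variable; AWS_ACCESS_KEY_ID is just an illustrative choice.
    import os
    import unittest

    @unittest.skipUnless(os.environ.get('AWS_ACCESS_KEY_ID'),
                         'cloud credentials not configured')
    class TestCloudIntegration(unittest.TestCase):
        def test_instance_lifecycle(self):
            # Real assertions against the cloud API would go here.
            self.assertTrue(True)

    if __name__ == '__main__':
        unittest.main()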
I’m unsure what you are referring to. What are “doctests”? Are you referring to the examples provided with a module? If so then not really, that would be part of an integration test.
Outside of that I am not aware of any other tests. Eventually there will probably be some additional tests run via Travis, not to validate functionality but to validate that modules conform to requirements.
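(Possibly what was meant is Python's standard doctest module, which executes interpreter-session examples embedded in docstrings. A minimal sketch, with a made-up helper, runnable via `python -m doctest this_file.py -v`:)

    # Minimal doctest sketch: the >>> lines in the docstring are run
    # and their output compared against the lines that follow them.
    def to_state(present):
        """Map a boolean onto an Ansible-style state string.

        >>> to_state(True)
        'present'
        >>> to_state(False)
        'absent'
        """
        return 'present' if present else 'absent'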