I am struggling to break out the variables in the “vars” directory of my roles into individual files.
I have tried, and I continue to get tracebacks. I thought this would be straightforward after seeing the documentation for include_vars, but evidently I am missing something here.
I was trying something like this with Ansible 1.4.4:
> ansible-playbook vm.yml
Traceback (most recent call last):
  File "/usr/bin/ansible-playbook", line 269, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/usr/bin/ansible-playbook", line 209, in main
    pb.run()
  File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 229, in run
    play = Play(self, play_ds, play_basedir)
  File "/usr/lib/python2.6/site-packages/ansible/playbook/play.py", line 83, in __init__
    ds = self._load_roles(self.roles, ds)
  File "/usr/lib/python2.6/site-packages/ansible/playbook/play.py", line 327, in _load_roles
    roles = self._build_role_dependencies(roles, [], self.vars)
  File "/usr/lib/python2.6/site-packages/ansible/playbook/play.py", line 192, in _build_role_dependencies
    role_vars = utils.combine_vars(vars_data, role_vars)
  File "/usr/lib/python2.6/site-packages/ansible/utils/__init__.py", line 1008, in combine_vars
    return dict(a.items() + b.items())
AttributeError: 'list' object has no attribute 'items'
PLAY [admin-vm] ***************************************************************
TASK: [provision | Creating virtual machine instances] ************************
fatal: [10.0.0.6] => One or more undefined variables: 'centos64' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/vm.retry
10.0.0.6                   : ok=0    changed=0    unreachable=1    failed=0
I have tried using both relative and absolute paths to imagenames.yml, with no luck.
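(For reference, that combine_vars traceback is what you get when a role's vars file parses to a YAML list instead of a dictionary, e.g. when a task-style include is placed in vars/main.yml. A minimal illustration, with role and file names taken from the thread but the contents hypothetical:)

    # roles/provision/vars/main.yml -- WRONG: this is a YAML list, not a mapping
    - include: imagenames.yml
    # combine_vars() then fails with "'list' object has no attribute 'items'",
    # and since imagenames.yml is never actually loaded, 'centos64' later comes up undefined.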
Another approach that is working for me is to pass in a vars file that points to all the other vars files you may need, the ones that change from run to run.
The use case is creating clusters of Hadoop-based apps that have a high degree of configuration for various environments.
I find Ansible’s variable architecture ideal for setting up a situation that is more or less the same from run to run.
But when you have network variables that change from one datacenter to the next, or a cluster configuration that depends on staging/prod/dev,
I find I need a more robust and de-coupled way of passing in re-usable but swappable configs.
Note that you could do this by putting all the vars into group_vars/cluster_name.yml, but then you have to get creative with groups or lose the re-usability of components that can be shared between groups.
Here’s how I do it.
cluster_cards/deployment_name/cluster_config.yml in my Ansible playbook directory contains:
    network_config: "path to networking details for deployment"
    hadoop_config: "path to architecture of hadoop cluster"
    environment_config: "path to file with specific dev/prod config stuff"
Then, in site.yml (or in a tasks file), you load them with include_vars.
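A sketch of how this can be wired up, assuming the deployment name is passed on the command line with -e; the task layout and paths here are illustrative:

    ansible-playbook site.yml -e deployment_name=my_hadoop_cluster

    # tasks in site.yml, or in a role's tasks file:
    - include_vars: "cluster_cards/{{ deployment_name }}/cluster_config.yml"
    - include_vars: "{{ network_config }}"
    - include_vars: "{{ hadoop_config }}"
    - include_vars: "{{ environment_config }}"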
If you want to include multiple files and have them associated with the role rather than the play, the best way to do this is to use the include_vars module inside a task file.
There’s an open RFE to include every file in “vars/” automatically, with “main” coming first, but I’m thinking we’re likely to close that idea entirely, as conditional includes are useful things.
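A minimal sketch of that pattern inside a role; file names are illustrative, and the relative paths assume they resolve against the role on your Ansible version:

    # roles/provision/tasks/main.yml
    - include_vars: imagenames.yml
    - include_vars: "{{ ansible_distribution }}.yml"   # e.g. CentOS.yml -- a conditional-style include keyed on a fact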
I had most of the items using include_vars to begin with, but I make heavy use of --start-at-task while developing scripts / debugging.
It would be nice if include_vars at the top of the file would get loaded even when starting at a task below that.
It’s hard to imagine a case where that would be bad.
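(For context, that workflow looks something like this; the task name is taken from the output earlier in the thread:)

    ansible-playbook vm.yml --start-at-task="Creating virtual machine instances"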
" It would be nice if include_vars at the top of the file would get loaded even when starting at a task below that."
Wherever the include_vars sits, its variables are definitely injected into the namespace by the time a task below it is executed.
If you mean you want an include_vars written after a task to apply to a task before it, that can’t happen, because the path may be derived from a registered variable, a call to a custom fact module, etc.
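A sketch of the ordering dependency being described, with hypothetical names:

    - command: cat /etc/cluster_tier          # hypothetical source of the environment name
      register: tier
    # this path cannot be known until the task above has run,
    # so the include_vars cannot be hoisted ahead of it:
    - include_vars: "{{ tier.stdout }}.yml"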
I wanted to jump back into this conversation. It has been interesting.
I did like Kesten’s use of command line variables, but if I read this right, it looks like it is better practice to have conditional inclusion of variables that are specific to a particular use of a role.
We certainly have use cases like Kesten’s where I want to reuse a role but change variables. For instance, when using the Nova compute module in a role to provision, it is ideal to have flexible variables to define the user’s OpenStack environment and project details like image IDs, security groups, and other user-specific configuration.
The ability to include variable files via main.yml in the “vars/” directory of a role seems like it would be an improvement, and would be more consistent with the behavior of “tasks/”, but perhaps there is a specific reason this was not implemented from the get-go.
"The ability to include variable files via the main in the “vars/” directory of a role seems it would be an improvement, "
It wouldn’t, and it provides a bit too much of a “more than seven ways to do this” kind of thing.
If a vars file is defining a block of variables, having an “include” in the middle of the data means you are now defining a way for one data structure to reference other data structures by naming the specific files they live in. And how do you distinguish a valid data structure that happens to have a key named “include:” from an actual request to include a file?
It muddles the semantics.
include_vars is going to be the way we do this for multiple files.
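A hypothetical vars file that shows the ambiguity:

    # vars/main.yml -- is "include" a directive to pull in another file,
    # or just a data key that this role's templates expect?
    haproxy:
      include: listeners.cfg
      maxconn: 4096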
Then help me understand how this is any different from includes in a main task file?
There could be a block of tasks with an include in the middle, and it also means the behavior is inconsistent across the role sub-systems. What did I do when I wanted to break out variables and organize them logically in my vars/main.yml? I followed the logic of tasks/main.yml, where tasks are broken out with includes.
It’s fine. As this thread demonstrates, there are already a number of ways to do this. From a user’s perspective what I suggested makes sense to me, but perhaps I should fall in line with the existing semantics.
Ansible playbook/task files ARE YAML, but they get processed a second time by the Ansible engine. Vars files do not: they are only processed by the YAML parser, and they were only ever meant to be static data files.
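In other words, roughly (illustrative file contents):

    # roles/foo/tasks/main.yml -- evaluated by the Ansible engine,
    # so "include:" here is an instruction:
    - include: provision.yml

    # roles/foo/vars/main.yml -- read once by the YAML parser as static data,
    # so a key named "include" here would just be data:
    image_name: centos64
    flavor: m1.small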