passing "var" from one playbook to another playbook

hi ,

I’m trying to call playbook1.yaml from master-playbook.yaml, and I want to use a variable that is defined via set_fact in playbook1 inside the master playbook. I don’t know the correct way to do this; please help me with a good example.

playbook1.yaml:

  - name: local_ip_fact
    set_fact:
      local_ip: "{{ sites[site]['peer_ip'] | ipaddr('address') }}"
    when: site in sites.keys()

master-playbook.yaml:

here i want to use local_ip , into master playbook…
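
For context, within a single ansible-playbook run a fact set with set_fact persists for that host across later plays, so a master playbook can often just import the first playbook and use the fact directly. A minimal sketch (filenames and hosts are illustrative assumptions):

  # master-playbook.yaml -- sketch; assumes playbook1.yaml runs against the same hosts
  - import_playbook: playbook1.yaml

  - hosts: all
    tasks:
      - name: use the fact set in playbook1
        debug:
          msg: "local_ip is {{ local_ip }}"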

You need to use set_stats to pass "artifacts" back from a playbook to a workflow. Artifacts from upstream playbooks are added to extra_vars in downstream playbooks.

  set_stats:
    data:
      local_ip: "{{ sites[site]['peer_ip'] | ipaddr('address') }}"

You can set as many artifacts as you want in a single set_stats task. The artifacts of set_stats CANNOT be used within the playbook in which they originate; you have to use set_fact for that. So it is conceivable that you would use set_fact AND set_stats in a single playbook: the set_fact task creates a var you can use later within the playbook, and the set_stats task creates artifacts for use in downstream playbooks.
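
Combining the two in one playbook might look like this sketch (variable names follow the example above; the second task is hypothetical):

  - name: local_ip_fact
    set_fact:
      local_ip: "{{ sites[site]['peer_ip'] | ipaddr('address') }}"
    when: site in sites.keys()

  - name: publish local_ip as a workflow artifact
    set_stats:
      data:
        local_ip: "{{ local_ip }}"

Here local_ip is usable by later tasks in the same playbook (via set_fact) and by downstream workflow nodes (via set_stats).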

Walter

This doesn’t seem to be working for me; not sure if I’m missing anything here…

cat playbook1.yaml

Are you running this in Ansible Tower? This only works in Ansible Tower. It won’t work from the command line.

Walter

What exactly is the functionality provided by Tower only?

Because Ansible Tower lets you craft "workflows" that chain together playbooks. Command-line Ansible does not do that.

Walter

No, I’m not using Ansible Tower. I’m just building an umbrella-type structure: I need to call one playbook that defines a few variables which also need to be used in the master playbook. In other words, I’m importing one playbook within another playbook, and all the variables in the imported playbook need to be available too.

Hope this makes it clear!

With one monolithic playbook you can use task files for specific tasks just to break up the “long” file.

  - name: task 1 tasks
    include_tasks: taskfile1.yml

  - name: task 2 tasks
    include_tasks: taskfile2.yml

  - name: task 3 tasks
    include_tasks: taskfile3.yml

  - name: task 4 tasks
    include_tasks: taskfile4.yml
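
A task file is just a flat list of tasks with no play header; include_tasks splices it into the calling play. Hypothetical contents for taskfile1.yml, as one possible sketch:

  # taskfile1.yml -- illustrative example, not from the original thread
  - name: ensure chrony is installed
    package:
      name: chrony
      state: present

  - name: ensure chronyd is running and enabled
    service:
      name: chronyd
      state: started
      enabled: true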

We do this for our server provisioning. The task files can be short or long depending on what they do. In some cases we use roles because they are also used in other playbooks. Your main playbook looks cleaner this way, and it lets you add items as your service matures by simply developing the role or task file and adding another stanza like these:

  - name: install packages
    include_tasks: ./common/packages.yml

  - name: create local users and groups
    include_tasks: ./common/users_groups.yml

  - name: join to active directory
    include_role:
      name: nist-sssd-role

  - name: configure volumes and filesystems
    include_tasks: ./common/volumes.yml

Hope that helps.

Walter

Walter,

What exactly is the functionality provided only by Tower?

Thank you,

Best regards

Tower lets you manage inventories, create "smart inventories", run chains of playbooks together in workflows, and have a centralized "controller" that runs your jobs and centralizes logging. It lets you authenticate users against LDAP, AD, etc., create "surveys" that fill in vars used in your jobs, integrate with your version control system, and store secrets (credentials) that are passed to your playbooks. Centralizing all this on a single host makes your IT security team feel better too: all the security controls around your playbooks and their privileged execution are maintained in a single place they can more easily audit and assess.

Walter