Export environment variables

I need to export some environment variables before running a task, but it appears that setting variables with "environment" does not export them; they remain local. As a test, I wrote the following simple playbook:


- hosts: localhost
  gather_facts: no
  environment:
    MYVAR: "TEST1"
  tasks:
    - shell: echo "$MYVAR"
      register: mytest

    - debug:
        msg:
          - "Ansible MYVAR = {{ mytest.stdout }}"
          - "Exported MYVAR = {{ lookup('env', 'MYVAR') }}"

And when I run it with the shell MYVAR set, I get the following results:

export MYVAR=TEST2;ansible-playbook ~/devansible/playbooks/mytest

PLAY [localhost] ***************************************************************

TASK [shell] *******************************************************************
changed: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] =>
  msg:
  - Ansible MYVAR = TEST1
  - Exported MYVAR = TEST2

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

So, "environment" is not exporting the value as I expected.
The main issue I have is that I am trying to set SSH_AUTH_SOCK so that subsequent tasks can be executed on remote servers. In my playbook, I start ssh-agent locally, grab the SSH_AUTH_SOCK and SSH_AGENT_PID values, and place them into the Ansible environment. In the next task, I use ssh-add to add an SSH key to the now-running agent. That works perfectly, and in fact I can query the running agent from outside of Ansible and see the SSH key that was added inside the playbook. But any remote tasks I try to execute after adding the SSH key to the agent fail with "UNREACHABLE". Apparently the remote tasks use the original shell version of SSH_AUTH_SOCK to connect to ssh-agent, not the Ansible version. The Ansible version of SSH_AUTH_SOCK would need to be exported first.
Is there a way to actually export variables from within Ansible? I can always start ssh-agent prior to running Ansible so that the SSH_AUTH_SOCK environment variable will contain the correct path to the agent socket, but I was trying to avoid doing that for various reasons; I want to try to keep all of the steps, including starting and stopping the SSH agent, contained within a single playbook.
Is there a flag set or something I can do that will cause the variables to be truly exported?

lookups do not run on the target host, and are not affected by the environment keyword.

In my test, the target host is localhost, the same place that the lookup runs. But the lookup call was only to provide a demonstration of what is happening here and will not be used in my actual code.

Ansible uses some form of SSH (Paramiko, directly calling ssh, etc.), and that mechanism uses the value contained within SSH_AUTH_SOCK to determine how to talk to ssh-agent. But it is using the shell version of SSH_AUTH_SOCK, not the version that I have set within my Ansible playbook. I need a way to inform the SSH mechanism to use the Ansible version of SSH_AUTH_SOCK, not the shell version (preferred), or I need a way to push the Ansible version into the shell version (not as preferred).

Is there a way to do either of those?

Environment variables set using environment: only affect the tasks that are run on the target host. They do not affect how the target is communicated with, or Ansible itself.

You would have to configure SSH_AUTH_SOCK to the value you need in Ansible, before executing Ansible.

You are missing the point. I am not trying to set SSH_AUTH_SOCK on the target host. I am trying to set it on the Ansible host, so that the SSH mechanism on the Ansible host can use the value of SSH_AUTH_SOCK to connect to the ssh-agent that is running on the Ansible host. The target host is not involved at this point; it is all still local on the Ansible host. Once the SSH mechanism has connected to the local ssh-agent, it can use the SSH key that is contained within the ssh-agent to establish a connection to the target host. But the SSH mechanism is using the wrong version of SSH_AUTH_SOCK. I need a way to inform it of the Ansible version, or to override the value in the shell version with the Ansible version.

When I use the shell command on the Ansible host (delegate_to: localhost) to execute ssh-add, ssh-add uses the Ansible version of SSH_AUTH_SOCK to connect to ssh-agent. But if I try to run a task on a target, the SSH mechanism that connects to the target host is clearly using the shell version of SSH_AUTH_SOCK.

I know exactly the point you are making. I am telling you it is not feasible, and explaining to you what environment: does.

What you want, ansible cannot do. Environment variables needed for the operation of Ansible, or SSH, cannot be set by ansible. They must be set before you execute ansible. Ansible cannot impact its own execution, which includes the invocation of the SSH command from Ansible.

You will not be able to start the ssh-agent from Ansible and have Ansible use it, unless you specify a static path for the socket that can be determined before Ansible is started, and set SSH_AUTH_SOCK to that path before you execute Ansible.
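As a sketch of that workaround (the socket path and playbook name here are placeholders, not anything from this thread): bind the agent to a path that is known in advance, and export SSH_AUTH_SOCK before Ansible starts, so every ssh invocation Ansible makes inherits it.

```shell
# Placeholder socket path, agreed on before Ansible runs:
ssh-agent -a /tmp/my-agent.sock
export SSH_AUTH_SOCK=/tmp/my-agent.sock
ansible-playbook site.yml    # placeholder playbook name
```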

FOUND IT!!!

I added this to the playbook:

- name: Test connection
  command: id
  vars:
    ansible_ssh_common_args: "-o IdentityAgent={{ socket }}"

Ansible variable "socket" contains the path to the ssh-agent socket, the same value that is set in SSH_AUTH_SOCK. That string (-o IdentityAgent={{ socket }}) is passed to ssh when connecting to the target host, and ssh uses it instead of SSH_AUTH_SOCK. The connection is made and all is well! Adding "ansible_ssh_common_args" to the playbook vars section instead of at the individual task level works equally well.

The bottom line here is that I can now dynamically start ssh-agent, add keys to it, and then run tasks that use that ssh-agent to authenticate to hosts. The whole point of all of this is that I wanted to fetch keys from safe storage elsewhere and use them for client host access, all without ever writing the keys to a file on the Ansible host. I think I have found a method of doing that here.
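Putting it all together, a sketch of the pattern (the socket path, the my_private_key variable, and the remote task are placeholders, not my actual playbook; ssh-add reading a key from stdin via "-" is assumed to be supported by your OpenSSH version):

```yaml
- hosts: targets
  vars:
    agent_socket: /tmp/my-agent.sock   # placeholder socket path
    ansible_ssh_common_args: "-o IdentityAgent={{ agent_socket }}"
  tasks:
    - name: start ssh-agent on the Ansible host, bound to a known socket
      command: ssh-agent -a {{ agent_socket }}
      delegate_to: localhost
      run_once: true

    - name: add a key fetched from safe storage, never written to disk
      shell: echo "{{ my_private_key }}" | ssh-add -   # my_private_key is a placeholder
      environment:
        SSH_AUTH_SOCK: "{{ agent_socket }}"
      delegate_to: localhost
      run_once: true
      no_log: true

    - name: remote task authenticated via the agent
      command: id
```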

I have an env file on the target machine that contains a number of variables set with the export command (the export commands themselves are inside the file).

export AB_HOME=/et/dev/abinitio/abinitio-V3
export PATH=${AB_HOME}/bin:${PATH}

I executed the env file using the playbook below and tried to read the exported variables through output1, the register variable in my playbook. But the register variable is empty. Is there any way to get the variables that are exported? I don't know the names of the variables inside the file, so I cannot use the echo command. Is there any workaround for this?

- hosts: dev
  gather_facts: false
  tasks:   
    - name: get the environment variables
      shell: "su <id> & . ./.env"
      args:
        chdir: /path to the file
      register: output1

    - debug: var=output1.stdout_lines  


Please don't hijack an existing thread. You are less likely to get responses, and it is disruptive for those following the original thread.

Otherwise your approach doesn't even work from the shell on your target. To see the environment variables in the output,
you could use the printenv command.
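A sketch of that suggestion, using a hypothetical env file: the key point is that sourcing the file and printing the environment must happen in the same shell invocation, otherwise the exports are lost when that shell exits.

```shell
# Create a hypothetical env file (stand-in for the real .env on the target):
cat > /tmp/demo.env <<'EOF'
export AB_HOME=/et/dev/abinitio/abinitio-V3
export PATH=${AB_HOME}/bin:${PATH}
EOF

# Source the file, then print a variable it exported, in one shell:
sh -c '. /tmp/demo.env && printenv AB_HOME'
# prints /et/dev/abinitio/abinitio-V3
```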

Regards
         Racke

Sorry, I am new to this, so I posted my question here. I used printenv and env, but they show the remote machine's existing environment variables, not the variables exported by the file.


Please specify exactly what you are trying to achieve. You left me pretty much clueless ...

Regards
         Racke

Hi, thank you for your response.

I have an env file on the target machine that contains a number of variables set with the export command (the export commands themselves are inside the file).

export AB_HOME=/et/dev/abinitio/abinitio-V3
export PATH=${AB_HOME}/bin:${PATH}

I executed the env file using the playbook below and tried to read the exported variables through output1, the register variable in my playbook. But the register variable is empty. Is there any way to get the variables that are exported? I don't know the names of the variables inside the file, so I cannot use the echo command either.

- hosts: dev
  gather_facts: false
  tasks:
    - name: get the environment variables
      shell: "su <id> & . ./.env"
      args:
        chdir: /path to the file
      register: output1

    - debug: var=output1.stdout_lines

    - name: check status
      shell: "su <id> & "                        # command to check status
      environment: "{{ output1.stdout_lines }}"  # the environment in which the status command should run
      register: output2

    - debug: var=output2.stdout_lines
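One pitfall in the last task: environment: expects a mapping, while output1.stdout_lines is a list of strings. A sketch of how the two tasks could fit together (the path and status command are placeholders; the split filter assumes ansible-core 2.11+, and values that themselves contain "=" would need more careful parsing):

```yaml
- hosts: dev
  gather_facts: false
  tasks:
    - name: capture the variables the env file exports
      shell: ". ./.env && env"           # source and print in the same shell
      args:
        chdir: /path/to/the/file         # placeholder path
      register: output1

    - name: run the status check with those variables
      shell: "command_to_check_status"   # placeholder command
      environment: "{{ dict(output1.stdout_lines | map('split', '=')) }}"
      register: output2

    - debug: var=output2.stdout_lines
```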