Delegate_to: localhost is not working as expected

Dear community,

Looks like I am stuck in a loop. I have a project whose main playbook calls a role that establishes the GCP connection using IAP.

My main playbook is:

- name: Establish GCVE connectivity and run remote command on all hosts
  hosts: all
  gather_facts: false
  vars:
    gcp_project: "{{ hostvars[inventory_hostname]['project'] }}"
    gcp_zone: "{{ hostvars[inventory_hostname]['zone'] }}"
    gcp_instance_name: "{{ hostvars[inventory_hostname]['name'] }}"
  tasks:
    - name: Importing task for GCP Auth on localhost
      include_role:
        name: gcp_connectivity_role
    - name: Run remote command via SSH through IAP tunnel
      ansible.builtin.command: "uname -a"
      delegate_to: localhost

And my role gcp_connectivity_role is:

---
- name: Run GCP tasks via gcloud
  block:
    - name: Set facts from credential and vars
      ansible.builtin.set_fact:
        gcp_sa_json_file: "{{ lookup('env', 'gcp_sa_json') }}"
        gcp_impersonate_sa: "{{ lookup('env', 'gcp_sa_user_id') }}"

    - name: Check current authenticated service account
      ansible.builtin.shell: gcloud auth list --filter=status:ACTIVE --format="value(account)"
      register: current_auth
      delegate_to: localhost

    - name: Authenticate only if not already authenticated
      ansible.builtin.shell: gcloud auth activate-service-account --key-file={{ gcp_sa_json_file }}
      when: current_auth.stdout | trim != gcp_impersonate_sa
      register: auth_result
      failed_when: auth_result.rc != 0
      delegate_to: localhost

    - name: Set the GCP project
      ansible.builtin.shell: gcloud config set project {{ gcp_project }}
      register: project_result
      failed_when: project_result.rc != 0
      delegate_to: localhost

    - name: Connect to GCP instance via SSH
      ansible.builtin.shell: >
        gcloud compute ssh {{ gcp_instance_name }}
        --zone={{ gcp_zone }}
        --impersonate-service-account={{ gcp_impersonate_sa }}
        --command="hostname"
      register: ssh_result
      failed_when: ssh_result.rc != 0
      delegate_to: localhost

So here are a couple of issues/challenges I am facing.

  1. delegate_to: localhost is not working as expected. It should run in my custom execution pod, but it is running on the target host instead, and I am not sure why.
  2. I want to read the hostvars while on the target host, then switch to localhost for the gcloud connection, and then switch back to the target host to execute commands (see the sketch after this list). How can we achieve this? I tried several methods, such as connection: local and delegate_to: localhost or 127.0.0.1, but nothing works. Please suggest how we can achieve this.
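To make the flow I am after concrete, here is a rough sketch of what I would like to end up with (illustrative only, this is not working code):

- name: Desired flow (sketch)
  hosts: all
  gather_facts: false
  tasks:
    # Should run once on the execution pod, not on every target host
    - name: Authenticate gcloud on the execution pod
      ansible.builtin.shell: gcloud auth activate-service-account --key-file={{ gcp_sa_json_file }}
      delegate_to: localhost
      run_once: true

    # Should run on each target host over the IAP tunnel
    - name: Run the actual command on the target host
      ansible.builtin.command: uname -a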

Thanks and Regards
Saravana Selvaraj

@kurokobo, please help here as well if you can. I am really stuck with this.

What is the output of ansible-inventory --host localhost?

Hello @bcoca, thanks for the response. This is the output:

ansible-inventory --host localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "ansible_connection": "local",
    "ansible_python_interpreter": "/usr/bin/python3"
}

And it looks like this is working as expected and being executed on localhost. But can you please help me with the issue below? It is a blocker for me right now.

I'm sorry, I forgot to ask you to also share the inventory you are using with -i.

An avenue I think may work for you is the ProxyCommand SSH configuration option. You should be able to glean enough information from the PuTTY documentation and man 5 ssh_config (search for ProxyCommand).
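For illustration, the ssh_config shape would be something like this (untested sketch; VM_NAME, PORT_NUMBER, PROJECT_ID, and ZONE are placeholders for your values):

Host my-gcp-vm
    ProxyCommand gcloud compute start-iap-tunnel VM_NAME PORT_NUMBER --listen-on-stdin --project=PROJECT_ID --zone=ZONE

With that in place, a plain ssh my-gcp-vm should go through the IAP tunnel instead of connecting directly.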

Pair this with a per-VM definition of the Ansible variable ansible_ssh_args, something like:

---
ansible_ssh_args: -o ProxyCommand='gcloud.cmd compute start-iap-tunnel VM_NAME PORT_NUMBER --listen-on-stdin --project=PROJECT_ID --zone=ZONE'

You'll of course want to ensure the gcloud CLI is initialized / authenticated on the control node (the pod, in AWX) first. I'm not sure whether delegating to localhost in pre_tasks will be enough in the case of AWX; you may have to do something first in the pod specification.
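As a sketch of the pre_tasks idea (untested in AWX; it assumes the service-account key file is already available inside the pod, and the path below is a placeholder):

- name: Prepare gcloud before any host connections
  hosts: all
  gather_facts: false
  pre_tasks:
    # Runs once on the control node / execution pod before target hosts are touched
    - name: Activate the service account for gcloud
      ansible.builtin.command: gcloud auth activate-service-account --key-file=/path/to/key.json
      delegate_to: localhost
      run_once: true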

Sure @wayt, thanks for the response; I will try it out. I am using a dynamic inventory. My source is:

---
projects:
  - XXX
  - XXXX 
hostnames:
  - name
compose:
  ansible_host: networkInterfaces[0].networkIP
  ansible_ssh_args: -o ProxyCommand='gcloud.cmd compute start-iap-tunnel VM_NAME PORT_NUMBER --listen-on-stdin --project=PROJECT_ID --zone=ZONE'
#use_extra_vars: true
# extra_vars.yml 
#extra_vars:
#  foo: bar
keyed_groups:
  - key: labels
    prefix: label
  - key: zone
    prefix: zone
  - key: (tags.items|list)
    prefix: tag

After the sync I cannot see that ansible_ssh_args has been added to the hosts in the inventory. I tried to use extra_vars and to place it on a separate line as well, but it doesn't work. Can you please let me know what I am missing here?

You'll need to turn the varying parts of the ProxyCommand into variables; for example, VM_NAME might become:

compose:
  ansible_ssh_args: >-
    "-o ProxyCommand='gcloud.cmd compute start-iap-tunnel {{ inventory_hostname }} 22 --listen-on-stdin --project={{ gcloud_project }} --zone={{ gcloud_zone }}'"

You may have to do some fiddling with the compose syntax to get it to do the string substitution (maybe using ~ or + to combine the literal string sections with the variable parts).
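For example, with ~ concatenation the compose entry might look like this (untested; it assumes name, project, and zone are exposed as host facts by the GCP inventory plugin, which your playbook vars suggest they are):

compose:
  ansible_ssh_args: >-
    "-o ProxyCommand='gcloud compute start-iap-tunnel " ~ name ~ " 22 --listen-on-stdin --project=" ~ project ~ " --zone=" ~ zone ~ "'"

Since compose values are evaluated as Jinja expressions rather than templates, the literal pieces have to be quoted strings joined to the variables with ~.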

You may also end up needing to look at a constructed inventory, defining some parts or variables in a separate, more static inventory that is merged with the data coming out of GCP, if it gets too crazy with Jinja conditionals in your dynamic inventory.
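A minimal sketch of that layout (file names hypothetical): put the GCP source and a constructed source in the same inventory directory so they are merged in order.

# inventory/10-gcp.gcp.yml  <- the existing dynamic GCP source
# inventory/20-extra.yml    <- constructed source layered on top:
plugin: ansible.builtin.constructed
strict: false
compose:
  ansible_ssh_args: >-
    "-o ProxyCommand='gcloud compute start-iap-tunnel " ~ name ~ " 22 --listen-on-stdin --project=" ~ project ~ " --zone=" ~ zone ~ "'"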

I would also advocate for starting with a simple, minimal case, like an inventory in the AWX project directory or just a basic INI hosts file, where you can confirm the ProxyCommand pattern actually works for your use case before spending time on the dynamic inventory.
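For instance, a throwaway INI inventory along these lines (all values hypothetical) is enough to prove the pattern:

[gcp_test]
testvm ansible_host=10.0.0.5 ansible_ssh_args='-o ProxyCommand="gcloud compute start-iap-tunnel testvm 22 --listen-on-stdin --project=my-project --zone=us-east1-b"'

If ansible -i hosts.ini gcp_test -m ping works through the tunnel, you know the ProxyCommand itself is sound before layering the dynamic inventory on top.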

Got it, I will explore the options. Thanks a lot!!

@wayt, I tried several ways to connect using ssh -o ProxyCommand but it does not work. Does this really work?
FYI, I am trying the command below:

ssh -o ProxyCommand="gcloud compute start-iap-tunnel XXX-d-ubuntu24-usea1-1 22 --listen-on-stdin --project=XXX-b2b-d-XXXXX-us-1 --zone=us-east1-b --impersonate-service-account='XXXX@XXXXXXX.iam.gserviceaccount.com'" "XXXX@XXXXXXX.iam.gserviceaccount.com"@10.XXX.XX.X