Ansible skipping tasks that were already applied

Hi, I have an Ansible project that sets up a k3s cluster. This works fine. However, when I rerun the entire setup (without doing a teardown), my configuration changes don't seem to get applied (I copy a YAML file and then apply it, as shown below). I'm sure it has something to do with the fact that the project was already applied and the cluster is set up. If I tear down the cluster first (reset), then the changes do apply.

- name: Copy kong gateway tls yaml
  become: yes
  become_user: "{{ ansible_user }}"
  copy:
    src: "kong_gateway_tls.yml"
    dest: "{{ ansible_env.HOME }}"
  when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']

- name: Create Kong gateways for each namespace
  become: yes
  become_user: "{{ ansible_user }}"
  shell: kubectl apply -f "{{ ansible_env.HOME }}"/kong_gateway_tls.yml -n {{ item }}
  args:
    executable: /bin/bash
  with_items:
    - "{{ namespaces.split(',') }}"
  when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
  loop_control:
    pause: 5
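One way to get honest changed/ok reporting out of a shell-based `kubectl apply` is to register the command output and key the task's changed status off kubectl's own report: `kubectl apply` prints `created` or `configured` when it changed something and `unchanged` when it did not. A sketch, assuming the same variables as the task above (the `kong_apply` register name is just for illustration):

```yaml
- name: Create Kong gateways for each namespace
  become: yes
  become_user: "{{ ansible_user }}"
  shell: kubectl apply -f "{{ ansible_env.HOME }}"/kong_gateway_tls.yml -n {{ item }}
  args:
    executable: /bin/bash
  with_items:
    - "{{ namespaces.split(',') }}"
  register: kong_apply
  # report "changed" only when kubectl actually created or reconfigured an object
  changed_when: "'created' in kong_apply.stdout or 'configured' in kong_apply.stdout"
  when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
```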

Any suggestions on how to make these changes apply on a second pass?

I've got no experience with Kubernetes, but I guess you should use the kubernetes.core.k8s module (see "kubernetes.core.k8s module" in the Ansible Community Documentation) instead of wrapping kubectl inside the shell module, to benefit from Ansible's change detection etc.


Our issue isn't with using kubectl or the module. It's with running the Ansible playbook a second time; it seems to skip tasks if they were applied earlier.

If I have already applied an Ansible playbook and configured some resources, but then realize I need to tweak the configuration a bit, it doesn't seem to let me adjust the configuration and rerun.

On a second pass, if the template itself hasn't changed, the copy task will report "ok" instead of "changed", since the destination host already has an identical copy of the file.

Similarly, kubectl apply -f is smart enough to detect differences between runs through annotations. So if nothing has changed in the template, kubectl will have nothing to do on a second pass.

That being said, as @markuman has pointed out, you can consolidate this to one step by using the kubernetes.core.k8s module to apply Ansible templates as Kubernetes YAML. Again, if the underlying Kube API doesn't see any differences in the applied YAML on subsequent passes, it "skips" since there is nothing to do. This is expected and desirable behavior. If you want to bounce pods, then you actually need to terminate the pods and redeploy, or scale down/up their respective parent object (Deployment/StatefulSet/etc.).
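For reference, the two shell tasks above could collapse into a single kubernetes.core.k8s task that applies the manifest per namespace. A sketch, assuming the kubernetes.core collection (and its Python dependencies) are installed on the target, and the same `namespaces` variable as before:

```yaml
- name: Apply Kong gateway TLS manifest in each namespace
  become: yes
  become_user: "{{ ansible_user }}"
  kubernetes.core.k8s:
    state: present
    apply: yes                     # server-side equivalent of `kubectl apply`
    namespace: "{{ item }}"
    src: "{{ ansible_env.HOME }}/kong_gateway_tls.yml"
  loop: "{{ namespaces.split(',') }}"
  when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
```

This also gives you proper changed/ok reporting per namespace, and if you do need to bounce pods after a change, `kubectl rollout restart deployment/<name>` does that without tearing the cluster down.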

Edit: Adding for reference: Declarative Management of Kubernetes Objects Using Configuration Files | Kubernetes

If you are applying changes that need to remove something, depending on what/how that change is implemented, kubectl may need the --prune option, i.e. kubectl apply -f kong_gateway_tls.yml --prune, but I'm not sure that the Ansible module has an equivalent option for those cases.


Take care when using --prune with kubectl apply in allow list mode. Which objects are pruned depends on the values of the --prune-allowlist, --selector and --namespace flags, and relies on dynamic discovery of the objects in scope. Especially if flag values are changed between invocations, this can lead to objects being unexpectedly deleted or retained.


Apply with prune should only be run against the root directory containing the object manifests. Running against sub-directories can cause objects to be unintentionally deleted if they were previously applied, have the labels given (if any), and do not appear in the subdirectory.
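Since I don't believe the kubernetes.core.k8s module exposes a prune equivalent, going the prune route would still mean a shell task. A hedged sketch, subject to the warnings above; the `app=kong-gateway` label selector is a placeholder you would replace with a label your manifests actually carry:

```yaml
- name: Apply Kong gateway manifest with prune (read the prune warnings first!)
  become: yes
  become_user: "{{ ansible_user }}"
  # --prune requires a label selector (-l) or --all to bound what it may delete
  shell: >
    kubectl apply -f "{{ ansible_env.HOME }}"/kong_gateway_tls.yml
    --prune -l app=kong-gateway
  args:
    executable: /bin/bash
```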

Thanks for the info. Is there a way to force Ansible to reapply changes?

I have a couple of scenarios where I have changed the script or added another node after deploying a cluster. With a new node (a new change to the host.ini inventory), it will not do anything with the new node. It's like adding a new node to host.ini has no effect unless I reset the entire cluster with ansible-playbook reset.yaml.

Is there a way to force it to pull in these new changes?

With the information given so far, I don’t see any reason for additional nodes to affect the play. Your conditionals force the tasks to only run against the first master in the play. Adding nodes to your inventory file won’t change that. If your templates are supposed to take inventory into account, then share that with us so we can try and narrow down the problem.

That being said, I would think simply adding the node to the cluster should automatically roll out changes, with the Kubernetes controllers/operators handling all of the magic. Very few changes, if any, should need to be handled by Ansible here.