blockinfile and lineinfile behaviour

I have a Host VM that I am using to launch the creation of multiple OpenShift Clusters. Every time I create a new
cluster, I need to update the /etc/hosts and dnsmasq.conf files of this Host VM with the new cluster endpoints.
I am using the following code block:
- name: update hosts
  blockinfile:
    path: /etc/hosts
    block: |
      ${var.network_lb_ip_address} api.${var.cluster_id}.${var.base_domain}
      ${var.network_lb_ip_address} api-int.${var.cluster_id}.${var.base_domain}
    state: present

- name: update dnsmasq
  lineinfile:
    path: /etc/dnsmasq.conf
    line: address=/.apps.${var.cluster_id}.${var.base_domain}/${var.network_lb_ip_address}
    state: present

Instead of adding a new block when I create a new cluster with new IPs and ranges, it simply replaces the existing
block with the new info, so I only ever have one cluster's info. The file started out with no managed block; the
first time I ran the script, it added the block. What I currently get after the 2nd run is:

# BEGIN ANSIBLE MANAGED BLOCK
172.16.0.20 api.ocp2.cdastu.com
172.16.0.20 api-int.ocp2.cdastu.com
# END ANSIBLE MANAGED BLOCK

What I would like after the 2nd run is:
# BEGIN ANSIBLE MANAGED BLOCK
172.16.0.19 api.ocp1.cdastu.com
172.16.0.19 api-int.ocp1.cdastu.com
172.16.0.20 api.ocp2.cdastu.com
172.16.0.20 api-int.ocp2.cdastu.com
# END ANSIBLE MANAGED BLOCK

I also had the thought that Ansible could delete the entries it created when I delete a cluster. Thanks!
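
(A note on why this happens: blockinfile rewrites whatever sits between its two marker comments, and the default
marker text ("# {mark} ANSIBLE MANAGED BLOCK") is the same on every run, so the module only ever manages a single
block per file. One possible way around it, sketched below on the assumption that the ${var.*} placeholders are
rendered into the playbook before Ansible runs, is to make the marker unique per cluster, so every cluster gets its
own block:

- name: update hosts
  blockinfile:
    path: /etc/hosts
    # {mark} expands to BEGIN/END; including the cluster id keeps this block
    # separate from the blocks written for other clusters
    marker: "# {mark} ANSIBLE MANAGED BLOCK ${var.cluster_id}"
    block: |
      ${var.network_lb_ip_address} api.${var.cluster_id}.${var.base_domain}
      ${var.network_lb_ip_address} api-int.${var.cluster_id}.${var.base_domain}
    state: present

The lineinfile task should already accumulate one address=/... entry per cluster, since each cluster's line is
different.)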

Hello,

To make this work in a proper fashion, you need to have *all* clusters in your inventory, not just the current one
on its own.

Regards
        Racke
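
(A rough sketch of what "all clusters in your inventory" could look like: keep a list of every cluster in a vars
file on the Host VM and loop over it, instead of templating the playbook for only the latest cluster. The clusters
list and its field names below are invented for illustration; the IPs just mirror the ocp1/ocp2 example above:

# group_vars/all.yml (hypothetical)
clusters:
  - { cluster_id: ocp1, lb_ip: 172.16.0.19 }
  - { cluster_id: ocp2, lb_ip: 172.16.0.20 }
base_domain: cdastu.com

# task in the playbook
- name: update hosts for every known cluster
  blockinfile:
    path: /etc/hosts
    # one managed block per cluster, keyed by its id
    marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.cluster_id }}"
    block: |
      {{ item.lb_ip }} api.{{ item.cluster_id }}.{{ base_domain }}
      {{ item.lb_ip }} api-int.{{ item.cluster_id }}.{{ base_domain }}
    state: present
  loop: "{{ clusters }}"

Adding a new cluster then just means appending one entry to the clusters list and re-running the play.)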

Sorry, a little new to this. I don't know all the cluster names up front; I might create one cluster today and then tomorrow somebody will ask for a new cluster. My inventory file currently has the IP address of the Host VM that Ansible is running on. Is there something different I should be doing? Do I have to read the existing hosts file and then add the new values? Can you point to a snippet that would do that? Thanks!
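
(On the deletion question: there should be no need to read and parse the existing /etc/hosts. As long as each
cluster's block has its own marker (as in the sketch further up), the same modules with state: absent remove exactly
what was added for that cluster. A sketch, again assuming the ${var.*} placeholders are filled in before the play
runs:

- name: remove hosts entries for a deleted cluster
  blockinfile:
    path: /etc/hosts
    # the marker must match the one used when the block was added
    marker: "# {mark} ANSIBLE MANAGED BLOCK ${var.cluster_id}"
    state: absent

- name: remove dnsmasq entry for a deleted cluster
  lineinfile:
    path: /etc/dnsmasq.conf
    # removes lines identical to this one
    line: address=/.apps.${var.cluster_id}.${var.base_domain}/${var.network_lb_ip_address}
    state: absent

Running those with the deleted cluster's id would undo its entries.)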