ec2_asg does not spin up or replace instances when an existing Auto Scaling group is modified with a new launch configuration

Hi, I am facing an issue with the ec2_asg module: when I use a target group ARN, it does not spin up or replace instances. If I use a classic load balancer instead, it works fine and replaces the instances.
When a new ASG is created, it spins up the instances. If an existing ASG is modified to use a new launch configuration, it does not spin up or replace instances; the module throws the error below.

ERROR:

```
"msg": "Waited too long for ELB instances to be healthy"
```
The playbook tasks:

```yaml
- elb_target_group:
    name: test-target-group
    protocol: http
    port: 80
    vpc_id: "{{ vpc_id }}"
    health_check_path: /login
    successful_response_codes: "200"
    state: present
    region: us-east-1
  register: target_group

- elb_application_lb:
    name: test-alb
    subnets: "{{ subnets }}"
    security_groups: "{{ securitygroups }}"
    scheme: internet-facing
    purge_listeners: no
    purge_rules: no
    region: us-east-1
    listeners:
      - Protocol: HTTP
        Port: 80
        DefaultActions:
          - Type: forward
            TargetGroupName: test-target-group
        Rules:
          - Conditions:
              - Field: path-pattern
                Values:
                  - '/test'
            Priority: '1'
            Actions:
              - TargetGroupName: test-target-group
                Type: forward
    state: present

- name: create launch configuration
  ec2_lc:
    name: "test-lc-{{ ansible_date_time.iso8601_basic_short }}"
    image_id: ami-id
    key_name: "{{ keypair }}"
    security_groups: "{{ securitygroups }}"
    instance_type: "{{ instancetype }}"
    region: us-east-1
    volumes:
      - device_name: /dev/sda1
        volume_size: 8
        volume_type: gp2
        delete_on_termination: true
  register: launch_config

- name: create or update autoscaling group
  ec2_asg:
    name: test-asg
    launch_config_name: "{{ launch_config.result.launch_configuration_name }}"
    vpc_zone_identifier: "{{ subnets }}"
    health_check_period: 90
    health_check_type: ELB
    min_size: 1
    max_size: 3
    desired_capacity: 1
    region: us-east-1
    termination_policies: OldestInstance
    wait_for_instances: yes
    default_cooldown: 60
    wait_timeout: 200
    lc_check: yes
    replace_all_instances: yes
    target_group_arns: "{{ target_group.target_group_arn }}"
```
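One detail I am not sure about: `target_group_arns` is documented as a list, but I pass a bare string. A guess at the list form, in case the module does not coerce a string (not a verified fix):

```yaml
    # Guess, not a verified fix: pass target_group_arns as a list,
    # as the module documentation describes, instead of a bare string.
    target_group_arns:
      - "{{ target_group.target_group_arn }}"
```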

Initially, when I run it, it creates everything. On the second run it creates a new launch configuration and updates the ASG with the new launch configuration name, but it does not replace or spin up any instances and fails with the error above.
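To see why the replacement instances never go healthy, I dump the target group's view of them while ec2_asg is waiting. A minimal diagnostic sketch, assuming the AWS CLI is available on the control host (the task name and registered variable are just for illustration):

```yaml
- name: dump target health while ec2_asg is waiting
  command: >
    aws elbv2 describe-target-health
    --target-group-arn {{ target_group.target_group_arn }}
    --region us-east-1
  register: target_health     # hypothetical variable, for inspection only
  changed_when: false         # read-only query, never reports a change

- debug:
    var: target_health.stdout
```

If the targets are stuck in `initial` or failing the `/login` check, then the 200-second `wait_timeout` may simply expire before the 90-second `health_check_period` grace plus the application boot time has elapsed, in which case a larger `wait_timeout` would be worth testing.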