rds module | determining endpoint when module exits without complete result data

OS X 10.10.5
Python 2.7.11
Ansible 2.0.2.0
boto 2.40.0
boto3 1.3.1
botocore 1.4.24

Using the following tasks in a play:

- name: db | rds | create RDS instance
  become: no
  local_action: rds
  args:
    command: create
    instance_name: "{{ rds_instance_name }}"
    instance_type: "{{ rds_instance_class }}"
    vpc_security_groups: "{{ rds_instance_vpc_security_groups }}"
    multi_zone: "{{ rds_instance_multi_zone }}"
    subnet: "{{ rds_instance_subnet_group }}"
    db_engine: "{{ rds_instance_engine }}"
    publicly_accessible: "{{ rds_instance_public }}"
    db_name: "{{ mysql_app_db_name }}"
    size: "{{ mysql_app_db_size }}"
    username: "{{ mysql_admin_user }}"
    password: "{{ mysql_admin_pass }}"
  async: 600
  poll: 60
  register: rds
  tags: [db, rds]

- name: db | rds | output db info
  debug:
    msg: "The new MySQL DB endpoint is {{ rds.instance.endpoint }}"
    #var: rds
  tags: [db, rds]

The result is that the data in rds.instance is not completely
populated at the time the module returns successfully (with changed
status, in this case). Notably, the 'endpoint' value is missing. In
the AWS RDS console, the instance can be seen still deploying (it
sits in "modifying" status for several minutes) with a number of
attributes not yet populated. This is the result in Ansible when it
returns "early":

TASK [app : db | rds | create RDS instance] **********************************
changed: [example.com -> localhost]

TASK [app : db | rds | output db info] ****************************************
ok: [example.com] => {
    "msg": "The new MySQL DB endpoint is "
}

Though the debug 'msg' above only shows that rds.instance.endpoint is
empty in this case, on a previous run I dumped the rds variable
itself and it shows many fields unpopulated:

TASK [app : db | rds | output db info] ****************************************
ok: [example.com] => {
    "rds": {
        "ansible_job_id": "34553352208.25036",
        "changed": false,
        "finished": 1,
        "instance": {
            "availability_zone": "us-west-2b",
            "backup_retention": 1,
            "create_time": 1464425508.477,
            "endpoint": null,
            "id": "database",
            "instance_type": "db.m4.large",
            "iops": null,
            "maintenance_window": "sun:10:23-sun:10:53",
            "multi_zone": false,
            "port": null,
            "replication_source": null,
            "status": "modifying",
            "username": "mysqladmin",
            "vpc_security_groups": "sg-4f02f029"
        }
    }
}

In the examples [1] for the Ansible rds module, the task uses
wait/wait_timeout before returning, and the registered variable shows
a full set of fields. I assumed that since RDS instance creation can
take a great deal of time on the AWS side, it might be better to use
asynchronous calls to avoid long waits/timeouts. Should this work
properly using async/poll? Is there a reason the module returns
without being able to supply a complete instance dictionary? Is there
a better approach for this case?

[1] http://docs.ansible.com/ansible/rds_module.html

Regards,

Here are two ideas; I'm about to implement the first one myself:

1.- Fire and forget, then check back when it's ready and you REALLY
need it (see the first sketch after these ideas):
http://docs.ansible.com/ansible/playbooks_async.html (see last example)
http://toroid.org/ansible-parallel-dispatch
In my case I have plenty of things to do before I need the endpoint,
so it's fine for me to wait while the instance gets instantiated,
asynchronously.

2.- If you need to do this across plays (or not), you can use
rds_facts to gather the facts of the instance and loop until you get
the value you need (see the second sketch below). It's basically the
same thing you're doing.
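
Here's a minimal sketch of the first idea, combining wait: yes inside
a backgrounded job (so the job itself waits for "available") with
poll: 0 and the async_status module to collect the result only when
it's needed. The async value just needs to exceed wait_timeout; the
timeout, retry, and rds_job values here are placeholders:

- name: db | rds | create RDS instance (fire and forget)
  become: no
  local_action: rds
  args:
    command: create
    instance_name: "{{ rds_instance_name }}"
    instance_type: "{{ rds_instance_class }}"
    db_engine: "{{ rds_instance_engine }}"
    size: "{{ mysql_app_db_size }}"
    db_name: "{{ mysql_app_db_name }}"
    username: "{{ mysql_admin_user }}"
    password: "{{ mysql_admin_pass }}"
    wait: yes               # the background job does the waiting
    wait_timeout: 1200      # placeholder value
  async: 1800               # must outlast wait_timeout
  poll: 0                   # fire and forget
  register: rds_job
  tags: [db, rds]

# ... any number of tasks that don't need the endpoint run here ...

- name: db | rds | collect the create result when the endpoint is needed
  become: no
  local_action: async_status jid={{ rds_job.ansible_job_id }}
  register: rds
  until: rds.finished
  retries: 30
  delay: 60
  tags: [db, rds]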
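
And a sketch of the second idea; I'm assuming the facts gathering is
done with the core rds module's own "command: facts" (which may be
what rds_facts refers to here), looping until the endpoint field
comes back non-empty. The rds_info name and retry values are
placeholders:

- name: db | rds | poll instance facts until the endpoint is populated
  become: no
  local_action: rds
  args:
    command: facts
    instance_name: "{{ rds_instance_name }}"
  register: rds_info
  until: rds_info.instance.endpoint    # null/empty evaluates false
  retries: 30
  delay: 20
  tags: [db, rds]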

Thanks for the ideas. I took another look at this using the rds
module's 'wait' and 'wait_timeout' arguments and realized that they
do what I wanted after all. When the instance setup is complete and
it reaches "available" status, the task completes immediately; the
arguments just set a maximum amount of time to wait for that. Sorry
for the noise, I didn't need to overcomplicate it after all.

I ended up with this task:

- name: "provision rds instance (timeout: {{ rds_instance_deploy_timeout }}s)"
  become: no
  local_action: rds
  args:
    command: create
    instance_name: "{{ rds_instance_name }}"
    instance_type: "{{ rds_instance_class }}"
    vpc_security_groups: "{{ rds_instance_vpc_security_groups }}"
    multi_zone: "{{ rds_instance_multi_zone }}"
    subnet: "{{ rds_instance_subnet_group }}"
    db_engine: "{{ rds_instance_engine }}"
    publicly_accessible: "{{ rds_instance_public }}"
    maint_window: "{{ rds_instance_maintenance_window }}"
    backup_retention: "{{ rds_instance_backup_retention }}"
    size: "{{ mysql_app_db_size }}"
    db_name: "{{ mysql_app_db_name }}"
    username: "{{ mysql_admin_user }}"
    password: "{{ mysql_admin_pass }}"
    wait: yes
    wait_timeout: "{{ rds_instance_deploy_timeout }}"
  register: rds
  when: rds_create_new_instance
  tags: [db, rds]

You’re welcome,

I'm doing something similar, but the opposite in some ways. I want to
wait for the instance to have the endpoint field populated with a
CNAME. This has nothing to do with the whole instance being available.

This is what I’ve learned since I posted.

1.- DO NOT USE debug combined with an until condition to loop; it
won't work. Instead, gather facts with a do-until loop, or wait_for
the condition you're looking for (see the sketch after this list).
2.- The RDS instance goes through the following status values:
creating
backing-up
modifying
available
deleting

When the instance transitions from creating to backing-up, the
endpoint shows up.
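
For reference, the do-until from point 1 can look like the sketch
below (again leaning on the rds module's "command: facts"; the
rds_check name and retry values are placeholders). The status field
does come back while other fields are still null, as in the dump
earlier in the thread, but per the issue linked below the module may
not hand you the endpoint until the instance is available:

- name: wait for the instance to leave "creating" status
  become: no
  local_action: rds
  args:
    command: facts
    instance_name: "{{ rds_instance_name }}"
  register: rds_check
  until: rds_check.instance.status != "creating"
  retries: 30
  delay: 20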

It doesn't matter what you evaluate; you're right, you HAVE to wait
for the instance to become available to get any data:

https://github.com/ansible/ansible-modules-core/issues/3865

You shouldn't have to wait for the instance to go through EVERY
status to get the dictionary; fields should be available as soon as
they are populated. If async doesn't solve the problem either, then
this should be a feature request or bug fix. See the ticket above.

The solution was to piggyback on the aws cli to pull the RDS endpoint
after waiting for the RDS instance to shift out of "creating" into
"backing-up" (using a != "creating" comparison), as in the sketch
below:
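
A sketch of that workaround, assuming the aws cli is installed and
configured on the control machine; the query paths are standard
describe-db-instances fields, while the registered names and retry
values are placeholders:

- name: wait for the RDS instance to shift out of "creating"
  become: no
  local_action: command aws rds describe-db-instances --db-instance-identifier {{ rds_instance_name }} --query DBInstances[0].DBInstanceStatus --output text
  register: rds_status
  until: rds_status.stdout != "creating"
  retries: 30
  delay: 20
  changed_when: false    # read-only check, never report a change

- name: pull the endpoint CNAME with the aws cli
  become: no
  local_action: command aws rds describe-db-instances --db-instance-identifier {{ rds_instance_name }} --query DBInstances[0].Endpoint.Address --output text
  register: rds_endpoint
  changed_when: false

- name: show the endpoint
  debug:
    msg: "RDS endpoint is {{ rds_endpoint.stdout }}"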