get ec2 instance id after ec2 creation

Hey, so I feel like I am following all the tutorials… but I can't seem to get the instance id after I create an instance with the ec2 module, so that I can use it in other modules like ec2_vol, which needs the ec2 instance id to attach the volume to the correct instance. I am not sure if I am missing some underlying concept, a variable, or something else. Here are my ec2 and ec2_vol tasks:

`
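# Sketch only, not the original playbook verbatim: the ec2 parameters are taken
# from the module_args shown in the debug output further down in this thread,
# while the register name, the loop the task runs under, and the ec2_vol
# size/device are placeholders.
- name: launch app instance
  ec2:
    image: ami-06116566
    instance_type: t2.medium
    key_name: ansible_provisioning
    group_id: ['sg-81398ee4', 'sg-a6398ec3']
    vpc_subnet_id: subnet-819f45cd8
    assign_public_ip: no
    private_ip: 10.101.1.33
    region: us-east-1
    exact_count: 1
    count_tag:
      Name: s-test
    instance_tags:
      Name: s-test
      Type: staging
    wait: yes
  register: ec2

- name: attach a volume to the instance that was just created
  ec2_vol:
    # this is the lookup that needs the instance id and comes back undefined/empty
    instance: "{{ ec2.instances[0].id }}"
    volume_size: 10          # placeholder
    device_name: /dev/xvdf   # placeholder
    region: us-east-1
`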

That's strange; it looks like everything should work. I don't understand why you received "undefined attribute".
Try adding this right after the first task:

`

- debug: var=ec2
`

Hey, thanks, that's a good idea. Here is my output; instances is clearly empty… I am using private VPC subnet instances. Do you think that may be why they aren't loading?

My debug output is:
`

ok: [localhost] => {
    "ec2": {
        "changed": false,
        "msg": "All items completed",
        "results": [
            {
                "_ansible_no_log": false,
                "changed": false,
                "instance_ids": null,
                "instances": [],
                "invocation": {
                    "module_args": {
                        "assign_public_ip": false,
                        "aws_access_key": null,
                        "aws_secret_key": null,
                        "count": 1,
                        "count_tag": {
                            "Name": "s-test"
                        },
                        "ebs_optimized": false,
                        "ec2_url": null,
                        "exact_count": 1,
                        "group": null,
                        "group_id": [
                            "sg-81398ee4",
                            "sg-a6398ec3"
                        ],
                        "id": null,
                        "image": "ami-06116566",
                        "instance_ids": null,
                        "instance_profile_name": null,
                        "instance_tags": {
                            "Name": "s-test",
                            "Type": "staging"
                        },
                        "instance_type": "t2.medium",
                        "kernel": null,
                        "key_name": "ansible_provisioning",
                        "monitoring": false,
                        "network_interfaces": null,
                        "placement_group": null,
                        "private_ip": "10.101.1.33",
                        "profile": null,
                        "ramdisk": null,
                        "region": "us-east-1",
                        "security_token": null,
                        "source_dest_check": true,
                        "spot_launch_group": null,
                        "spot_price": null,
                        "spot_type": "one-time",
                        "spot_wait_timeout": 600,
                        "state": "present",
                        "tenancy": "default",
                        "termination_protection": false,
                        "user_data": null,
                        "validate_certs": true,
                        "volumes": null,
                        "vpc_subnet_id": "subnet-819f45cd8",
                        "wait": true,
                        "wait_timeout": 300,
                        "zone": null
                    },
                    "module_name": "ec2"
                },
                "item": [
                    {
                        "environment": "staging"
                    },
                    {
                        "name": "s-test",
                        "private_ip": "10.101.1.33",
                        "type": "app"
                    }
                ],
                "tagged_instances": [
                    {
                        "ami_launch_index": "0",
                        "architecture": "x86_64",
                        "block_device_mapping": {
                            "/dev/sda1": {
                                "delete_on_termination": true,
                                "status": "attached",
                                "volume_id": "vol-c14a1569"
                            }
                        },
                        "dns_name": "",
                        "ebs_optimized": false,
                        "groups": {
                            "sg-81398ee4": "ssh",
                            "sg-a6398ec3": "default"
                        },
                        "hypervisor": "xen",
                        "id": "i-11eeg8a3",
                        "image_id": "ami-06116566",
                        "instance_type": "t2.medium",
                        "kernel": null,
                        "key_name": "ansible_provisioning",
                        "launch_time": "2016-02-03T22:47:37.000Z",
                        "placement": "us-west-1a",
                        "private_dns_name": "ip-10-101-1-33.us-west-1.compute.internal",
                        "private_ip": "10.101.1.33",
                        "public_dns_name": "",
                        "public_ip": null,
                        "ramdisk": null,
                        "region": "us-west-1",
                        "root_device_name": "/dev/sda1",
                        "root_device_type": "ebs",
                        "state": "running",
                        "state_code": 16,
                        "tags": {
                            "Name": "s-colin",
                            "Type": "integ"
                        },
                        "tenancy": "default",
                        "virtualization_type": "hvm"
                    }
                ]
            }
        ]
    }
}

`

I just confirmed that having only a private IP is NOT the reason for the failure here. I haven't yet tested whether being in a VPC subnet is the reason, which I would doubt…
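For what it's worth, the id does appear in the dump above; it just sits under tagged_instances inside results rather than under instances. Assuming the register is still called ec2 and the task only loops once, something along these lines should pull it out (the volume size and device name are placeholders):

`
# Sketch only: indexes follow the structure of the debug output above
# (one element in results, one in tagged_instances).
- debug:
    msg: "{{ ec2.results[0].tagged_instances[0].id }}"

- ec2_vol:
    instance: "{{ ec2.results[0].tagged_instances[0].id }}"
    volume_size: 10          # placeholder
    device_name: /dev/xvdf   # placeholder
    region: us-east-1
`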