IP address not available with CentOS 7

I am trying to get the IP address of my Docker container (which runs a centos7.9 image); however, the ‘ansible_default_ipv4’ fact does not seem to exist.
Does anyone have an idea how to get the IP?


You could use the community.docker.docker_container_info module to do this.

Let’s deploy a test container first:

14:30|ptn@bender:~/conf (main // M(u):2) (bg:1)$ docker run --rm -d alpine:latest tail -f /dev/null
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
96526aa774ef: Pull complete
Digest: sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978
Status: Downloaded newer image for alpine:latest

14:32|ptn@bender:~/conf (main // M(u):2) (bg:1)$ docker inspect 4be972b824af -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}'
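If you post-process the docker inspect JSON outside of the Go template, the extraction is plain nested-dict access. A small self-contained sketch (the sample document and IP below are made up to mirror the inspect structure shown later in this thread):

```python
import json

# Minimal sketch: pull the bridge IP out of `docker inspect <id>` JSON.
# The sample document mimics the nested structure Docker returns.
inspect_output = json.loads(
    '[{"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}]'
)
ip = inspect_output[0]["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
print(ip)  # → 172.17.0.2
```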

Here is an example playbook to extract and print the IPAddr value:


- name: Test
  connection: local
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Get container info
      community.docker.docker_container_info:
        name: 4be972b824af
      register: _container_info

    - name: Extract IPAddr from gathered info
      ansible.builtin.debug:
        msg: "{{ item.container.NetworkSettings.Networks.bridge.IPAddress }}"
      loop:
        - "{{ _container_info }}"

And the output:

14:41|ptn@bender:~/conf (main // M(u):2) (bg:1)$ ansible-playbook ~/TEMP/pbtest.yml

PLAY [Test] ****************************************************************************************************************************************************************************************************************************************************************************************************************

TASK [Get container info] **************************************************************************************************************************************************************************************************************************************************************************************************
Friday 03 November 2023  14:42:36 +0100 (0:00:00.766)       0:00:00.772 *******
ok: [localhost]

TASK [Extract IPAddr from gathered info] ***********************************************************************************************************************************************************************************************************************************************************************************
Friday 03 November 2023  14:42:36 +0100 (0:00:00.349)       0:00:01.122 *******
ok: [localhost] => (item={'changed': False, 'exists': True, 'container': {'Id': '4be972b824af2e81c5d3d8bbb46d5ecd9848134e126ec1a97274c92c629722cd', 'Created': '2023-11-03T13:30:58.109701091Z', 'Path': 'tail', 'Args': ['-f', '/dev/null'], 'State': {'Status': 'running', 'Running': True, 'Paused': False, 'Restarting': False, 'OOMKilled': False, 'Dead': False, 'Pid': 4044465, 'ExitCode': 0, 'Error': '', 'StartedAt': '2023-11-03T13:30:59.114398352Z', 'FinishedAt': '0001-01-01T00:00:00Z'}, 'Image': 'sha256:8ca4688f4f356596b5ae539337c9941abc78eda10021d35cbc52659c74d9b443', 'ResolvConfPath': '/var/lib/docker/containers/4be972b824af2e81c5d3d8bbb46d5ecd9848134e126ec1a97274c92c629722cd/resolv.conf', 'HostnamePath': '/var/lib/docker/containers/4be972b824af2e81c5d3d8bbb46d5ecd9848134e126ec1a97274c92c629722cd/hostname', 'HostsPath': '/var/lib/docker/containers/4be972b824af2e81c5d3d8bbb46d5ecd9848134e126ec1a97274c92c629722cd/hosts', 'LogPath': '/var/lib/docker/containers/4be972b824af2e81c5d3d8bbb46d5ecd9848134e126ec1a97274c92c629722cd/4be972b824af2e81c5d3d8bbb46d5ecd9848134e126ec1a97274c92c629722cd-json.log', 'Name': '/dazzling_joliot', 'RestartCount': 0, 'Driver': 'overlay2', 'Platform': 'linux', 'MountLabel': '', 'ProcessLabel': '', 'AppArmorProfile': 'docker-default', 'ExecIDs': None, 'HostConfig': {'Binds': None, 'ContainerIDFile': '', 'LogConfig': {'Type': 'json-file', 'Config': {'max-file': '3', 'max-size': '100m'}}, 'NetworkMode': 'default', 'PortBindings': {}, 'RestartPolicy': {'Name': 'no', 'MaximumRetryCount': 0}, 'AutoRemove': True, 'VolumeDriver': '', 'VolumesFrom': None, 'ConsoleSize': [77, 316], 'CapAdd': None, 'CapDrop': None, 'CgroupnsMode': 'private', 'Dns': [], 'DnsOptions': [], 'DnsSearch': [], 'ExtraHosts': None, 'GroupAdd': None, 'IpcMode': 'private', 'Cgroup': '', 'Links': None, 'OomScoreAdj': 0, 'PidMode': '', 'Privileged': False, 'PublishAllPorts': False, 'ReadonlyRootfs': False, 'SecurityOpt': None, 'UTSMode': '', 'UsernsMode': '', 'ShmSize': 67108864, 
'Runtime': 'runc', 'Isolation': '', 'CpuShares': 0, 'Memory': 0, 'NanoCpus': 0, 'CgroupParent': '', 'BlkioWeight': 0, 'BlkioWeightDevice': [], 'BlkioDeviceReadBps': [], 'BlkioDeviceWriteBps': [], 'BlkioDeviceReadIOps': [], 'BlkioDeviceWriteIOps': [], 'CpuPeriod': 0, 'CpuQuota': 0, 'CpuRealtimePeriod': 0, 'CpuRealtimeRuntime': 0, 'CpusetCpus': '', 'CpusetMems': '', 'Devices': [], 'DeviceCgroupRules': None, 'DeviceRequests': None, 'MemoryReservation': 0, 'MemorySwap': 0, 'MemorySwappiness': None, 'OomKillDisable': None, 'PidsLimit': None, 'Ulimits': None, 'CpuCount': 0, 'CpuPercent': 0, 'IOMaximumIOps': 0, 'IOMaximumBandwidth': 0, 'MaskedPaths': ['/proc/asound', '/proc/acpi', '/proc/kcore', '/proc/keys', '/proc/latency_stats', '/proc/timer_list', '/proc/timer_stats', '/proc/sched_debug', '/proc/scsi', '/sys/firmware', '/sys/devices/virtual/powercap'], 'ReadonlyPaths': ['/proc/bus', '/proc/fs', '/proc/irq', '/proc/sys', '/proc/sysrq-trigger']}, 'GraphDriver': {'Data': {'LowerDir': '/var/lib/docker/overlay2/330d2bb10c6f8bd5e3aaa2a6fd082c32e3bce4f322533bd74eb34d1674441b83-init/diff:/var/lib/docker/overlay2/857420dc31a581c863cda9a9d57ab7f9c1c4e5e0a646971864d4dcea8528537e/diff', 'MergedDir': '/var/lib/docker/overlay2/330d2bb10c6f8bd5e3aaa2a6fd082c32e3bce4f322533bd74eb34d1674441b83/merged', 'UpperDir': '/var/lib/docker/overlay2/330d2bb10c6f8bd5e3aaa2a6fd082c32e3bce4f322533bd74eb34d1674441b83/diff', 'WorkDir': '/var/lib/docker/overlay2/330d2bb10c6f8bd5e3aaa2a6fd082c32e3bce4f322533bd74eb34d1674441b83/work'}, 'Name': 'overlay2'}, 'Mounts': [], 'Config': {'Hostname': '4be972b824af', 'Domainname': '', 'User': '', 'AttachStdin': False, 'AttachStdout': False, 'AttachStderr': False, 'Tty': False, 'OpenStdin': False, 'StdinOnce': False, 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'], 'Cmd': ['tail', '-f', '/dev/null'], 'Image': 'alpine:latest', 'Volumes': None, 'WorkingDir': '', 'Entrypoint': None, 'OnBuild': None, 'Labels': {}}, 'NetworkSettings': 
{'Bridge': '', 'SandboxID': '8d690b92fda2170c5eedce2a504166e62748fd311787c6d0a27441b246e28670', 'HairpinMode': False, 'LinkLocalIPv6Address': '', 'LinkLocalIPv6PrefixLen': 0, 'Ports': {}, 'SandboxKey': '/var/run/docker/netns/8d690b92fda2', 'SecondaryIPAddresses': None, 'SecondaryIPv6Addresses': None, 'EndpointID': 'd64050d7876aa6a35e000262dedde8b9adc92d92b3056b9a7f716626422d5c19', 'Gateway': '', 'GlobalIPv6Address': '', 'GlobalIPv6PrefixLen': 0, 'IPAddress': '', 'IPPrefixLen': 24, 'IPv6Gateway': '', 'MacAddress': '02:42:0a:fe:00:01', 'Networks': {'bridge': {'IPAMConfig': None, 'Links': None, 'Aliases': None, 'NetworkID': '9e35186ddc17e5b75d6d8975aa6eb7979e0d1b378e42f4b95706b21b190ef6fa', 'EndpointID': 'd64050d7876aa6a35e000262dedde8b9adc92d92b3056b9a7f716626422d5c19', 'Gateway': '', 'IPAddress': '', 'IPPrefixLen': 24, 'IPv6Gateway': '', 'GlobalIPv6Address': '', 'GlobalIPv6PrefixLen': 0, 'MacAddress': '02:42:0a:fe:00:01', 'DriverOpts': None}}}}, 'failed': False}) => {
    "msg": ""
}

PLAY RECAP *****************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Friday 03 November 2023  14:42:36 +0100 (0:00:00.048)       0:00:01.170 *******
Get container info -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.35s
Extract IPAddr from gathered info ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.05s
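Side note: since docker_container_info returns a single container, the loop in the example above isn't strictly necessary; a minimal equivalent task (same register name assumed) would be:

```yaml
    - name: Extract IPAddr from gathered info
      ansible.builtin.debug:
        msg: "{{ _container_info.container.NetworkSettings.Networks.bridge.IPAddress }}"
```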

Hi, thank you for your help.
I tried your solution; however, I got this message on the “Get container info” task:

fatal: [serveur1]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (requests) on 96959a20c1b7's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named requests

Ok, it seems you are missing the ‘requests’ package; you could install it using pip or your favorite Python package manager.
Note that you have to install it in the same context the Ansible binaries run in, so if you use a venv, install the package there, not with your distro package manager.
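A quick way to check whether a given interpreter can see ‘requests’ is the hedged sketch below (run it with the same interpreter the error message names; importlib.util.find_spec needs Python 3):

```python
import importlib.util
import sys

# Which interpreter is this? Ansible must use one where 'requests' is importable.
print(sys.executable)

# find_spec returns None when the module is not installed for this interpreter.
spec = importlib.util.find_spec("requests")
print("requests importable:", spec is not None)
```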

If that’s not clear, just tell me how you installed Ansible and I should be able to point you in the right direction.


I’m curious why you need the internal IP address of the container? :thinking:

The reason I ask is that there may be an easier and more resilient way to do what you’re attempting to accomplish, potentially with a connection plugin or something else. Figured I’d mention it in case members of the community can offer an easier way to reach your final goal. :slight_smile:


I want to have a load balancer, so I have two Docker containers acting as the servers and another one acting as the load balancer.
@ptn, I don’t think I understand what you mean… I installed Ansible with yum, if I remember correctly.

I want to have a load balancer, so I have two Docker containers acting as the servers and another one acting as the load balancer.

Ok, so three containers on the same machine, the LB container being either on the same network as the other two, or at least having access to each of their separate networks.

So the right way to do it depends on the LB you’re using and what kind of traffic you’d like to proxy; for instance, Traefik listens to Docker events and uses labels for dynamic routing, Nginx is more traditional, using hostnames or IP addresses, Keepalived runs directly in front of your services, moving one or more floating VIPs between them, etc. So feel free to give more details if you’d like some suggestions on this matter :slight_smile:. In your case, though, you could probably just use your container hostnames instead of IP addresses, since containers on the same user-defined network can reach each other that way.
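For example, a hypothetical Compose sketch (service names and images assumed from this thread) where all three containers share one network, so the LB can reach the backends by service name through Docker's embedded DNS:

```yaml
# Hypothetical sketch: services on the same (default) Compose network
# resolve each other by service name, so no IP addresses are needed.
services:
  serveur1:
    image: centos:7.9.2009
    command: ["tail", "-f", "/dev/null"]
  serveur2:
    image: centos:7.9.2009
    command: ["tail", "-f", "/dev/null"]
  load-balancer:
    image: nginx:alpine
    depends_on:
      - serveur1
      - serveur2
```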

I don’t think I understand what you mean… I installed Ansible with yum, if I remember correctly.

What I mean is that you can install Python packages at different scopes, using a venv being best practice in most cases. That said, you installed Ansible with your distro package manager, so it should have been installed globally, for every user on the system. You could then just install the missing package globally as well, using the dnf install python3-requests command.

Edit: Also please note my previous example to gather the container IP address is very bare-bones and would have to be adapted to your context, for instance targeting multiple containers, running in Swarm, etc.


I am using the load balancer from Nginx directly, with the default (round-robin, I guess) method.
I tried putting this in the config file:

  upstream backend {
{% for item in groups['webservers'] %}
        server {{ hostvars[item]['inventory_hostname'] }}:80;
{% endfor %}
  }
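(For two webservers named serveur1 and serveur2, as elsewhere in this thread, that template would render roughly to:)

```nginx
upstream backend {
    server serveur1:80;
    server serveur2:80;
}
```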

However, I get this error when I try to start nginx:

fatal: [load-balancer]: FAILED! => {"changed": false, "msg": "Unable to start service nginx: Job for nginx.service failed because the control process exited with error code. See \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}

So I checked the error logs in the container:

2023/11/03 15:33:12 [emerg] 5556#5556: host not found in upstream "serveur1:80" in /etc/nginx/nginx.conf:41

Nginx is telling you it can’t resolve ‘serveur1’. From the template you use to generate the upstream config, I see you use inventory names as backend servers; do your containers’ hostnames match their inventory names? What happens if you try to ping one of these containers by its hostname from your Nginx container?

Also, are there other servers in your upstream pool? If so, it would be odd for only one of them to be unresolvable.

The inventory hostnames are the same as the Docker hostnames. When I try pinging the containers using Ansible, it works. I have two servers, and the error also happens when I put “serveur2” above “serveur1”.

Hi Laure,

Sorry, that’s not what I meant. Ansible’s ping module uses your inventory hostnames to perform the action, which is probably not the same as your containers’ hostnames if you haven’t defined them while deploying the containers. I’m referring to: Services top-level element | Docker Docs (that’s for Docker Compose, but it’s much the same for docker run; the parameter varies depending on which tool you used to deploy your containers).
Also, FYI, the ping module is not an ICMP ping.

You could start by checking your backend containers’ actual hostname (which is not the same as the container name): docker exec -t serveur1 hostname, then see if you can ping it from your Nginx container: docker exec -t load-balancer ping <hostname>.
If both containers are on the same user-defined network, they should be able to talk to each other; note that Docker’s embedded DNS only resolves container names and hostnames on user-defined networks, not on the default bridge.


  • If the resulting hostname is not ‘serveur1’, it means you haven’t defined it at container deploy time, so you can redeploy your containers with the missing parameter (or fall back to using IP addresses, your initial question), then use it in your upstream template
  • If the hostname is correct and you can ping it from the load-balancer container, then your problem lies elsewhere
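If you do redeploy, a hypothetical Compose fragment pinning the hostname (via the hostname key) so it matches the inventory name could look like:

```yaml
services:
  serveur1:
    image: centos:7.9.2009
    hostname: serveur1   # matches the Ansible inventory name
```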

Hope it makes sense!