Issue with AWX EE and Connecting to vcenter

I recently installed and configured AWX on K3s using the example from GitHub - kurokobo/awx-on-k3s: An example implementation of AWX on single node K3s using AWX Operator, with easy-to-use simplified configuration with ownership of data and passwords. (Great reference, thank you!) Everything there seems to work as intended, and the EE is able to communicate with the AD domain over Kerberos/WinRM. The issue I am currently facing is that when trying to run a playbook using the community.vmware.vmware_guest module, I receive the error “Unable to connect to vCenter or ESXi API at VCENTER-FQDN on TCP/443: [SSL] unsupported (_ssl.c:1006)”

  • vCenter certificates were copied into the EE image during the ansible-builder build; validated that the certs are present within /etc/pki/ca-trust/extracted
  • Same error when using validate_certs: false in the playbook, so it doesn’t seem to be related to the certificates themselves
  • The same playbook runs successfully via ansible-engine/CLI

When running an openssl s_client test against vCenter from within the EE via podman run -it:

 -----END CERTIFICATE-----
No client certificate CA names sent
Peer signing digest: SHA512
Peer signature type: RSA
Server Temp Key: ECDH, prime256v1, 256 bits
SSL handshake has read 3286 bytes and written 392 bytes
Verification: OK
New, (NONE), Cipher is (NONE)
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : 0000
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1713305621
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: no 

When running the same test from the host itself:

-----END CERTIFICATE-----
 No client certificate CA names sent
Peer signing digest: SHA512
Peer signature type: RSA
Server Temp Key: ECDH, P-256, 256 bits
SSL handshake has read 3544 bytes and written 446 bytes
Verification: OK
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384 

So I’m not sure why I am getting “New, (NONE), Cipher is (NONE)” from within the container. If I try the same test against some random sites I get back normal results. It feels like I’m missing something installed/configured within the container.
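Since the actual failure surfaces in Python’s ssl module (_ssl.c) rather than in the openssl CLI, it may also help to check which OpenSSL build the EE’s Python is linked against, since that build is what enforces the system crypto policy. A minimal diagnostic sketch (run inside the container with the same interpreter the modules use):

```python
import ssl

# The OpenSSL build Python's ssl module is linked against; on an el9-based
# image this is OpenSSL 3.x, which enforces the system-wide crypto policy
# (including FIPS mode and its extended-master-secret requirement).
print(ssl.OPENSSL_VERSION)

# Defaults that library code picks up via create_default_context().
ctx = ssl.create_default_context()
print(ctx.minimum_version)   # lowest TLS version the context will negotiate
print(ctx.verify_mode)       # certificate verification behavior
```

If the version or the context defaults differ between the container and the host, that points at the container’s crypto stack rather than the playbook.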

Any help or push in the right direction would be appreciated, as I feel completely lost at this point :crazy_face:

Version Info
awx-operator 2.12.1
awx 23.8.1
awx-ee base_image: quay.io/ansible/awx-ee:23.8.1 (have tried different versions as well, same results)

vCenter: 7.0.3.01700
collection: community.vmware 4.2.0
RHEL 8.9: Running k3s, in FIPS mode

Ansible Version

ansible [core 2.16.6]
  config file = None
  configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.11/site-packages/ansible
  ansible collection location = /runner/requirements_collections:/runner/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.11.7 (main, Jan 22 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3.11)
  jinja version = 3.1.3
  libyaml = True

Play to reproduce issue

- name: Clone a virtual machine from a windows template and customize
  community.vmware.vmware_guest:
    hostname: '{{ TGT_VCENTER_HOSTNAME }}'
    username: '{{ priv_user }}'
    password: '{{  priv_user_password  }}'
    validate_certs: true
    state: powered-on
    folder: '{{ TGT_VCENTER_FOLDER }}'
    template: '{{ TEMPLATE_NAME }}'
    datastore: '{{ TGT_VCENTER_DATASTORE }}'
    datacenter: '{{ TGT_VCENTER_DATACENTER }}'
    name: '{{ TGT_HOSTNAME }}'
    cluster: '{{ TGT_VCENTER_CLUSTER }}'
    networks:
      - name: '{{ TGT_NETWORK_NAME }}'
        ip: '{{ TGT_IP }}'
        netmask: '{{ TGT_NETMASK }}'
        gateway: '{{ TGT_GATEWAY }}'
        connected: true
        start_connected: true
    wait_for_ip_address: true
    wait_for_ip_address_timeout: 600
    wait_for_customization: true
    wait_for_customization_timeout: 300
    customization:
      domain: '{{ TGT_DOMAIN }}'
      domainadmin: '{{ priv_user }}'
      domainadminpassword: '{{  priv_user_password  }}'
      joindomain: '{{ TGT_DOMAIN }}'
      hostname: '{{ TGT_HOSTNAME }}'
      existing_vm: true
      dns_servers:
        - 172.24.11.19
        - 172.25.11.19
      dns_suffix:
        - '{{ TGT_DOMAIN }}'
  delegate_to: localhost
  register: build_task

Actual Results

ansible-playbook [core 2.16.6]
  config file = None
  configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.11/site-packages/ansible
  ansible collection location = /runner/requirements_collections:/runner/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.11.7 (main, Jan 22 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3.11)
  jinja version = 3.1.3
  libyaml = True
No config file found; using defaults
[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var 
naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This 
feature will be removed from ansible-core in version 2.19. Deprecation warnings
 can be disabled by setting deprecation_warnings=False in ansible.cfg.
SSH password: 
Vault password: 
setting up inventory plugins
Loading collection ansible.builtin from 
host_list declined parsing /runner/inventory/hosts as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /runner/inventory/hosts inventory source with script plugin
statically imported: /runner/project/nix/roles/windows_build/tasks/subtasks/load_vars.yml
statically imported: /runner/project/nix/roles/windows_build/tasks/subtasks/vmware_build_and_customize.yml
Loading collection community.vmware from /usr/share/ansible/collections/ansible_collections/community/vmware
[WARNING]: file
/runner/project/nix/roles/windows_build/tasks/subtasks/adhoc.yml is empty and
had no tasks to include
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Loading callback plugin awx_display of type stdout, v2.0 from /usr/local/lib/python3.11/site-packages/ansible_runner/display_callback/callback/awx_display.py
Skipping callback 'awx_display', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: windows_build.yml ****************************************************
Positional arguments: nix/windows_build.yml
verbosity: 4
remote_user: VCENTER-USERNAME@DOMAIN
connection: ssh
ask_pass: True
become_method: sudo
tags: ('build',)
check: True
inventory: ('/runner/inventory/hosts',)
subset: localhost
extra_vars: ('@/runner/env/extravars',)
ask_vault_pass: True
forks: 5
1 plays in nix/windows_build.yml

PLAY [all] *********************************************************************

TASK [windows_build : include vaulted var] *************************************
task path: /runner/project/nix/roles/windows_build/tasks/subtasks/load_vars.yml:2
Trying secret <ansible.parsing.vault.PromptVaultSecret object at 0x7f2143ae4450> for vault_id=default
ok: [localhost] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [windows_build : include build_var.yml] ***********************************
task path: /runner/project/nix/roles/windows_build/tasks/subtasks/load_vars.yml:7
ok: [localhost] => {
    "ansible_facts": {
        "EPO_SERVER_HOSTNAME": "EPO-1",
        "RD_USERS": [
            null,
            null,
            null
        ],
        "TEMPLATE_NAME": null,
        "TGT_DOMAIN": "DOMAIN.COM",
        "TGT_DOMAIN_OU": null,
        "TGT_GATEWAY": "192.11.11.1",
        "TGT_HOSTNAME": "test-host1",
        "TGT_IP": "192.11.11.25",
        "TGT_NETMASK": "255.255.255.0",
        "TGT_NETWORK_NAME": "192.11.11.0",
        "TGT_TIMEZONE": "Eastern Standard Time",
        "TGT_VCENTER_CLUSTER": "DC1.DEV",
        "TGT_VCENTER_DATACENTER": "DC1",
        "TGT_VCENTER_DATASTORE": "DS1",
        "TGT_VCENTER_FOLDER": "/VCENTER/vm/DEV/IT",
        "TGT_VCENTER_HOSTNAME": "VCENTER.DOMAIN.COM"
    },
    "ansible_included_var_files": [
        "/runner/project/nix/roles/windows_build/vars/build_var.yml"
    ],
    "changed": false
}

TASK [windows_build : Clone a virtual machine from a windows template and customize] ***
task path: /runner/project/nix/roles/windows_build/tasks/subtasks/vmware_build_and_customize.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: 1000
<localhost> EXEC /bin/sh -c 'echo ~1000 && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /runner/.ansible/tmp `"&& mkdir "` echo /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609 `" && echo ansible-tmp-1713287265.5335696-24-264715555691609="` echo /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609 `" ) && sleep 0'
<localhost> Attempting python interpreter discovery
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'python3.12'"'"'; command -v '"'"'python3.11'"'"'; command -v '"'"'python3.10'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3.11 && sleep 0'
Using module file /usr/share/ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
<localhost> PUT /runner/.ansible/tmp/ansible-local-187lm6grv5/tmptm1pgaz2 TO /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609/AnsiballZ_vmware_guest.py
<localhost> EXEC /bin/sh -c 'chmod u+x /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609/ /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /runner/.ansible/tmp/ansible-tmp-1713287265.5335696-24-264715555691609/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_community.vmware.vmware_guest_payload_2dw51at_/ansible_community.vmware.vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py", line 784, in connect_to_api
    service_instance = connect.SmartConnect(**connect_args)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pyVim/connect.py", line 961, in SmartConnect
    supportedVersion = __FindSupportedVersion(protocol, host, port, path,
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pyVim/connect.py", line 775, in __FindSupportedVersion
    serviceVersionDescription = __GetServiceVersionDescription(
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pyVim/connect.py", line 699, in __GetServiceVersionDescription
    return __GetElementTree(protocol, server, port,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pyVim/connect.py", line 653, in __GetElementTree
    conn.request(method="GET", url=path, headers=headers)
  File "/usr/lib64/python3.11/http/client.py", line 1294, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib64/python3.11/http/client.py", line 1340, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.11/http/client.py", line 1289, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.11/http/client.py", line 1048, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.11/http/client.py", line 986, in send
    self.connect()
  File "/usr/lib64/python3.11/http/client.py", line 1466, in connect
    self.sock = self._context.wrap_socket(self.sock,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/ssl.py", line 517, in wrap_socket
    return self.sslsocket_class._create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/ssl.py", line 1108, in _create
    self.do_handshake()
  File "/usr/lib64/python3.11/ssl.py", line 1383, in do_handshake
    self._sslobj.do_handshake()
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "advanced_settings": [],
            "annotation": null,
            "cdrom": [],
            "cluster": "DC1.DEV",
            "convert": null,
            "customization": {
                "autologon": null,
                "autologoncount": null,
                "dns_servers": [
                    "192.11.11.19",
                    "192.12.11.19"
                ],
                "dns_suffix": [
                    "DOMAIN.COM"
                ],
                "domain": "DOMAIN.COM",
                "domainadmin": "VCENTER-USERNAME@DOMAIN.COM",
                "domainadminpassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "existing_vm": true,
                "fullname": null,
                "hostname": "test-host1",
                "hwclockUTC": null,
                "joindomain": "DOMAIN.COM",
                "joinworkgroup": null,
                "orgname": null,
                "password": null,
                "productid": null,
                "runonce": null,
                "script_text": null,
                "timezone": null
            },
            "customization_spec": null,
            "customvalues": [],
            "datacenter": "DC1",
            "datastore": "DS1",
            "delete_from_inventory": false,
            "disk": [],
            "encryption": {
                "encrypted_ft": null,
                "encrypted_vmotion": null
            },
            "esxi_hostname": null,
            "folder": "/VCENTER/vm/DEV/IT",
            "force": false,
            "guest_id": null,
            "hardware": {
                "boot_firmware": null,
                "cpu_limit": null,
                "cpu_reservation": null,
                "cpu_shares": null,
                "cpu_shares_level": null,
                "hotadd_cpu": null,
                "hotadd_memory": null,
                "hotremove_cpu": null,
                "iommu": null,
                "max_connections": null,
                "mem_limit": null,
                "mem_reservation": null,
                "mem_shares": null,
                "mem_shares_level": null,
                "memory_mb": null,
                "memory_reservation_lock": null,
                "nested_virt": null,
                "num_cpu_cores_per_socket": null,
                "num_cpus": null,
                "scsi": null,
                "secure_boot": null,
                "version": null,
                "virt_based_security": null,
                "vpmc_enabled": null
            },
            "hostname": "VCENTER.DOMAIN.COM",
            "is_template": false,
            "linked_clone": false,
            "name": "test-host1",
            "name_match": "first",
            "networks": [
                {
                    "connected": true,
                    "gateway": "192.11.11.1",
                    "ip": "192.11.11.25",
                    "name": "192.11.11.0",
                    "netmask": "255.255.255.0",
                    "start_connected": true
                }
            ],
            "nvdimm": {
                "label": null,
                "size_mb": 1024,
                "state": null
            },
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": 443,
            "proxy_host": null,
            "proxy_port": null,
            "resource_pool": null,
            "snapshot_src": null,
            "state": "powered-on",
            "state_change_timeout": 0,
            "template": "TEMPLATE1",
            "use_instance_uuid": false,
            "username": "VCENTER-USERNAME@DOMAIN.COM",
            "uuid": null,
            "validate_certs": true,
            "vapp_properties": [],
            "wait_for_customization": true,
            "wait_for_customization_timeout": 300,
            "wait_for_ip_address": true,
            "wait_for_ip_address_timeout": 600
        }
    },
    "msg": "Unable to connect to vCenter or ESXi API at VCENTER.DOMAIN.COM on TCP/443: [SSL] unsupported (_ssl.c:1006)"
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   



{
  "msg": "Unable to connect to vCenter or ESXi API at VCENTER-FQDN on TCP/443: [SSL] unsupported (_ssl.c:1006)",
  "exception": "  File \"/tmp/ansible_community.vmware.vmware_guest_payload_2dw51at_/ansible_community.vmware.vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 784, in connect_to_api\n    service_instance = connect.SmartConnect(**connect_args)\n                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/pyVim/connect.py\", line 961, in SmartConnect\n    supportedVersion = __FindSupportedVersion(protocol, host, port, path,\n                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/pyVim/connect.py\", line 775, in __FindSupportedVersion\n    serviceVersionDescription = __GetServiceVersionDescription(\n                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/pyVim/connect.py\", line 699, in __GetServiceVersionDescription\n    return __GetElementTree(protocol, server, port,\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/pyVim/connect.py\", line 653, in __GetElementTree\n    conn.request(method=\"GET\", url=path, headers=headers)\n  File \"/usr/lib64/python3.11/http/client.py\", line 1294, in request\n    self._send_request(method, url, body, headers, encode_chunked)\n  File \"/usr/lib64/python3.11/http/client.py\", line 1340, in _send_request\n    self.endheaders(body, encode_chunked=encode_chunked)\n  File \"/usr/lib64/python3.11/http/client.py\", line 1289, in endheaders\n    self._send_output(message_body, encode_chunked=encode_chunked)\n  File \"/usr/lib64/python3.11/http/client.py\", line 1048, in _send_output\n    self.send(msg)\n  File \"/usr/lib64/python3.11/http/client.py\", line 986, in send\n    self.connect()\n  File \"/usr/lib64/python3.11/http/client.py\", line 1466, in connect\n    self.sock = self._context.wrap_socket(self.sock,\n                
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/lib64/python3.11/ssl.py\", line 517, in wrap_socket\n    return self.sslsocket_class._create(\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/lib64/python3.11/ssl.py\", line 1108, in _create\n    self.do_handshake()\n  File \"/usr/lib64/python3.11/ssl.py\", line 1383, in do_handshake\n    self._sslobj.do_handshake()\n",
  "invocation": {
    "module_args": {
      "hostname": "VCENTER.DOMAIN.COM",
      "username": "VCENTER-USERNAME@DOMAIN.COM",
      "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
      "validate_certs": true,
      "state": "powered-on",
      "folder": "/VCENTER/vm/DEV/IT",
      "template": "TEMPLATE1",
      "datastore": "DS1",
      "datacenter": "DC1",
      "name": "test-host1",
      "cluster": "DC1.DEV",
      "networks": [
        {
          "name": "192.11.11.0",
          "ip": "192.11.11.25",
          "netmask": "255.255.255.0",
          "gateway": "192.11.11.1",
          "connected": true,
          "start_connected": true
        }
      ],
      "wait_for_ip_address": true,
      "wait_for_ip_address_timeout": 600,
      "wait_for_customization": true,
      "wait_for_customization_timeout": 300,
      "customization": {
        "domain": "DOMAIN.COM",
        "domainadmin": "VCENTER-USERNAME@DOMAIN.COM",
        "domainadminpassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
        "joindomain": "DOMAIN.COM",
        "hostname": "test-host1",
        "existing_vm": true,
        "dns_servers": [
          "192.11.11.19",
          "192.12.11.19"
        ],
        "dns_suffix": [
          "DOMAIN.COM"
        ],
        "autologon": null,
        "autologoncount": null,
        "fullname": null,
        "hwclockUTC": null,
        "joinworkgroup": null,
        "orgname": null,
        "password": null,
        "productid": null,
        "runonce": null,
        "script_text": null,
        "timezone": null
      },
      "port": 443,
      "is_template": false,
      "customvalues": [],
      "advanced_settings": [],
      "name_match": "first",
      "use_instance_uuid": false,
      "disk": [],
      "nvdimm": {
        "size_mb": 1024,
        "state": null,
        "label": null
      },
      "cdrom": [],
      "hardware": {
        "boot_firmware": null,
        "cpu_limit": null,
        "cpu_reservation": null,
        "hotadd_cpu": null,
        "hotadd_memory": null,
        "hotremove_cpu": null,
        "vpmc_enabled": null,
        "max_connections": null,
        "mem_limit": null,
        "cpu_shares_level": null,
        "mem_shares_level": null,
        "cpu_shares": null,
        "mem_shares": null,
        "mem_reservation": null,
        "memory_mb": null,
        "memory_reservation_lock": null,
        "nested_virt": null,
        "num_cpu_cores_per_socket": null,
        "num_cpus": null,
        "scsi": null,
        "secure_boot": null,
        "version": null,
        "virt_based_security": null,
        "iommu": null
      },
      "encryption": {
        "encrypted_vmotion": null,
        "encrypted_ft": null
      },
      "force": false,
      "state_change_timeout": 0,
      "linked_clone": false,
      "vapp_properties": [],
      "delete_from_inventory": false,
      "proxy_host": null,
      "proxy_port": null,
      "annotation": null,
      "uuid": null,
      "guest_id": null,
      "esxi_hostname": null,
      "snapshot_src": null,
      "resource_pool": null,
      "customization_spec": null,
      "convert": null
    }
  },
  "_ansible_no_log": false,
  "changed": false,
  "_ansible_delegated_vars": {
    "ansible_host": "localhost",
    "ansible_port": null,
    "ansible_user": "VCENTER-USERNAME@DOMAIN.COM",
    "ansible_connection": "local"
  }
}

Can you share your ansible-builder config? This sounds like a dependency problem with your EE.

Sure. I appreciate you taking a look.

requirements.yml

---
collections:
  - name: community.general
    version: 8.4.0
    source: https://galaxy.ansible.com

  - name: community.vmware
    version: 4.2.0
    source: https://galaxy.ansible.com

  - name: ansible.windows
    version: 2.2.0
    source: https://galaxy.ansible.com

bindep.txt

gcc
python3.11-devel.x86_64
libxml2-devel
krb5-devel
krb5-libs
krb5-workstation
openssh-clients
sshpass
git-core
findutils
which

requirements.txt

pywinrm
pywinrm[kerberos]
requests
pykerberos
certifi
cryptography

Note: I just ran a test adding certifi and cryptography and had the same results

execution-environment.yml

# Refer to the Ansible Builder documentation for details on each option:
# https://ansible.readthedocs.io/projects/builder/en/latest/definition/
version: 3
# build_arg_defaults:
#   ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: "--pre"
#   ANSIBLE_GALAXY_CLI_ROLE_OPTS: "--no-deps"
images:
  base_image:
    name: quay.io/ansible/awx-ee:23.8.1
options:
  # container_init:
  #   package_pip: dumb-init==1.2.5
  #   entrypoint: '["/opt/builder/bin/entrypoint", "dumb-init"]'
  #   cmd: '["bash"]'
  package_manager_path: /usr/bin/dnf
  # relax_password_permissions: true
  # skip_ansible_check: false
  # workdir: /runner
  # user: 1000
dependencies:
  python_interpreter:
    package_system: python3.11
    python_path: /usr/bin/python3.11
  ansible_core:
    package_pip: ansible-core~=2.15
  ansible_runner:
    package_pip: ansible-runner~=2.3.6
  system: dependencies/bindep.txt
  python: dependencies/requirements.txt
  galaxy: dependencies/requirements.yml
additional_build_files:
  - src: files/ansible.cfg
    dest: configs
additional_build_steps:
  prepend_base:
    - COPY _build/configs/3817c297.0 /usr/share/pki/ca-trust-source/anchors
    - COPY _build/configs/3817c297.1 /usr/share/pki/ca-trust-source/anchors
    - COPY _build/configs/41f91347.0 /usr/share/pki/ca-trust-source/anchors
    - COPY _build/configs/b3814b1c.0 /usr/share/pki/ca-trust-source/anchors
    - COPY _build/configs/ec2fe91d.0 /usr/share/pki/ca-trust-source/anchors
    - COPY _build/configs/f3733cc2.0 /usr/share/pki/ca-trust-source/anchors
    - RUN update-ca-trust
  append_base:
    # - RUN echo "Additional steps for append_base"
    - RUN alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 0
  prepend_galaxy:
    # - RUN echo "Additional steps for prepend_galaxy"
    - ADD _build/configs/ansible.cfg ~/.ansible.cfg
  # append_galaxy:
  #   - RUN echo "Additional steps for append_galaxy"
  # prepend_builder:
  #   - RUN echo "Additional steps for prepend_builder"
  # append_builder:
  #   - RUN echo "Additional steps for append_builder"
  #prepend_final: |
  #  - RUN ls -la /opt/builder/bin/entrypoint
  #   - RUN echo "Additional steps for prepend_final"
  append_final:
    - RUN chmod 777 /opt/builder/bin/entrypoint
  #   - RUN echo "Additional steps for append_final"

Ah, you’re extending the awx-ee image and trying to switch it to python3.11 when it currently uses python3.9. Also, I think you may want ansible-core~=2.15.0 rather than ~=2.15, since ~=2.15 permits any 2.x release from 2.15 upward and you could end up with >=2.16.

I suggest either starting from scratch and building python3.11 by default, using awx-ee’s execution-environment.yml as a reference, or sticking with python3.9 if you want to continue extending the image until they upgrade python in awx-ee upstream. The way you’re doing it now, you will have both python3.9 and python3.11 installed side-by-side inside your EE, but they won’t share the dependencies installed in one or the other. So your python3.11 is missing some of awx-ee’s dependencies:

python: |
    git+https://github.com/ansible/ansible-sign
    ncclient
    paramiko
    pykerberos
    pyOpenSSL
    pypsrp[kerberos,credssp]
    pywinrm[kerberos,credssp]
    toml
    pexpect>=4.5
    python-daemon
    pyyaml
    six
    receptorctl
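The split described above can be confirmed from inside the EE by asking each interpreter what it can import: a module installed into python3.9’s site-packages is invisible to python3.11. A quick sketch (the package names here are just examples):

```python
import importlib.util
import sys

# Which interpreter is running, and which packages can it see?
# Run this once with python3.9 and once with python3.11 inside the EE;
# the results will differ because each interpreter has its own site-packages.
print(sys.executable)
for pkg in ("pyVmomi", "requests", "winrm"):
    spec = importlib.util.find_spec(pkg)
    print(f"{pkg}: {'found' if spec else 'MISSING for this interpreter'}")
```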

Edit: I also wouldn’t use alternatives since this is supposed to be el9 based now.


I will give this a try in a bit and report back. I knew I was doing something silly :blush:


@Denney-tech

Ok, so I was able to rebuild using awx-ee/execution-environment.yml at devel · ansible/awx-ee · GitHub, just leaving it as is for now with python3.9. Additionally, I trimmed down some of the collections that I’m not using and copied in my certificates.

This looks to be a lot cleaner in general, and I obviously have quite a bit to learn about everything :slight_smile: I certainly appreciate your input.

Unfortunately I’m still seeing the same behavior running openssl s_client, but… I noticed something I hadn’t noticed before, because I had been sanitizing my output from the openssl s_client test:

verify return:1
807BB2CF337F0000:error:1C8000E9:Provider routines:kdf_tls1_prf_derive:ems not enabled:providers/implementations/kdfs/tls1_prf.c:200:
807BB2CF337F0000:error:0A08010C:SSL routines:tls1_PRF:unsupported:ssl/t1_enc.c:83:

Looks to be related to “RHEL 9 clients with FIPS enabled are experiencing communication issues with Satellite 6.11” (Red Hat Customer Portal). I’m guessing this is a case of “RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS”.

For testing, I need to look into how to disable FIPS within the container; just adding something like the below did not seem to actually disable it:

additional_build_steps:
  prepend_base:
    - RUN /usr/bin/fips-mode-setup --disable 

Any ideas how to disable it during the build? Alternatively, I may need to try building from an el8 image, where this is most likely not an issue.

One thing I didn’t realize was that FIPS mode is being inherited from the host, so just disabling it inside the container may not be as trivial. Ultimately, I didn’t want to disable it permanently anyway.
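The inheritance makes sense once you remember that containers share the host kernel, and the kernel-level FIPS flag is what OpenSSL consults. A minimal check that can be run inside the container to confirm what it actually sees (standard path on Linux):

```python
from pathlib import Path

# The kernel-level FIPS flag; a container sees the host's value because it
# shares the host kernel, which is why running fips-mode-setup --disable at
# image build time has no effect.
flag = Path("/proc/sys/crypto/fips_enabled")
if flag.exists():
    print("kernel FIPS mode:", flag.read_text().strip())  # "1" on a FIPS host
else:
    print("no FIPS flag exposed (non-Linux kernel?)")
```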

When building a stream9 EE, I was able to get past the openssl s_client test with the following:

  prepend_base:
    - RUN update-crypto-policies --set FIPS:NO-ENFORCE-EMS

sourced from: Reddit thread: RHEL9 and FIPS breaking SSSD

Though after that I was experiencing some other SSL-related issues, so I moved on to testing on el8.

In the end I was able to source an ‘ee-minimal-rhel8’ image that I could extend with the collections and python modules I needed, and that worked perfectly.

I want to revisit building a full image from scratch, but for now this has gotten me up and running again. Ultimately the issue was el9 in FIPS mode and its lack of support for TLS 1.2 without EMS.

I feel like I learned a lot in the process so that’s always good…

Appreciate the guidance provided. Cheers!
