Community.okd.k8s and kubernetes.core.k8s hang/get stuck indefinitely

I was thinking kubernetes>=24.2.0 might be the issue this time, not jsonpatch. Something in the kubernetes pip package is calling yaml.load() and failing the module, which is why I thought that might be the problem.

Hi, that was a good point. I’ve now tried with three different kubernetes versions:

  • kubernetes-22.6.0
  • kubernetes-24.2.0
  • kubernetes-29.0.0

and the issue persisted: the playbook still hangs at community.okd.k8s, unfortunately :confused:

@Denney-tech I’d like to mention again that I think the issue is with ansible-core, as that is the only package I couldn’t install an older version of.

Okay, so I split your earlier error output on its newlines. It’s the python3.8/ansible 2.13.3 output, so it wouldn’t hurt to double-check what you’re getting now.

On to the output:

    /usr/lib/python3/dist-packages/kubernetes/config/ YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read for full details.
    Traceback (most recent call last):
      File "/home/user1/.ansible/tmp/ansible-tmp-1712652899.1646488-422-100298754495323/", line 107, in <module>
      File "/home/user1/.ansible/tmp/ansible-tmp-1712652899.1646488-422-100298754495323/", line 99, in _ansiballz_main
        invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
      File "/home/user1/.ansible/tmp/ansible-tmp-1712652899.1646488-422-100298754495323/", line 47, in invoke_module
        runpy.run_module(mod_name='', init_globals=dict(_module_fqn='', _modlib_path=modlib_path),
      File "/usr/lib/python3.8/", line 207, in run_module
        return _run_module_code(code, init_globals, run_name, mod_spec)
      File "/usr/lib/python3.8/", line 97, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/usr/lib/python3.8/", line 87, in _run_code
        exec(code, run_globals)
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 323, in <module>
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 314, in main
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 44, in __init__
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 49, in __init__
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 352, in get_api_client
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 246, in wrapper
      File "/tmp/ansible_community.okd.k8s_payload_j5r7uimp/", line 259, in create_api_client
    NameError: name 'k8sdynamicclient' is not defined

The last line, I think, is the important one: k8sdynamicclient was introduced in version 2.0.0 of the kubernetes.core collection.

If you run `ansible-galaxy collection list kubernetes.core`, do you have multiple versions listed? If you have anything older than 2.0.0, remove them and try again.
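In case it helps, here is a sketch of that check (the paths below are the usual defaults, so adjust them for your setup). Note that `ansible-galaxy` has no `collection remove` subcommand, so a stale copy has to be deleted by hand:

```shell
# List every copy of kubernetes.core that Ansible can see.
ansible-galaxy collection list kubernetes.core

# Collections typically live under one of these default paths (yours may differ):
#   ~/.ansible/collections/ansible_collections/
#   /usr/share/ansible/collections/ansible_collections/
# If a copy older than 2.0.0 shows up, delete its directory and reinstall:
rm -rf ~/.ansible/collections/ansible_collections/kubernetes/core
ansible-galaxy collection install kubernetes.core
```

Run the list command again afterwards to confirm only one version remains on each path.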


Thank you for the comment. I managed to fix the issue. The token variable needed to be `okd_auth.openshift_auth.api_key` and referenced like this:

    - name: Create a k8s namespace
      community.okd.k8s:
        api_key: "{{ okd_auth.openshift_auth.api_key }}"

This was certainly an error I needed to fix, but it wasn’t the root cause. The root cause was me not setting up the environment properly. I’ve now added the ansible_python_interpreter variable to point to the Python binary inside the virtual environment, removed all the ‘supporting’ Python packages that were installed outside of the virtual environment, and that basically fixed it for me.
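For reference, a minimal sketch of that interpreter pin (the venv path here is a placeholder, not my actual path):

```yaml
# group_vars/all.yml (or host/inventory vars) -- example path, adjust to your venv
ansible_python_interpreter: /path/to/venv/bin/python
```

With this set, the modules run under the venv’s interpreter and resolve their Python dependencies from the venv’s site-packages instead of the system ones.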

Some Python packages were not installed within the virtual environment, so Ansible was picking them up from outside it, and that, apparently, is what caused the issue.
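If anyone else hits this, a quick way to spot the mismatch (a generic diagnostic sketch, not something from the original playbook) is to run a script like this with the same interpreter you point ansible_python_interpreter at, and check that both reported paths live inside the venv:

```python
import importlib.util
import sys


def interpreter_report(module_name="kubernetes"):
    """Report which interpreter is running and where a module resolves from."""
    spec = importlib.util.find_spec(module_name)
    return {
        "interpreter": sys.executable,                 # should be the venv's python
        "module_path": spec.origin if spec else None,  # should be under the venv's site-packages
    }


if __name__ == "__main__":
    print(interpreter_report())
```

If `module_path` points at something like `/usr/lib/python3/dist-packages/...` while you expected the venv, the module is being picked up from outside the environment.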

Apologies for taking up your time and thank you for your support.


Glad you figured it out. We definitely went error-blind troubleshooting your Python/collection dependency issues and missed that api_key was using a literal string instead of the variable.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.