I’m trying to load a dynamic inventory from ServiceNow using https://galaxy.ansible.com/servicenow/servicenow. I’m almost there, but we use on-prem ServiceNow, not the cloud service, so we have a custom CA signer. I need to trust that CA. My error is:
Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain
I can repro that error with pysnow (the servicenow collection uses pysnow under the hood) by logging on to the terminal of my EE node and running a small query.
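A minimal sketch of that kind of query, using pysnow's Client/resource API as I understand it. The host and credentials below are placeholders, not our real instance; the TLS handshake fails before any data moves, so any table works for the repro:

```python
import pysnow

# Placeholder host and credentials -- the TLS handshake against the
# custom-CA-signed cert fails before the query is ever sent.
client = pysnow.Client(
    host="snow.example.internal",
    user="svc_account",
    password="********",
)
incident = client.resource(api_path="/table/incident")
incident.get(query={}, limit=1)  # raises requests.exceptions.SSLError here
```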
I get to work on this in 15 minute chunks between interruptions, just like all of us I’m sure. So, I now know I can fix this problem by creating a Container Group with the signing cert on the file system and referencing it with a bash environment variable like:
export REQUESTS_CA_BUNDLE=/path/cert.pem
I suspect I can fix it by putting the signer in /usr/share/pki/ca-trust-source/anchors/. I’m supposed to run update-ca-trust after doing that, but I bet that gets run when the node spins up. So, that will be my plan going forward.
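One caveat I'd hedge on: a pip-installed requests defaults to certifi's bundled CA list rather than the system trust store (RHEL's python3-requests RPM is patched to use the system bundle, but a requests built into the EE may not be), so the anchors/ approach only helps if requests ends up pointing at the system bundle; REQUESTS_CA_BUNDLE sidesteps the question entirely. A quick stdlib check of where this interpreter's OpenSSL looks by default:

```python
import ssl

# Where OpenSSL looks for trust anchors when no explicit CA bundle is
# given; on RHEL the system store maintained by update-ca-trust feeds
# the openssl_cafile path.
paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile_env)  # env var override, usually SSL_CERT_FILE
print(paths.openssl_cafile)      # e.g. /etc/pki/tls/cert.pem on RHEL
print(paths.openssl_capath)
```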
I’m having the same issue when AWX synchronizes a project with collections. My company’s firewall does MITM decryption, returning an ad-hoc certificate that Python does not recognize.
Only if the host is also Red Hat based: install the certificates on the k8s host, as you described. I suspect this will work because the job settings mount two pki directories, presumably for precisely this use case.
Hopefully, this will help.
One related question: the title mentions an ‘on-prem ServiceNow instance’. Is it possible to deploy ServiceNow locally?
Yes, we have an on-prem ServiceNow installation. It’s highly customized, slow, and brutal to modify. I’m not sure I’d drive anyone down the same road and I’m not sure we are not just grandfathered in from a decade or more ago. Shrug.
I think my certificates are on the Red Hat host already, so I probably just need to let Python know they’re there, but it will have to wait until Tuesday at the earliest. Today got yoinked out from under me and Monday is already spoken for. We all know the drill.
See “Appendix: Trust custom CA for jobs” section on this page.
I see you’ve already considered Method 1, but yes, it’s not applicable for inventory sync, so you can go Method 2 or 3.
I got Ansible Builder to work! In the same week I started! I’m so happy. I have to start with a bare metal RHEL 8 box, so I needed to find a path to getting stuff to work, then figure out how to walk it. The final path of many was to build Python from source with ZLib and SSL modules, build Docker, and finally build Ansible Builder. Success.
New hurdle. Running the Inventory Source Sync on the new EE tells me I need to install the “requests” module into the Python on the EE. I assume I need to do that in one of the build documents. Off I go.
[WARNING]: * Failed to parse /runner/project/servicenow.yml with auto plugin:
Please install "requests" Python module as this is required for ServiceNow
dynamic inventory plugin.
#29 [final 11/13] RUN pip3 install netaddr
#29 0.642 Requirement already satisfied: netaddr in /usr/local/lib/python3.8/site-packages (0.8.0)
#29 0.755 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
#29 DONE 0.9s
#30 [final 12/13] RUN pip3 install requests
#30 0.682 Requirement already satisfied: requests in /usr/lib/python3.8/site-packages (2.22.0)
#30 0.690 Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3.8/site-packages (from requests) (3.0.4)
#30 0.690 Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3.8/site-packages (from requests) (2.8)
#30 0.691 Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3.8/site-packages (from requests) (1.25.7)
#30 0.797 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
#30 DONE 0.9s
Running with the new EE resulted in the same output.
I logged on to the automation job node, and tried looking for the modules and saw this:
sh-4.4$ pip install requests
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: requests in /usr/lib/python3.8/site-packages (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3.8/site-packages (from requests) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3.8/site-packages (from requests) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3.8/site-packages (from requests) (1.25.7)
The error says the module needs to be added to the controller, not the automation job node. I’m assuming this means I need to rebuild the controller. I figure the answer has to live somewhere inside:
I’m still stumped. When I go to the terminal of my awx-operator-controller-manager, I can do this:
sh-4.4$ python3
Python 3.8.13 (default, Jun 14 2022, 17:49:07)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-13)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> import netaddr
>>>
Does that not mean the controller has the requests module installed? If it were not there, the import would have failed, right?
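One thing that stands out in the build log above: netaddr reports itself in /usr/local/lib/python3.8/site-packages while requests sits in /usr/lib/python3.8/site-packages, which smells like two Python installations in the image. A quick check, run inside the job pod, of which interpreter is active and where requests would actually import from:

```python
import importlib.util
import sys

# Show which interpreter this is and where 'requests' resolves from.
# An image with two Pythons can satisfy 'pip install' under one while
# ansible-inventory runs under the other.
print(sys.executable)
spec = importlib.util.find_spec("requests")
print(spec.origin if spec else "requests is NOT importable here")
```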
This is the job output leading up to the error:
ansible-inventory [core 2.12.5.post0]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /runner/requirements_collections:/runner/.ansible/collections:/usr/share/ansible/collections:/usr/share/automation-controller/collections
executable location = /usr/local/bin/ansible-inventory
python version = 3.8.12 (default, Sep 21 2021, 00:10:52) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /runner/project/servicenow.yml as it did not pass its verify_file() method
script declined parsing /runner/project/servicenow.yml as it did not pass its verify_file() method
Loading collection servicenow.servicenow from /runner/requirements_collections/ansible_collections/servicenow/servicenow
toml declined parsing /runner/project/servicenow.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /runner/project/servicenow.yml with auto plugin:
Please install "requests" Python module as this is required for ServiceNow
dynamic inventory plugin.
Aargh. Based on GitHub, it looks like servicenow.servicenow is archived and servicenow.itsm is the live collection? But based on the Ansible Collections document, servicenow.servicenow is still the thing?
New error. I think this one means the parsing was successful, but the connection took too long. That’s not a surprise. Our ServiceNow instance is slower than Christmas, so I’ll need to figure out where to extend the timeout.
Loading collection servicenow.itsm from /runner/requirements_collections/ansible_collections/servicenow/itsm
toml declined parsing /runner/project/servicenow.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /runner/project/servicenow.yml with auto plugin:
The read operation timed out
Thank you, again, @kurokobo, for the timely help. I’d still be in a state of rapid balding without you.
OK. I got live data back from the ServiceNow host, but it was incomplete. That’s a lot further than I was last week. I suspect the data were truncated by the timeout, but it’s hard to say. I need to figure out how to extend the timeout. It looks like it should be a variable I can set in the custom Credential type, but I can only set strings there, and the code complains it needs a number. I may be able to set it in the environment of AWX itself. I am burnt for the day. Back on Tuesday.
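On the string-vs-number complaint: credential injectors hand everything to the job as environment strings, so whatever consumes the timeout has to coerce it before use. A sketch of the kind of guard I mean; the SN_TIMEOUT variable name here is an assumption for illustration, so check the servicenow.itsm inventory plugin docs for the real name:

```python
import os

# Injected credential fields arrive as strings; coerce before use.
# "SN_TIMEOUT" is an assumed variable name, not confirmed.
raw = os.environ.get("SN_TIMEOUT", "120")
try:
    timeout = float(raw)
except ValueError:
    raise SystemExit(f"timeout must be a number, got {raw!r}")
print(timeout)
```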