Execution Environment failed to run any job on awx-ee:latest and awx-ee:23.6.0

I recently provisioned AWX 23.6.0 on k8s.
However, I’m unable to run any jobs/templates.
Here is the error message I get each time I try to run any job:

 Failed to JSON parse a line from worker stream. Error: Expecting value: line 1 column 1 (char 0) Line with invalid JSON data: b''  

Version of running AWX and k8s:
AWX: version 23.6.0
kubectl: v1.25.4 (client) and v1.28.5 (server)

Could someone with more experience please point me in the right direction to resolve this issue? Please let me know if any additional information is needed.
Thanks in advance.

That error message comes from ansible-runner – it is getting corrupt data back from Receptor.
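For context, ansible-runner JSON-decodes each line it receives from the worker stream, and an empty payload (`b''`) fails with exactly the message in the original post. A minimal sketch reproducing that error:

```shell
# Reproduce the parse error: decoding an empty byte string as JSON
# fails with "Expecting value: line 1 column 1 (char 0)"
python3 -c "
import json
try:
    json.loads(b'')
except json.JSONDecodeError as e:
    print('Error:', e)
"
```

So the error doesn’t mean your job output is malformed JSON – it usually means the output stream was empty or cut off.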

There are a couple of things you can do to debug this:

In the UI, under Settings > Troubleshooting, you can disable cleanup paths and disable Receptor release work (I can’t remember the exact setting names)

Now when you run a job, the job pod should stay up after the job finishes.

Run kubectl get pods to list the pods.

Find the job pod (it has “automation” in its name) and fetch its logs:

kubectl logs <podname>

What is the output of that? Also, check that the pod terminated in a healthy state by running kubectl describe on it. I’m wondering if your pod is terminating before Receptor streams the full output back to the controller.
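The steps above can be sketched as follows – assuming AWX runs in a namespace called “awx” (adjust to yours); the pod name is a placeholder:

```shell
# List pods and find the job pod ("automation" in the name)
kubectl get pods -n awx

# Dump the job pod's logs (replace the pod name with yours)
kubectl logs automation-job-73-dn9jc -n awx

# Check how the pod terminated (look at State / Last State and Events)
kubectl describe pod automation-job-73-dn9jc -n awx
```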

Hope this gets you started on debugging this issue.

Seth

Note: remember to flip the troubleshooting options back afterwards so that files are cleaned up as jobs run.


Hello @fosterseth, thanks for the reply and for pointing me in that direction.

The pod log indicates an issue with how the pod reaches the proxy endpoint:

$ kubectl logs automation-job-73-dn9jc -n awx


Error from server: Get "https://192.168.20.160:10250/containerLogs/awx/automation-job-73-dn9jc/worker": proxyconnect tcp: proxy error from 127.0.0.1:6443 while dialing proxyIPAddress-here:8080, code 502: 502 Bad Gateway

Thanks again @fosterseth for your help on this, I really appreciate it.
The issue has been resolved. I had to add the proxy configuration to the file below and restart the service:

cat /etc/systemd/system/k3s.service.env 

HTTP_PROXY="http://proxy.int:8080"
HTTPS_PROXY="http://proxy.int:8080"
NO_PROXY="localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24"
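For anyone following along: after editing the environment file, the service needs to be restarted so k3s picks up the new proxy variables. A sketch, assuming k3s runs as the systemd unit named k3s:

```shell
# Reload systemd (harmless, and covers any unit-file changes) and
# restart k3s so it re-reads the environment file
sudo systemctl daemon-reload
sudo systemctl restart k3s
```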
