AWX Openshift route not working

Hi.
I installed AWX with Helm on OpenShift. Everything looks fine and all pods are up and running, but I can't access the AWX console: it gives me a 503 Service Unavailable error. I can't see any issues in the logs. Any idea?

Thank you

Hi,

An HTTP 503 means an issue on the backend service, so it's most likely not a routing issue AFAICT.

Have you tried port-forwarding directly to your pod? (Or a proxy, which is more or less the same.)
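For reference, a direct port-forward test could look like this (a sketch; the namespace, pod name, and port are assumptions based on a default AWX install, adjust to match yours):

```shell
# Forward a local port straight to the AWX web pod, bypassing the route
# ("awx" namespace and pod name are assumptions; find yours with
#  kubectl get pods -n awx)
kubectl port-forward -n awx pod/<yourAWXWebPodName> 8052:8052

# In another terminal, hit the AWX API ping endpoint directly:
curl -Iv http://127.0.0.1:8052/api/v2/ping/
```

If this works while the route still returns 503, the problem is upstream of the pod; if it fails the same way, the pod itself isn't serving.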

I cant see any issues in logs

Which logs? Is there anything else? Do you have logs on your ingress controller (you mentioned routes, so it should be HAProxy)? Some info here.
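On OpenShift, checking the router logs might look like this (assuming the default router deployment name; adjust if your cluster differs):

```shell
# Tail the OpenShift ingress router (HAProxy) logs and look for AWX traffic
# ("router-default" is the default deployment name; yours may differ)
oc logs -n openshift-ingress deployment/router-default --tail=200 | grep -i awx
```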

Could you also post your AWX pod, service and ingress / route configs ?

Service:

Route type is Edge and uses our default ingress TLS certificates.

awx-web pod logs:

1:C 12 Oct 2023 17:18:27.830 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 12 Oct 2023 17:18:27.830 * Redis version=7.2.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 12 Oct 2023 17:18:27.830 * Configuration loaded
1:M 12 Oct 2023 17:18:27.830 * monotonic clock: POSIX clock_gettime
1:M 12 Oct 2023 17:18:27.831 * Running mode=standalone, port=0.
1:M 12 Oct 2023 17:18:27.831 * Server initialized
1:M 12 Oct 2023 17:18:27.831 * Ready to accept connections unix

awx-operator-controller-manager pod logs:

Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
W1012 17:17:21.682328       1 kube-rbac-proxy.go:152] 
==== Deprecation Warning ======================

Insecure listen address will be removed.
Using --insecure-listen-address won't be possible!

The ability to run kube-rbac-proxy without TLS certificates will be removed.
Not using --tls-cert-file and --tls-private-key-file won't be possible!

For more information, please go to https://github.com/brancz/kube-rbac-proxy/issues/187

===============================================

		
I1012 17:17:21.682391       1 kube-rbac-proxy.go:272] Valid token audiences: 
I1012 17:17:21.682429       1 kube-rbac-proxy.go:363] Generating self signed cert as no cert is provided
I1012 17:17:22.452094       1 kube-rbac-proxy.go:414] Starting TCP socket on 0.0.0.0:8443
I1012 17:17:22.452794       1 kube-rbac-proxy.go:421] Listening securely on 0.0.0.0:8443

There are no AWX-related logs in the openshift-ingress pod logs.

Pods:

Disclaimer: I don’t use AWX and don’t know much about it apart from a functional standpoint.

Your service (awx-service) is apparently routing to your pod (awx-web) on port 8052/TCP; is this port listening on the pod? I also suggested earlier testing direct access to your pod so you can rule out any potential upstream routing issue; can you try that and post a curl -Iv 127.0.0.1:<yourLocalPort> trace?
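Another quick check for a 503 is whether the service actually has ready endpoints behind it (service and namespace names are assumptions, adjust to yours):

```shell
# List the endpoints backing the AWX service; an empty ENDPOINTS column
# means no ready pod matches the service selector, which would explain
# the router returning 503
kubectl get endpoints awx-service -n awx -o wide
```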

awx-web logs are only showing a Redis instance starting but nothing else. I'm not sure what to expect here, but the AWX API container should be running in this pod; could you run these commands and paste the outputs here?: kubectl get pod -n awx <yourAWXPodNameHere> -o json | jq '.spec.containers[] | select(.name == "awx-web")' and kubectl get pod -n awx <yourAWXPodNameHere> -o json | jq '.status.containerStatuses[] | select(.name == "awx-web")'

You posted the operator logs, though I have no idea what it does (apart from deploying the AWX stack in the first place) or how it works. I see mentions of kube-rbac-proxy in the logs but I don't know what it is. I suggest focusing on the awx-web pod access issue and leaving the operator aside for a while.

Also, not to scorn you or anything, but could you post the YAML config of your objects instead of GUI screenshots? There would be more pertinent information in there, like selectors and status. For now, could you post your route spec?

I found the issue. The problem was port 80. OpenShift uses the restricted SCC, which prevents binding to ports < 1024. The solution was to grant the anyuid SCC to the service account used by the awx-web pod.
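For anyone hitting the same thing, granting the SCC might look like this (the service account name and namespace are assumptions; check what your awx-web pod actually runs as first):

```shell
# Find out which service account the awx-web pod uses
# (pod name and "awx" namespace are assumptions)
oc get pod <yourAWXWebPodName> -n awx -o jsonpath='{.spec.serviceAccountName}'

# Grant the anyuid SCC to that service account so the pod can bind port 80
oc adm policy add-scc-to-user anyuid -z <thatServiceAccount> -n awx
```

Note that anyuid is broader than strictly needed to bind a low port; a more targeted alternative on recent OpenShift versions is a custom SCC or keeping AWX on a port >= 1024 and letting the route/service handle port 80.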


I see your awx-web container has 2 restarts; can you confirm that it's not in a crash loop?

@TheRealHaoLiu I performed a manual restart of the pod. I found a solution, so I will mark this topic as “resolved” :slight_smile: .


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.