We have our own GitHub Enterprise instance hosted on-premises. We are able to resolve public GitHub and sync projects from it; however, we are unable to resolve our own instance. We have to use our own GitHub due to policy.
"stderr": "fatal: unable to access 'https://github.company.com/My-User-Name-Here/test.git/': Could not resolve host: github.company.com\n",
@i2linuxz
If it’s okay for you to access your private GitHub host through a proxy (that is, to offload name resolution to the proxy server), you can add proxy settings to AWX by opening the Settings > Jobs settings page in the AWX UI and modifying the Extra Environment Variables block in JSON format (see my guide for details):
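As a sketch, the Extra Environment Variables block might look like the following. The proxy URL and port here are placeholders; substitute your actual proxy server:

```json
{
  "HTTP_PROXY": "http://proxy.example.com:3128",
  "HTTPS_PROXY": "http://proxy.example.com:3128",
  "NO_PROXY": "localhost,127.0.0.1"
}
```

Note that github.company.com should not be listed in NO_PROXY here, since the point is to let the proxy resolve and reach it.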
If you want to use a hosts file instead of offloading name resolution to a proxy server, one handy way is to run dnsmasq on the K3s host, which allows the contents of a hosts file to be served as DNS records (see my guide for details).
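As a minimal sketch (not from the guide itself; the upstream resolver, file path, and IP address are placeholders), a dnsmasq configuration for this could look like:

```
# /etc/dnsmasq.conf: answer queries from a dedicated hosts file,
# forward everything else to an upstream resolver
no-resolv
server=8.8.8.8                      # placeholder upstream resolver
addn-hosts=/etc/hosts.dnsmasq       # extra hosts file served as DNS records

# /etc/hosts.dnsmasq (placeholder IP for your GitHub Enterprise host):
# 192.168.10.20 github.company.com
```

With this in place, any resolver pointed at the dnsmasq instance can look up github.company.com.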
If you do not want to use dnsmasq, a slightly trickier way is to add new entries to the ConfigMap used by CoreDNS (K3s’ DNS service) as an extra hosts file:
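For example (the IP address and node name below are placeholders), K3s ships a coredns ConfigMap whose NodeHosts key is loaded by CoreDNS’ hosts plugin, so you can append your entry there:

```yaml
# kubectl -n kube-system edit configmap coredns
# then append your host to the NodeHosts key:
data:
  NodeHosts: |
    10.0.0.50 k3s-server               # existing node entry, keep as-is
    192.168.10.20 github.company.com   # added entry (placeholder IP)
```

After editing, restart the CoreDNS pods (as shown later in this thread) so the change takes effect.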
Hmm, what exactly does “it is gone” mean? Which settings file did you change, and how?
As the help text for this option describes, these are additional environment variables for project updates, so you can configure a proxy through these variables.
Additional environment variables set for playbook runs, inventory updates, project updates, and notification sending.
[root@hostname~]# kubectl -n kube-system delete pod -l k8s-app=kube-dns
pod "coredns-597584b69b-dhggq" deleted
pod "coredns-597584b69b-64rgt" deleted
<cursor hanging here>
I don’t know why there are two pods for CoreDNS; on a single-node K3s, there should be only one. Is there a problem with your cluster? What does kubectl -n kube-system get pod,deployment,service output?
I see. So if you used extra_settings, it is expected that you will not be able to edit the environment variables in the GUI. If your settings are correct, your project updates should now be attempted via the proxy.
You can also verify that name resolution through dnsmasq works with the following command: kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup demo.example.com
Remove extra_settings from your base/awx.yaml and apply it again with kubectl apply -k base. Alternatively, you can edit the AWX CR directly with kubectl -n awx edit awx awx.
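For reference, the extra_settings block to remove would be under spec in the AWX CR; it might look something like this (the setting and proxy URL are hypothetical examples, not taken from your file):

```yaml
spec:
  extra_settings:
    - setting: AWX_TASK_ENV
      value: "{'HTTPS_PROXY': 'http://proxy.example.com:3128'}"
```

Deleting the whole extra_settings list restores the ability to manage these values from the GUI.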
After applying (or editing), wait until the Operator starts redeploying and completes: kubectl -n awx logs -f deployments/awx-operator-controller-manager
Try changing the name of the pod: kubectl run -it --rm --restart=Never <POD_NAME> --image=busybox:1.28 -- nslookup <FQDN_TO_BE_TESTED>
Then I tried to re-sync the project from our local private GitHub. It looks like github.company.com is now resolved, but I believe this issue is due to port 443 being blocked, right?
"stderr": "fatal: unable to access 'https://github.company.com/My-User-Name-Here/test.git/': Failed to connect to github.company.com port 443: Connection timed out\n\n",
Yes, I think name resolution is working now, but you may still have a proxy configured, or the connection may be blocked somewhere on the route to your corporate GitHub.
You can run CentOS Stream 8 on K8s with the following command and get a root bash shell. It will help you troubleshoot.
kubectl run -it --rm --restart=Never debug --image=quay.io/centos/centos:stream8 -- bash
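Inside the debug container, you can then narrow down the failure step by step. A sketch (github.company.com stands in for your private host; substitute your own FQDN):

```shell
# 1. Is a proxy configured in this environment?
env | grep -i _proxy || echo "no proxy variables set"

# 2. Does the name resolve?
getent hosts github.company.com || echo "name resolution failed"

# 3. Can we reach port 443? (5-second connect timeout)
curl -skv --connect-timeout 5 https://github.company.com/ -o /dev/null \
  || echo "connection to port 443 failed"
```

If step 2 succeeds but step 3 times out, the problem is network reachability (firewall or routing) rather than DNS, which matches the "Connection timed out" error above.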