Unable to resolve local GitHub Enterprise

We host our own GitHub Enterprise on premises. We can resolve public GitHub and sync projects from it, but we are unable to resolve our own instance. We have to use our own GitHub due to policy.

"stderr": "fatal: unable to access 'https://github.company.com/My-User-Name-Here/test.git/': Could not resolve host: github.company.com\n",

What I have done:

  1. Hardcoded github.company.com in the hosts file of the AWX server
  2. Set the OS proxy on the AWX server
  3. Set the git proxy in ~/.gitconfig on the AWX server

Even unsetting the proxy does not solve the issue. Pinging github.company.com from the terminal resolves fine. Rough examples of what I set are shown below.
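For context, the host-level settings I tried look roughly like this (the IP and proxy address are placeholders; these live on the AWX/K3s host itself, not inside the containers):

# /etc/hosts entry on the AWX (K3s) host -- placeholder IP
echo "10.0.0.50 github.company.com" | sudo tee -a /etc/hosts

# OS-level proxy for the shell -- placeholder proxy address
export HTTPS_PROXY=http://proxy.company.com:8080
export HTTP_PROXY=http://proxy.company.com:8080

# git proxy in ~/.gitconfig on the AWX server
git config --global http.proxy http://proxy.company.com:8080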

I found that there are GitHub settings under Settings > Authentication > GitHub settings. Do I need to do anything here?

I am using AWX 21.9.0. It worked with locally stored playbooks; now I need to move to our own GitHub Enterprise.

@kurokobo

@i2linuxz
If it’s okay for you to access your private GitHub host through the proxy (which means offloading name resolution to the proxy server), you can add proxy settings to AWX by opening the Settings > Jobs settings page in the AWX UI and modifying the Extra Environment Variables block in JSON format (see my guide for details):

{
  "HTTPS_PROXY": "http://proxy.example.com:3128",
  "HTTP_PROXY": "http://proxy.example.com:3128",
  "NO_PROXY": "127.0.0.1,localhost,.example.com"
}

If you want to use a hosts file instead of offloading name resolution to the proxy server, one handy way is running dnsmasq on the K3s host, which allows the contents of the hosts file to be served as DNS records (see my guide for details). A rough sketch of the idea follows.
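This is only a minimal sketch of that approach (the package manager, the placeholder IP, and the resolv.conf path are assumptions; follow the guide for the exact steps):

# Install dnsmasq on the K3s host; by default it serves the host's /etc/hosts entries over DNS
sudo dnf install -y dnsmasq
echo "10.0.0.50 github.company.com" | sudo tee -a /etc/hosts   # placeholder IP for GitHub Enterprise
sudo systemctl enable --now dnsmasq

# Hand pods a resolv.conf that points at the host running dnsmasq
# (192.168.0.219 is the example K3s host IP used below)
echo "nameserver 192.168.0.219" | sudo tee /etc/rancher/k3s/resolv.conf
# then add "--resolv-conf /etc/rancher/k3s/resolv.conf" to the k3s server arguments and restart k3s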

If you do not want to use dnsmasq, this is a bit of a tricky way, but you can add new entries to the ConfigMap that CoreDNS (K3s’ DNS service) uses as an extra hosts file:

$ kubectl -n kube-system edit configmap coredns
...
data:
  Corefile: |
    ...
  NodeHosts: |
    192.168.0.219 kuro-k3s01.kurokobo.internal
    192.168.0.219 git.example.com     👈👈👈
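After saving the edit, you can restart CoreDNS so the new entry is picked up quickly, and verify the lookup from a throwaway pod (the pod name dnstest is arbitrary; these mirror the commands used later in this thread):

kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl run -it --rm --restart=Never dnstest --image=busybox:1.28 -- nslookup git.example.com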

The Job settings were already configured during the AWX deployment quite a long time ago:

{
  "HTTPS_PROXY": "http://user:passwd@x.x.x.x:8080",
  "HTTP_PROXY": "http://user:passwd@x.x.x.x:8080",
  "NO_PROXY": "127.0.0.1,localhost,x.x.x.x,.company.com"
}

I can see it in the GUI, but when I try to edit it, it is gone. Maybe I set it manually in a settings file.

Is this setting enough?

Anyway, I gave dnsmasq a try, but the step got stuck at deleting the pods:

[root@hostname~]# kubectl -n kube-system delete pod -l k8s-app=kube-dns
pod "coredns-597584b69b-dhggq" deleted
pod "coredns-597584b69b-64rgt" deleted
<cursor hanging here>

Hmm what exactly does “it is gone” mean? Which setting file did you change and how?

As the help text for this option describes, these are additional environment variables for project updates, so you can use a proxy by configuring these variables.

Additional environment variables set for playbook runs, inventory updates, project updates, and notification sending.

[root@hostname~]# kubectl -n kube-system delete pod -l k8s-app=kube-dns
pod "coredns-597584b69b-dhggq" deleted
pod "coredns-597584b69b-64rgt" deleted
<cursor hanging here>

I don’t know why there are two pods for CoreDNS. If you are on a single-node K3s, there should be only one. Is there a problem with your cluster? What does kubectl -n kube-system get pod,deployment,service output?

Hmm what exactly does “it is gone” mean? Which setting file did you change and how?

I am not able to edit the proxy settings in edit mode, but I can see them in view mode. I edited them manually in:

/etc/systemd/system/k3s.service.env

and extra_settings: in base/awx.yaml

I just followed your guide at https://github.com/kurokobo/awx-on-k3s/blob/main/tips/use-http-proxy.md
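For reference, the proxy entries in /etc/systemd/system/k3s.service.env look roughly like this (credentials and addresses are masked placeholders, same values as the Job settings above); restarting K3s (systemctl restart k3s) is needed afterwards for the env file to take effect:

HTTP_PROXY=http://user:passwd@x.x.x.x:8080
HTTPS_PROXY=http://user:passwd@x.x.x.x:8080
NO_PROXY=127.0.0.1,localhost,x.x.x.x,.company.com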

Here is the output:

# kubectl -n kube-system get pod,deployment,service
NAME                                          READY   STATUS        RESTARTS        AGE
pod/svclb-traefik-1d3b820d-ctl7h              2/2     Running       10 (364d ago)   370d
pod/metrics-server-5c8978b444-9v42h           1/1     Terminating   9 (364d ago)    370d
pod/local-path-provisioner-79f67d76f8-tv6wf   1/1     Terminating   9               370d
pod/coredns-597584b69b-dhggq                  1/1     Terminating   5 (364d ago)    370d
pod/traefik-bb69b68cd-w8ktt                   1/1     Terminating   5 (364d ago)    370d
pod/helm-install-traefik-crd-kbfrq            0/1     Terminating   0               357d
pod/helm-install-traefik-fn294                0/1     Terminating   0               357d
pod/helm-install-traefik-zrxh2                0/1     Completed     0               357d
pod/helm-install-traefik-crd-kh8mk            0/1     Completed     0               357d
pod/svclb-traefik-1d3b820d-sds48              2/2     Running       10 (34d ago)    357d
pod/local-path-provisioner-79f67d76f8-mtzbb   1/1     Running       8 (34d ago)     357d
pod/metrics-server-5c8978b444-tb6fc           1/1     Running       8 (34d ago)     357d
pod/traefik-bb69b68cd-f2xwb                   1/1     Running       5 (34d ago)     357d
pod/coredns-597584b69b-zf95l                  1/1     Running       0               30h

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik                  1/1     1            1           370d
deployment.apps/local-path-provisioner   1/1     1            1           420d
deployment.apps/metrics-server           1/1     1            1           420d
deployment.apps/coredns                  1/1     1            1           420d

NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
service/kube-dns         ClusterIP      10.x.x.x     <none>        53/UDP,53/TCP,9153/TCP       420d
service/metrics-server   ClusterIP      10.x.x.x     <none>        443/TCP                      420d
service/traefik          LoadBalancer   10.x.x.x     172.x.x.x     80:30409/TCP,443:30884/TCP   370d

I see. If you used extra_settings, then it is expected that you cannot edit those env vars in the GUI. So if your settings are correct, your project updates should already be attempted via the proxy.

You can also verify that name resolution using dnsmasq is successful with the following command: kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup demo.example.com

I guess access to our internal GitHub Enterprise should not go through the proxy:

[root@hostname~]# curl https://github.company.com
curl: (7) Failed to connect to github.company.com port 443: Connection timed out

To me, it looks like the name is resolvable; it is just that the firewall is blocking it?

If so, how do I remove the proxy? If possible, I want it to be editable via the GUI.

I got the message below when running the command:

[root@hostname~]# kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup demo.example.com
Error from server (AlreadyExists): pods "busybox" already exists

Remove extra_settings from your base/awx.yaml and apply it again with kubectl apply -k base. Or you can edit the AWX CR directly with kubectl -n awx edit awx awx.

After applying (or editing), wait until the Operator starts redeploying and completes: kubectl -n awx logs -f deployments/awx-operator-controller-manager
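To make the flow clearer, a minimal sketch (the exact shape of the extra_settings block is an assumption based on the use-http-proxy guide; adapt it to your actual file):

# In base/awx.yaml, delete (or comment out) the block that looks roughly like:
#   spec:
#     extra_settings:
#       - setting: AWX_TASK_ENV
#         value: '{"HTTPS_PROXY": "...", "HTTP_PROXY": "...", "NO_PROXY": "..."}'
# then re-apply the kustomization and watch the Operator redeploy AWX:
kubectl apply -k base
kubectl -n awx logs -f deployments/awx-operator-controller-manager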

Try changing the name of the pod: kubectl run -it --rm --restart=Never <POD_NAME> --image=busybox:1.28 -- nslookup <FQDN_TO_BE_TESTED>

Hello,

Sorry, I do not know Kubernetes well. After reading the whole thread, I now understand how to use dnsmasq and how to query it:

[root@hostname~]# kubectl run -it --rm --restart=Never coredns --image=busybox:1.28 -- nslookup github.company.com
Server:    10.43.x.x
Address 1: 10.43.x.x kube-dns.kube-system.svc.cluster.local

Name:      github.company.com
Address 1: 10.x.x.x github.company.com
pod "coredns" deleted

Then I tried to re-sync the project from our local private GitHub. It looks like github.company.com is resolved, but I believe this issue is due to port 443 being blocked, right?

"stderr": "fatal: unable to access 'https://github.company.com/My-User-Name-Here/test.git/': Failed to connect to github.company.com port 443: Connection timed out\n\n",

Yes, I think name resolution is working now, but you may still have a proxy configured, or the connection may be blocked somewhere on the route to your corporate GitHub.

You can run CentOS Stream 8 on K8s with the following command and get a root bash shell. It will help you troubleshoot.

kubectl run -it --rm --restart=Never debug --image=quay.io/centos/centos:stream8 -- bash
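A few checks worth running inside that debug pod (the hostname is your GitHub Enterprise; these commands are suggestions, not from the guide):

env | grep -i proxy                                  # see whether proxy variables are injected into the pod
curl -v https://github.company.com                   # connect using whatever proxy settings apply
curl -v --noproxy '*' https://github.company.com     # bypass any proxy to test the direct route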