Ansible AWX (Minikube and Docker) has problems after IP change

Hello guys,

After we changed the IP address of the host (from 10.123.96.187 to 212.38.X.X), we started seeing the following errors in the syslog:

Jul 25 11:30:27 x24ansible1 systemd[1]: Stopping User Manager for UID 1001...
Jul 25 11:30:27 x24ansible1 systemd[518680]: Stopped target Main User Target.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Stopped target Basic System.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Stopped target Paths.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Stopped target Sockets.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Stopped target Timers.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed D-Bus User Message Bus Socket.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed GnuPG network certificate management daemon.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed GnuPG cryptographic agent and passphrase cache.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed debconf communication socket.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Closed REST API socket for snapd user session agent.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Removed slice User Application Slice.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Reached target Shutdown.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Finished Exit the Session.
Jul 25 11:30:27 x24ansible1 systemd[518680]: Reached target Exit the Session.
Jul 25 11:30:27 x24ansible1 systemd[1]: user@1001.service: Deactivated successfully.
Jul 25 11:30:27 x24ansible1 systemd[1]: Stopped User Manager for UID 1001.
Jul 25 11:30:27 x24ansible1 systemd[1]: Stopping User Runtime Directory /run/user/1001...
Jul 25 11:30:27 x24ansible1 systemd[1]: run-user-1001.mount: Deactivated successfully.
Jul 25 11:30:27 x24ansible1 systemd[1]: user-runtime-dir@1001.service: Deactivated successfully.
Jul 25 11:30:27 x24ansible1 systemd[1]: Stopped User Runtime Directory /run/user/1001.
Jul 25 11:30:27 x24ansible1 systemd[1]: Removed slice User Slice of UID 1001.
Jul 25 11:30:30 x24ansible1 kubelet[871]: E0725 11:30:30.801937     871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"x24ansible1\" not found"
Jul 25 11:30:34 x24ansible1 kubelet[871]: W0725 11:30:34.649360     871 reflector.go:539] k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://10.123.96.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.123.96.187:6443: i/o timeout
Jul 25 11:30:34 x24ansible1 kubelet[871]: I0725 11:30:34.649443     871 trace.go:236] Trace[908599880]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229 (25-Jul-2024 11:30:04.648) (total time: 30001ms):
Jul 25 11:30:34 x24ansible1 kubelet[871]: Trace[908599880]: ---"Objects listed" error:Get "https://10.123.96.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.123.96.187:6443: i/o timeout 30000ms (11:30:34.649)
Jul 25 11:30:34 x24ansible1 kubelet[871]: Trace[908599880]: [30.001001277s] [30.001001277s] END
Jul 25 11:30:34 x24ansible1 kubelet[871]: E0725 11:30:34.649466     871 reflector.go:147] k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.123.96.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.123.96.187:6443: i/o timeout
Jul 25 11:30:39 x24ansible1 kubelet[871]: E0725 11:30:39.162721     871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.123.96.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/x24ansible1?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Jul 25 11:30:40 x24ansible1 kubelet[871]: E0725 11:30:40.802239     871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"x24ansible1\" not found"
Jul 25 11:30:46 x24ansible1 kubelet[871]: I0725 11:30:46.510542     871 status_manager.go:853] "Failed to get status for pod" podUID="2869022999c4da8e01f39bc2c7e142c7" pod="kube-system/kube-apiserver-x24ansible1" err="Get \"https://10.123.96.187:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-x24ansible1\": dial tcp 10.123.96.187:6443: i/o timeout"
Jul 25 11:30:50 x24ansible1 kubelet[871]: E0725 11:30:50.803281     871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"x24ansible1\" not found"
Jul 25 11:30:52 x24ansible1 kubelet[871]: E0725 11:30:52.683121     871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.123.96.187:6443/api/v1/nodes\": dial tcp 10.123.96.187:6443: i/o timeout" node="x24ansible1"
Jul 25 11:30:54 x24ansible1 kubelet[871]: E0725 11:30:54.957534     871 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.123.96.187:6443/api/v1/namespaces/default/events/x24ansible1.17e52a9cfe8d48a6\": dial tcp 10.123.96.187:6443: i/o timeout" event="&Event{ObjectMeta:{x24ansible1.17e52a9cfe8d48a6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:x24ansible1,UID:x24ansible1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node x24ansible1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:x24ansible1,},FirstTimestamp:2024-07-24 15:54:46.313019558 +0200 CEST m=+0.657307887,LastTimestamp:2024-07-24 15:56:31.419008818 +0200 CEST m=+105.763297147,Count:38,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:x24ansible1,}"
Jul 25 11:30:56 x24ansible1 kubelet[871]: W0725 11:30:56.074245     871 reflector.go:539] k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://10.123.96.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.123.96.187:6443: i/o timeout
Jul 25 11:30:56 x24ansible1 kubelet[871]: I0725 11:30:56.074322     871 trace.go:236] Trace[52395520]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229 (25-Jul-2024 11:30:26.073) (total time: 30000ms):
Jul 25 11:30:56 x24ansible1 kubelet[871]: Trace[52395520]: ---"Objects listed" error:Get "https://10.123.96.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.123.96.187:6443: i/o timeout 30000ms (11:30:56.074)
Jul 25 11:30:56 x24ansible1 kubelet[871]: Trace[52395520]: [30.000766556s] [30.000766556s] END
Jul 25 11:30:56 x24ansible1 kubelet[871]: E0725 11:30:56.074337     871 reflector.go:147] k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.123.96.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.123.96.187:6443: i/o timeout
Jul 25 11:30:56 x24ansible1 kubelet[871]: E0725 11:30:56.164629     871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.123.96.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/x24ansible1?timeout=10s\": context deadline exceeded" interval="7s"

“x24ansible1” is the hostname. As you can see, the old address still shows up in the syslog.
Are there any config files we can adjust? We have already adjusted the following files (we ran a grep -R for the old IP under /etc/kubernetes):

/etc/kubernetes/manifests/kube-apiserver.yaml (several matches, including the kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint annotation)
/etc/kubernetes/manifests/etcd.yaml (several matches, including the kubeadm.kubernetes.io/etcd.advertise-client-urls annotation)
/etc/kubernetes/super-admin.conf
/etc/kubernetes/scheduler.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
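For reference, here is a sketch of the additional checks we are planning to run. The paths assume a standard kubeadm layout, and OLD_IP is just a shell variable holding the old address:

```shell
#!/bin/sh
OLD_IP="10.123.96.187"

# Look for the old address outside /etc/kubernetes as well,
# since the kubelet keeps its own config under /var/lib/kubelet
grep -rl "$OLD_IP" /etc/kubernetes /var/lib/kubelet 2>/dev/null || true

# The kubeadm-generated API server certificate embeds the advertise
# address in its Subject Alternative Names, so the old IP can survive
# in the certs even after all the config files have been edited
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text 2>/dev/null \
  | grep -A1 "Subject Alternative Name" || true
```

If the certificate still lists the old IP, we assume it has to be regenerated (e.g. with kubeadm init phase certs apiserver) rather than edited by hand.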

Are there any more files that we may have missed? Any help is appreciated. Thanks in advance!