Hello all,
I am looking for a way to clone a single node AWX on k3s.
When I use awx-on-k3s/backup it only seems to back up the postgres data.
Any hints?
Regards Hans-Peter
Hi,
The backup directory contains the secrets and the spec required to restore your AWX instance, in addition to the PostgreSQL data.
So if you are not using an external database, I think you can safely clone your instance by restoring your backup data to a whole new K3s host, using the same version of AWX Operator.
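Just as a rough sketch, the backup is driven by an AWXBackup resource like the one below (the awx namespace and the awx-backup-claim PVC name are placeholders taken from the examples in the repository; adjust them to wherever your AWX is deployed):
kubectl apply -f - <<EOF
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-demo
  namespace: awx
spec:
  # name of your AWX custom resource
  deployment_name: awx
  # PVC that will receive the backup directory
  backup_pvc: awx-backup-claim
EOF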
Regards,
Hi,
Do you mean a filesystem backup?
Or the backup as described at https://github.com/kurokobo/awx-on-k3s/tree/main/backup?
Regards Hans-Peter
Hi,
It is not a file system backup, but a backup created by AWX Operator.
If you follow the guide in my repository (thanks for using :D), you should have a directory named “tower-openshift-backup-yyyy-MM-dd-HH:mm:ss” in /data/backup on your k3s host.
This directory contains everything required to restore your AWX instance, e.g. the database backup, a dump of your secrets, and the instance spec.
You can bring this entire directory to the new K3s host and use it for restoration after installing K3s and deploying the same version of AWX Operator that you used for the backup.
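Roughly, the restoration on the new host is then triggered by an AWXRestore resource pointing at the copied backup directory, after the backup PV/PVC have been recreated and the directory has been copied into them. A minimal sketch (the backup_pvc / backup_dir field names and the /backups mount path follow my understanding of the AWX Operator restore role, and the timestamp is a placeholder, so please verify the exact spec against the restore guide in the repository):
kubectl apply -f - <<EOF
apiVersion: awx.ansible.com/v1beta1
kind: AWXRestore
metadata:
  name: awxrestore-demo
  namespace: awx
spec:
  # name for the restored AWX deployment
  deployment_name: awx
  # PVC that holds the copied backup directory
  backup_pvc: awx-backup-claim
  # path of the backup directory inside the backup PVC (mounted at /backups)
  backup_dir: /backups/tower-openshift-backup-<timestamp>
EOF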
Regards,
Hi
Thanks for your help. Much appreciated.
The only thing I am concerned about is the projects volume awx/awx-projects-claim.
Will that be recreated?
It does contain data.
Regards Hans-Peter
Hi,
The only thing I am concerned about is the projects volume awx/awx-projects-claim.
Will that be recreated?
If you are using a “Manual” type project, you have to copy /data/projects to the new host manually.
If your AWX does not have any “Manual” type projects, you do not need to copy the project directory, since AWX will automatically fetch the required data again from SCM and other remote sources. Of course, you can still copy the whole /data/projects directory to the new host manually to eliminate the need for re-fetching.
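For example, copying the project data over could look like this (newhost is a placeholder, and /data/projects assumes the default layout of the repository):
# -a keeps permissions and ownership where possible, -z compresses during transfer
rsync -avz /data/projects/ newhost:/data/projects/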
Regards,
Super thanks
Hi,
Do you have an older version of awx-on-k3s in which the projects and postgres claims/volumes are “inside” k3s?
I need to replicate an environment which does not have /data/projects and /data/postgres
/var/lib/rancher/k3s/storage# kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-ae6615ac-a473-4f56-899e-47cca2d89c38 8Gi RWO Delete Bound default/postgres-awx-postgres-0 local-path 296d
pvc-06337c8a-bf93-400c-ae7c-5e0967dfa969 2Gi RWO Delete Bound default/static-data-pvc local-path 296d
pvc-f56399e7-fa07-42af-8251-6c2d20c19b2a 8Gi RWO Delete Bound default/awx-projects-claim local-path 290d
awx-backup-volume 4Gi RWO Retain Bound default/awx-backup-claim awx-backup-volume 2d3h
awx-snapshot-volume 4Gi RWO Retain Bound default/awx-snapshot-claim awx-snapshot-volume 28h
/var/lib/rancher/k3s/storage# kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default postgres-awx-postgres-0 Bound pvc-ae6615ac-a473-4f56-899e-47cca2d89c38 8Gi RWO local-path 296d
default static-data-pvc Bound pvc-06337c8a-bf93-400c-ae7c-5e0967dfa969 2Gi RWO local-path 296d
default awx-projects-claim Terminating pvc-f56399e7-fa07-42af-8251-6c2d20c19b2a 8Gi RWO local-path 290d
default awx-backup-claim Bound awx-backup-volume 4Gi RWO awx-backup-volume 2d3h
default awx-snapshot-claim Bound awx-snapshot-volume 4Gi RWO awx-snapshot-volume 28h
I am struggling to replicate the environment by adapting the base/*.yaml files.
Regards Hans-Peter
Hi,
The actual data in your “local-path” based PV is stored under “/var/lib/rancher/k3s/storage/” in a directory whose name ends with “_<claim_name>”, so you can copy any data in your PV from that path.
You can get the actual path for your PV with the following command:
kubectl get pv pvc-15b0947a-eb81-47d6-944a-3c58fb1ee7a6 -o jsonpath='{.spec.hostPath.path}'
I think you can restore it without any modification anyway, and then place the contents of the PV for your projects directly into /var on the new host after restoring.
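For example, copying the project data off the old host could look like this (newhost and the destination path are placeholders; the claim name and namespace are taken from your listing above):
# resolve the on-disk directory backing the awx-projects-claim PVC
PV_NAME=$(kubectl -n default get pvc awx-projects-claim -o jsonpath='{.spec.volumeName}')
PV_PATH=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.hostPath.path}')
# copy its contents to the new host
rsync -avz "$PV_PATH"/ newhost:/data/projects/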
The contents of the PV for PostgreSQL will be restored, so there is no need to copy them manually.
If it gets too hard for you, you can consider backing up and restoring the entire OS with some tool, or cloning the VM.
Regards,
Hi Kurokobo and all.
One minor issue: when using the Ansible method for backing up, I got the following error:
"msg": "AWXBackup awxbackup-2022-07-21-15-40-33: Failed to retrieve requested object: b'{"kind":"Status","apiVersion":"v1","metadata":{}
,"status":"Failure","message":"awxbackups.awx.ansible.com \\"awxbackup-2022-07-21-15-40-33\\" is forbidden:
User \\"system:k3s-controller\\" cannot get resource \\"awxbackups\\" in API group \\"awx.ansible.com\\" in the namespace \\"awx\\""
,"reason":"Forbidden","details":{"name":"awxbackup-2022-07-21-15-40-33","group":"awx.ansible.com","kind":"awxbackups"},"code":403}\n'",
"reason": "Forbidden",
"status": 403
I solved it by adding a clusterrolebinding:
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:k3s-controller
That resolved the issue, but it seems like a bit of overkill.
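If cluster-admin is too broad, a role limited to the AWX backup resources in the awx namespace might be enough instead; this is only a sketch I have not fully verified (the role and binding names are made up):
kubectl -n awx create role awxbackup-access \
  --verb=get,list,watch,create,patch \
  --resource=awxbackups.awx.ansible.com
kubectl -n awx create rolebinding awxbackup-access \
  --role=awxbackup-access --user=system:k3s-controller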
Did I do something wrong?
You are super helpful.
Regards Hans-Peter
Hi,
I’ve never faced that error, but the user “system:k3s-controller” should be bound to the “system:k3s-controller” cluster role by default, and that role doesn’t include any permissions for the APIs under awx.ansible.com.
So it’s expected that k3s-controller can’t get awxbackups.
I don’t know why access to awxbackups by k3s-controller was attempted during the AWX backup, but it could be a timing issue or something wrong with your K3s. You may want to restart the node, reinstall K3s, or upgrade if your K3s is old.
It’s also possible that you are on a very old version of awx-operator and your operator is in a different namespace than your awx CR. Restores are only expected to work for backups from the same operator version, though they may work across versions.
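To check which operator version is actually running on each host, something like this should work (the awx-operator-controller-manager deployment name and the awx namespace are the defaults for recent operator releases; older releases used an awx-operator deployment, often in the default namespace):
kubectl -n awx get deployment awx-operator-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[*].image}'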
Thanks,
AWX Team
Hi,
The version of my operator on the destination is 0.19. On the source system it is 0.14, which is indeed “very” old.
So that probably explains it.
Regards Hans