AWX Migration from version 17 to version 23.0

Hi all, I created a new AWX server using:

kind: AWX
name: ansible-awx
namespace: awx
service_type: nodeport
postgres_storage_class: gp2
postgres_configuration_secret: ansible-awx-postgres-configurations
secret_key_secret: awx-secret-key # old server secret key
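
Spelled out as a complete AWX custom resource, that spec would look roughly like this (the `apiVersion` and the `metadata`/`spec` nesting are reconstructed assumptions; check them against your awx-operator version):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: ansible-awx
  namespace: awx
spec:
  service_type: nodeport
  postgres_storage_class: gp2
  postgres_configuration_secret: ansible-awx-postgres-configurations
  secret_key_secret: awx-secret-key   # old server's secret key
```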

After creating the new server, I migrated the old database to the new one using the following commands:

  1. kubectl exec -it ansible-awx-postgres-13-0 -- psql -U awx
  2. \c postgres
  5. exit
  6. kubectl exec -it ansible-awx-postgres-13-0 -- psql -U awx < awx.sql

After the migration, I am able to access all the data from the old server on the new server. However, when I launch a template on the new server, I get the error below. I have tried various approaches to resolve it, but no luck. I would appreciate it if anyone could help.
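
For anyone following along, the dump side of this migration (not shown in the steps above) can be sketched like this; the old pod name is a placeholder, and note that piping through `kubectl exec` wants `-i` rather than `-it`, since there is no TTY:

```shell
# Old cluster: dump the awx database to a file (pod and user names are assumptions)
kubectl exec -i <old-postgres-pod> -- pg_dump -U awx -d awx --clean > awx.sql

# New cluster: load the dump into the new PostgreSQL 13 pod
kubectl exec -i ansible-awx-postgres-13-0 -- psql -U awx -d awx < awx.sql
```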

Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/cryptography/", line 134, in _verify_signature
cryptography.exceptions.InvalidSignature: Signature did not match digest.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/", line 512, in run
env = self.build_env(self.instance, private_data_dir, private_data_files=private_data_files)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/", line 1486, in build_env
env = injector.build_env(inventory_update, env, private_data_dir, private_data_files)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/", line 1462, in build_env
injector_env = self.get_plugin_env(inventory_update, private_data_dir, private_data_files)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/", line 1523, in get_plugin_env
ret = super(ec2, self).get_plugin_env(*args, **kwargs)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/", line 1493, in get_plugin_env
env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/", line 1482, in _get_shared_env
getattr(builtin_injectors, cred_kind)(credential, injected_env, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/credential/", line 14, in aws
env['AWS_SECRET_ACCESS_KEY'] = cred.get_input('password', default='')
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/credential/", line 282, in get_input
return decrypt_field(self, field_name)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/utils/", line 159, in decrypt_field
return smart_str(decrypt_value(key, value))
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/utils/", line 136, in decrypt_value
value = f.decrypt(encrypted)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/cryptography/", line 91, in decrypt
return self._decrypt_data(data, timestamp, time_info)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/cryptography/", line 152, in _decrypt_data
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/cryptography/", line 136, in _verify_signature
raise InvalidToken

Signature did not match digest.

This is usually caused by an incorrect SECRET_KEY.

It seems you've specified secret_key_secret, so ensure that /etc/tower/SECRET_KEY in the awx-task container of the awx-task pod exactly matches the SECRET_KEY from your old AWX:

$ kubectl -n awx exec -it deployment/ansible-awx-task -c ansible-awx-task -- cat /etc/tower/SECRET_KEY

If this does not show the correct SECRET_KEY, then either your awx-secret-key secret was not created correctly, or the awx-task pod was not restarted after updating the secret.
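
To make the failure mode concrete: AWX stores credential fields encrypted and authenticated with an HMAC derived from SECRET_KEY, so a mismatched key fails the signature check before anything is decrypted, which is exactly the "Signature did not match digest" in the traceback. A stdlib-only sketch of that check (the keys and data here are made up, not AWX's actual key derivation):

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    """HMAC-SHA256 digest, standing in for the signature on an encrypted field."""
    return hmac.new(key, data, hashlib.sha256).digest()

old_key = b"old-awx-secret-key"
# The same key pasted in base64-encoded form, as happened in this thread:
new_key = b"b2xkLWF3eC1zZWNyZXQta2V5"

ciphertext = b"...encrypted credential bytes..."
digest = sign(old_key, ciphertext)  # written alongside the data at encryption time

# Verification with the matching key succeeds:
assert hmac.compare_digest(sign(old_key, ciphertext), digest)

# Verification with any other key fails -> "Signature did not match digest."
assert not hmac.compare_digest(sign(new_key, ciphertext), digest)
print("wrong key -> signature mismatch")
```

The point of verifying the signature first is that decryption never even runs against data encrypted under a different key, which is why the fix is to restore the old key, not to re-encrypt.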


Thank you so much!! I was actually using the old secret, but in base64-encoded form. I had to decode it before the inventory sync would work.
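
For anyone hitting the same double-encoding trap: `kubectl create secret --from-literal` base64-encodes the value itself, so you must pass the raw (decoded) key. A sketch, assuming the operator's expected data key name is `secret_key` and using a placeholder variable for the raw value:

```shell
# Read the raw key back out of an existing secret (kubectl stores data base64-encoded)
kubectl -n awx get secret awx-secret-key -o jsonpath='{.data.secret_key}' | base64 -d

# Recreate the secret from the raw key; --from-literal does the base64 encoding for you
kubectl -n awx create secret generic awx-secret-key \
  --from-literal=secret_key="$OLD_RAW_SECRET_KEY"
```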
I am now seeing a different issue whilst running the job template. It remains in a pending state and shows "This job is not ready to start because there is not enough available capacity."

I was able to resolve the "This job is not ready to start because there is not enough available capacity." issue by setting the instance group to "Default" in the job template.

Hi @vajisola,
We are so happy to see that our community was able to help you! Would you mind selecting one of the responses and marking it as solved? That way this issue will show as resolved and others who may have similar questions can look here for answers.

Thank you so much for being an active member of our community!

