##### SUMMARY
According to the [documentation](https://docs.ansible.com/ansible/devel/collections/community/proxmox/proxmox_module.html#parameter-force), replacing an existing container requires `state: present` and `force: true` on the `community.proxmox.proxmox` task. I set the task accordingly, but it fails with an error saying the container already exists. As a workaround I have to delete the container before creating it again.
```log
fatal: [mox-node01 -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            # ...
        }
    },
    "msg": "An error occurred: 500 Internal Server Error: CT <CTID> already exists on node '<node name>'"
}
```
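For reference, here is the workaround currently in place (a minimal sketch; it assumes the CT is already stopped and that API credentials come from the `PROXMOX_*` environment variables, as in the failing run below):
```yaml
# Workaround sketch: remove the CT explicitly, then create it fresh.
# Non-essential parameters omitted; "Create container" is the same task
# shown under STEPS TO REPRODUCE.
- name: Remove existing container (workaround)
  community.proxmox.proxmox:
    node: "{{ inventory_hostname }}"
    vmid: "{{ container.vmid }}"
    state: absent

- name: Create container
  community.proxmox.proxmox:
    node: "{{ inventory_hostname }}"
    vmid: "{{ container.vmid }}"
    hostname: "{{ container.hostname }}"
    ostemplate: "{{ template_storage }}:{{ template_type }}/{{ ostemplate }}"
    state: present
```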
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
community.proxmox.proxmox
##### ANSIBLE VERSION
```
ansible [core 2.18.8]
  config file = None
  configured module search path = ['/home/miguel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /tmp/ansible-venv/lib/python3.11/site-packages/ansible
  ansible collection location = /home/miguel/.ansible/collections:/usr/share/ansible/collections
  executable location = /tmp/ansible-venv/bin/ansible
  python version = 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] (/tmp/ansible-venv/bin/python)
  jinja version = 3.1.6
  libyaml = True
```
##### COLLECTION VERSION
```
ansible-galaxy collection list community.proxmox
# /home/miguel/.ansible/collections/ansible_collections
Collection Version
----------------- -------
community.proxmox 1.3.0
# /tmp/ansible-venv/lib/python3.11/site-packages/ansible_collections
Collection Version
----------------- -------
```
##### CONFIGURATION
```
CONFIG_FILE() = None
DEFAULT_TIMEOUT(env: ANSIBLE_TIMEOUT) = 600
GALAXY_SERVERS:
```
##### OS / ENVIRONMENT
Ansible controller (Windows 10, WSL2):
- uname: Linux 6.6.87.2-microsoft-standard-WSL2 #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025 x86_64 GNU/Linux

Proxmox nodes:
- uname: Linux 6.8.12-11-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-11 (2025-05-22T09:39Z) x86_64 GNU/Linux
- pveversion: pve-manager/8.4.1/2a5fa54a8503f96d (running kernel: 6.8.12-11-pve)
##### STEPS TO REPRODUCE
The failing task is shown below, after the inventory. The `container` fact is selected per YugabyteDB instance type (var `master` or `tserver`). There are three hosts; each host runs one master and one tserver, listed under `hosts[].containers`, and each item carries the settings specific to that container.
Inventory:
```yaml
all:
  vars:
    ansible_user: [redacted]
    api_user: [redacted]
    api_token_id: [redacted]
    api_token_secret: [redacted]
    pve_api_headers:
      Authorization: "PVEAPIToken={{ api_user }}!{{ api_token_id }}={{ api_token_secret }}"
    version: "2024.2.4.0"
    ssh:
      descriptors: ["ansible", "mgmt-client"]
      path: "{{ inventory_dir }}/certs"
      pubkeys: "{{ inventory_dir }}/certs/pubkeys.pub"
      type: ed25519
    service_name: Yugabyte
    ostemplate: pve-yugabyte_{{ version }}.tar.gz
    template_storage: "mox-storage"
    template_type: "vztmpl"
    base_tags: "{{ ['yugabyte', 'postgres', 'cassandra', 'intranet', 'nodenet', 'storage', 'db'] }}"
    backup_storage: mox-storage
    rcp_port: "7100"
    private_subnet: "10.3.1."
    public_subnet: "10.4.0."
    private_address: "{{ private_subnet }}{{ item.ipsuffix }}"
    public_address: "{{ public_subnet }}{{ item.ipsuffix }}"
    master_addresses: >-
      {{ hostvars
      | dict2items
      | map(attribute='value.containers')
      | select('defined')
      | flatten
      | selectattr('type','equalto','master')
      | map(attribute='ipsuffix')
      | map('regex_replace','^(.*)$', private_subnet ~ '\1:' ~ rcp_port)
      | join(',') }}
    public_addresses: >-
      {{ hostvars
      | dict2items
      | map(attribute='value.containers')
      | select('defined')
      | flatten
      | map(attribute='ipsuffix')
      | map('regex_replace','^(.*)$', public_subnet ~ '\1:' ~ rcp_port)
      | join(',') }}
    container_fields: &container_fields
      vmid: "{{ item.vmid }}"
      hostname: "{{ item.hostname }}"
      description: "{{ item.description }}"
      swap: 512
      cores: 4
      password: "[redacted]"
      rootfs: 'local-lvm:8'
      netif:
        net0: "name=eth0,bridge=storage,ip={{ private_address }}/24"
        net1: "name=eth1,bridge=intranet,ip={{ public_address }}/24"
        # net2: "name=eth2,bridge=nodenet,ip=dhcp"
    master:
      <<: *container_fields
      tags: "{{ base_tags + ['ydb-master'] }}"
      memory: 4000
      mounts:
        mp0: "local-lvm:datafs-{{ item.vmid }}-disk-0,size=32G,mp=/mnt/yugabyte"
      command: "yb-master"
    tserver:
      <<: *container_fields
      tags: "{{ (item.tags | default([])) | union(base_tags) | union(['ydb-tserver']) }}"
      memory: 1000
      mounts:
        mp0: "local-lvm:datafs-{{ item.vmid }}-disk-0,size=8G,mp=/mnt/yugabyte"
      command: "yb-tserver"
  hosts:
    mox-node01:
      ansible_host: 192.168.1.201
      containers:
        - hostname: "mox-db01"
          vmid: 109
          description: "YugabyteDB Master db01"
          ipsuffix: "10"
          type: master
          zone_id: db01
          tag: db01
        - hostname: "mox-db01-ts01"
          vmid: 204
          description: "YugabyteDB TServer db01-01"
          ipsuffix: "11"
          type: tserver
          zone_id: db01
          tag: db01
    mox-node02:
      ansible_host: 192.168.1.202
      containers:
        - hostname: "mox-db02"
          vmid: 202
          description: YugabyteDB Master db02
          ipsuffix: "20"
          type: master
          zone_id: db02
          tag: db02
        #...
```
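For context, with the hosts shown above (the remaining containers are elided), `master_addresses` gathers every master's private RPC endpoint and evaluates to:
```
10.3.1.10:7100,10.3.1.20:7100
```
plus the corresponding entry for the elided third host; `public_addresses` builds the same list over all containers, on the `10.4.0.` subnet.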
Tasks:
```yaml
- name: Set facts
  ansible.builtin.set_fact:
    container: "{{ master if item.type == 'master' else tserver }}"

- name: Create container
  community.proxmox.proxmox:
    node: "{{ inventory_hostname }}"
    vmid: "{{ container.vmid }}"
    description: "{{ container.description | default() }}"
    tags: "{{ container.tags | default([]) }}"
    ostemplate: "{{ template_storage }}:{{ template_type }}/{{ ostemplate }}"
    hostname: "{{ container.hostname }}"
    cores: "{{ container.cores }}"
    swap: "{{ container.swap }}"
    memory: "{{ container.memory }}"
    disk: "{{ container.rootfs }}"
netif: "{{ container_netif | default({}) }}"
pubkey: "{{ lookup('file', ssh.pubkeys) | default() }}"
password: "{{ container.password }}"
state: present
force: true
```
##### EXPECTED RESULTS
The existing container should be replaced with a newly created one, as described in the documentation for the `force` parameter.
##### ACTUAL RESULTS
```log
 _________________________
< TASK [Create container] >
 -------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
task path: /mnt/d/backup/Projects/MoxCluster/tools/src/deploy_ct/deploy.yml:13
File lookup using /mnt/d/backup/Projects/MoxCluster/components/lxc/yugabyte/certs/pubkeys.pub as file
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: miguel
<localhost> EXEC /bin/sh -c 'echo ~miguel && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/miguel/.ansible/tmp `"&& mkdir "` echo /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382 `" && echo ansible-tmp-1756819428.6523044-20919-212550523484382="` echo /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382 `" ) && sleep 0'
Using module file /home/miguel/.ansible/collections/ansible_collections/community/proxmox/plugins/modules/proxmox.py
<localhost> PUT /home/miguel/.ansible/tmp/ansible-local-204666u5es2uc/tmpc_rm7nd8 TO /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382/AnsiballZ_proxmox.py
<localhost> EXEC /bin/sh -c 'chmod u+rwx /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382/ /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382/AnsiballZ_proxmox.py && sleep 0'
<localhost> EXEC /bin/sh -c 'PROXMOX_HOST=192.168.1.201 PROXMOX_USER=[redacted] PROXMOX_TOKEN_ID=[redacted] PROXMOX_TOKEN_SECRET=[redacted] /tmp/ansible-venv/bin/python3 /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382/AnsiballZ_proxmox.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/miguel/.ansible/tmp/ansible-tmp-1756819428.6523044-20919-212550523484382/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_community.proxmox.proxmox_payload_k4kb49hc/ansible_community.proxmox.proxmox_payload.zip/ansible_collections/community/proxmox/plugins/modules/proxmox.py", line 1746, in main
  File "/tmp/ansible_community.proxmox.proxmox_payload_k4kb49hc/ansible_community.proxmox.proxmox_payload.zip/ansible_collections/community/proxmox/plugins/modules/proxmox.py", line 799, in run
  File "/tmp/ansible_community.proxmox.proxmox_payload_k4kb49hc/ansible_community.proxmox.proxmox_payload.zip/ansible_collections/community/proxmox/plugins/modules/proxmox.py", line 909, in lxc_present
  File "/tmp/ansible_community.proxmox.proxmox_payload_k4kb49hc/ansible_community.proxmox.proxmox_payload.zip/ansible_collections/community/proxmox/plugins/modules/proxmox.py", line 1143, in new_lxc_instance
  File "/tmp/ansible_community.proxmox.proxmox_payload_k4kb49hc/ansible_community.proxmox.proxmox_payload.zip/ansible_collections/community/proxmox/plugins/modules/proxmox.py", line 1248, in create_lxc_instance
  File "/tmp/ansible-venv/lib/python3.11/site-packages/proxmoxer/core.py", line 179, in create
    return self.post(*args, **data)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/ansible-venv/lib/python3.11/site-packages/proxmoxer/core.py", line 170, in post
    return self(args)._request("POST", data=data)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/ansible-venv/lib/python3.11/site-packages/proxmoxer/core.py", line 147, in _request
    raise ResourceException(
fatal: [mox-node01 -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_host": ".....",
            "api_password": null,
            "api_port": null,
            "api_token_id": "....",
            "api_token_secret": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "api_user": ".....",
            "clone": null,
            "clone_type": "opportunistic",
            "cores": 4,
            "cpus": null,
            "cpuunits": null,
            "description": "Test Container",
            "disk": "local-lvm:8",
            "disk_volume": null,
            "features": null,
            "force": true,
            "hookscript": null,
            "hostname": "test",
            "ip_address": null,
            "memory": 4000,
            "mount_volumes": null,
            "mounts": null,
            "nameserver": null,
            "netif": {},
            "node": "mox-node01",
            "onboot": null,
            "ostemplate": "mox-storage:vztmpl/pve-yugabyte_2024.2.4.0.tar.gz",
            "ostype": "auto",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "pool": null,
            "pubkey": "",
            "purge": false,
            "searchdomain": null,
            "startup": null,
            "state": "present",
            "storage": "local",
            "swap": 512,
            "tags": [
                "yugabyte",
                "postgres",
                "cassandra",
                "intranet",
                "nodenet",
                "storage",
                "db",
                "ydb-master"
            ],
            "timeout": 30,
            "timezone": null,
            "unprivileged": true,
            "update": true,
            "validate_certs": false,
            "vmid": 104
        }
    },
    "msg": "An error occurred: 500 Internal Server Error: CT 104 already exists on node 'mox-node01'"
}
```