Hi all,
I have an issue with the Ansible copy module.
I have a large binary (51 MB) that I need transferred from the Ansible control machine to the target nodes.
Unfortunately the copy occurs each time, even if the content is identical.
While browsing both the documentation and the user group, I found that Ansible is supposed to be smarter than that: it uses an MD5 checksum to decide whether it needs to copy the file or can skip it because the content is identical.
Can anybody confirm this behavior and the conditions under which it applies? That would help me figure out what I'm doing wrong.
Thank you for your assistance.
Louis
Yes, Ansible computes an md5/sha1 checksum of the files and compares them to figure
out whether it needs to copy the file or not.
To point out any flaws in your play, we would need to see your play.
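[Editor's note] As a quick way to observe that behavior, a minimal play like the sketch below (group name and paths are hypothetical, not taken from the thread) should report `changed` on the first run and `ok` on every subsequent run, as long as the source bytes do not change:

```yaml
# Hypothetical sketch: copy is idempotent because the module compares
# checksums before transferring anything.
- hosts: app_servers              # assumed inventory group
  tasks:
    - name: Deploy the application jar
      copy:
        src: files/app1.jar       # hypothetical local path
        dest: /opt/app1/app1.jar
```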
Hi Brian,
Thank you for the reply. The offending playbook is here: https://github.com/lgueye/digitalocean-cluster/blob/master/roles/app1/tasks/main.yml
Unfortunately it's a really large playbook and I don't know how to test it in isolation…
It requires quite a few steps before it can run :(:
- spin up the DigitalOcean instances
- manually configure the inventory
- run the configure-cluster playbook
Anyway, thank you for offering to help.
So I'm guessing the problem is the jar and not the init script you
copy. Are you sure you are not rebuilding the jar? Some ways of
building jars change the file, and therefore its md5/sha1, even when
there is no change in the code.
Well, actually I am rebuilding it each time, but I did not expect its sha1/md5 to change on every build.
It makes sense after some thought, though: some config properties based on the build timestamp are included in the jar, which generates a new checksum every time.
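[Editor's note] A small demonstration of that effect: a jar is just a zip archive, and the entry timestamps are part of the archive bytes, so two builds of identical content with different timestamps hash differently. The file names and content below are made up for illustration.

```python
import hashlib
import io
import zipfile

def build_jar(build_time):
    """Build an in-memory zip (a jar is just a zip) whose single entry
    carries the given embedded timestamp, as many build tools do."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        entry = zipfile.ZipInfo("config.properties", date_time=build_time)
        zf.writestr(entry, "app.name=app1")  # identical content both times
    return buf.getvalue()

# Same content, two different "build" timestamps (zip stores mtimes
# with 2-second granularity, hence the 2-second step):
first = build_jar((2024, 1, 1, 12, 0, 0))
second = build_jar((2024, 1, 1, 12, 0, 2))

print(hashlib.sha1(first).hexdigest() == hashlib.sha1(second).hexdigest())
# → False: the bytes differ, so Ansible sees a changed file
```

This is why a rebuilt-but-unchanged jar is still re-copied: Ansible compares bytes, not source code.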
Now I will base my verification on the git sha1 instead.
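[Editor's note] One possible shape for that git-sha check, sketched with hypothetical paths and an assumed local git checkout (the marker-file name and task names are invented, and the `is failed` test assumes a reasonably recent Ansible):

```yaml
- name: Get the commit the jar was built from
  local_action: command git rev-parse HEAD
  register: build_sha
  changed_when: false

- name: Read the commit recorded on the target, if any
  slurp:
    src: /opt/app1/app1.jar.sha     # hypothetical marker file
  register: deployed_sha
  ignore_errors: true

- name: Copy the jar only when the commit changed
  copy:
    src: files/app1.jar             # hypothetical local path
    dest: /opt/app1/app1.jar
  when: deployed_sha is failed or
        (deployed_sha.content | b64decode | trim) != build_sha.stdout

- name: Record the deployed commit
  copy:
    content: "{{ build_sha.stdout }}"
    dest: /opt/app1/app1.jar.sha
```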
Thank you very much, really.