That is not true. It has nothing to do with the Python version, and everything to do
with a little luck and a fast computer.
The test case is flawed for your Ansible controller; if it had more load and/or
more hosts you would get a completely different result.
The reason why this is not reliable is that Ansible runs with a default of 5 forks,
which means that if you have 5 or more hosts it will fork out 5 commands at once.
These 5 commands will try to edit the same file at the same time.
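For reference, this default comes from the forks setting in Ansible's configuration
(it can also be changed per run with the -f/--forks flag); a minimal ansible.cfg sketch:

# ansible.cfg
[defaults]
# 5 is the default; this is how many hosts Ansible acts on in parallel
forks = 5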
The reason it works for you is that the first fork finishes before the second
has time to start.
This is how Ansible works, and no Python version is going to alter that.
And I'm going to prove it.
Since my machine doesn't have Python 3.8, I created a VM with 2 cores:
$ ansible-playbook --version | awk 'NR==1; END{print}'
ansible-playbook 2.9.7
python version = 3.8.2 (default, Mar 24 2020, 03:08:36) [GCC 9.3.0]
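The playbook itself isn't shown here, so as a sketch, a minimal test.yml matching
this setup could look like the following (the lineinfile task and the a1..a6
inventory aliases are assumptions based on the sorted output further down; any task
where every host appends its own line to the same local file will show the same race):

# test.yml (sketch, assumed)
- hosts: all
  gather_facts: false
  connection: local
  tasks:
    - name: every fork appends its own line to the same local file
      lineinfile:
        path: /tmp/txt
        line: "{{ inventory_hostname }}"
        create: true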
So when I run my playbook with two hosts, I get this:
$ for i in {1..100}; do >/tmp/txt; ansible-playbook test.yml &>/dev/null; md5sum /tmp/txt; done | sort | uniq -c | sort -n
1 1597a5a9948014489de663c8fb4438db /tmp/txt
39 317f53fd9236220d0ab65e4aac4b3c5a /tmp/txt
60 ac34af7b876f793d09a2225e23f43088 /tmp/txt
And the next time:
$ for i in {1..100}; do >/tmp/txt; ansible-playbook test.yml &>/dev/null; md5sum /tmp/txt; done | sort | uniq -c | sort -n
1 1597a5a9948014489de663c8fb4438db /tmp/txt
43 317f53fd9236220d0ab65e4aac4b3c5a /tmp/txt
56 ac34af7b876f793d09a2225e23f43088 /tmp/txt
As you can see, it fails in 1% of the cases.
Let's raise the stakes and go for 6 hosts. The playbook is the same, but since 6 hosts
can produce a lot of potential line orderings, I sort the file before I run md5sum.
So if everything is working OK I should only get one line with one hash, like this:
100 aae4eb078da77ff549495a60be1a52c2
$ for i in {1..100}; do >/tmp/txt; ansible-playbook test.yml &>/dev/null; sort /tmp/txt | md5sum; done | sort | uniq -c | sort -n
1 2465fc0f03254f2bf915a04c762c9d49 -
2 254316531e34aa3ce563c22cf636ea01 -
2 31f07322fb0283ee5475e5acc74de059 -
2 ab8ecb5209d15e9852be208b79f309eb -
3 7abbd5c371c5e4ccb8bc965105148605 -
90 aae4eb078da77ff549495a60be1a52c2 -
Not even close; it fails in 10% of the runs.
So let's make the machine even busier.
To do that I shut down the VM and reduced the core count from 2 to 1.
Now we have 5 forks "fighting" for this 1 core, all trying to write to the same file
at the same time.
This is the result:
1 254316531e34aa3ce563c22cf636ea01 -
1 6c5ffba74557139441576f9f6536c536 -
1 8da4fee8643e840ae60169eac2d60afd -
1 9a7861beaa38064dd7179cf66b90173f -
1 aae4eb078da77ff549495a60be1a52c2 -
1 d5fad8b2b8f31c4d201309e2261ee362 -
2 2465fc0f03254f2bf915a04c762c9d49 -
2 2ea77912754b2a786046ece007bfd0a7 -
2 6503321b7879267a6a5eb06411e09b5e -
2 9ce72c8f1a3f059db908868df22c2618 -
2 ab8ecb5209d15e9852be208b79f309eb -
2 adc72709b7b7cdc8aaad85c01b66e38d -
2 c4b05365f19a397fc9b2dda817219d49 -
2 cf2b918385c6d1f75d7a0630d8881422 -
2 d678deb816d9433ff1c61d60ec1073f1 -
3 83758fc71d9927b3141c325692534add -
3 989fe3db59b18149e58657daab4721dc -
3 a421781f0d1512c9c66e9f7714bed570 -
4 7bc2be35864ae017608466e62dd680aa -
5 83ba567764a618ec0a0b405d49de8fa9 -
5 c98ae97d4888d870d9a05fe7f9b339a3 -
5 d336136909e4067c35858990dd4f9211 -
6 31f07322fb0283ee5475e5acc74de059 -
6 76aabdf63a1f041f6572fb51fb496283 -
6 cce45b22f8bc7806e3dc1128980f33f6 -
6 cce83e09262e3efced931d81c85ac809 -
6 decd8e81f11339ba387cb7cf61a28f34 -
7 aed3d7850c9c5478d84bf78fdc922328 -
11 d8ee7367ba9763647a58ab23a6f01c5f -
Yeah, it fails miserably, in 99% of the cases.
The correct md5sum is this one:
$ cat sorted.txt
a1
a2
a3
a4
a5
a6
$ md5sum sorted.txt
aae4eb078da77ff549495a60be1a52c2 sorted.txt
So no combination of Python and Ansible releases will alter this;
it's just how it works.
That is why we have serial, forks and throttle to tune the behavior and avoid race
conditions.
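For example, throttle: 1 on the task (or serial: 1 on the play) forces the hosts
through the task one at a time; a minimal sketch against the assumed playbook above:

# test.yml with the race removed (sketch)
- hosts: all
  gather_facts: false
  connection: local
  tasks:
    - name: only one fork runs this task at any moment
      throttle: 1
      lineinfile:
        path: /tmp/txt
        line: "{{ inventory_hostname }}"
        create: true

The forks still start in parallel, but with throttle: 1 only one of them executes
the task at a time, so the writes can no longer interleave.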