So I’ve done a lot of yum updates with Ansible, and several times I’ve hit a problem where a partial update runs and Ansible reports it completed, but when I reboot I land in a kernel panic because the new kernel didn’t install properly. I then have to boot back into the previous kernel, remove the newest one, and reinstall it. I also often wind up with yum transactions that haven’t been completed. Has anyone experienced this before?
I've seen this also. When the new kernel installs, there is a post-transaction scriptlet that builds the initrd. In some cases it doesn't build correctly (I see it fail maybe 1-5% of the time), and you'll get a panic when the system reboots. I don't know that I've seen enough to blame it on Ansible, though.
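One way to guard against that failure mode is to verify after the update that an initramfs actually exists for the newest installed kernel, and rebuild it with dracut if not. The tasks below are only a sketch of that idea (the rpm query format and /boot paths assume a stock RHEL/CentOS layout), not a tested playbook:

```yaml
# Hedged sketch: after a kernel update, confirm the matching initramfs
# was generated by the post scriptlet, and rebuild it if it's missing.
- name: Find the newest installed kernel version
  shell: rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -V | tail -1
  register: newest_kernel
  changed_when: false

- name: Check whether its initramfs exists
  stat:
    path: "/boot/initramfs-{{ newest_kernel.stdout }}.img"
  register: initramfs

- name: Rebuild the initramfs if the scriptlet failed
  command: "dracut -f /boot/initramfs-{{ newest_kernel.stdout }}.img {{ newest_kernel.stdout }}"
  when: not initramfs.stat.exists
```

Running this right after the yum task, before any reboot, at least turns a silent scriptlet failure into something the play can catch and fix.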
I had a similar issue before with CentOS, but it was more about the limit (for example, 4) on how many kernels can be installed at one time: the currently running kernel got uninstalled and the new one didn't install correctly, so the system hung during boot.
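That kernel-count behavior is controlled by `installonly_limit` in /etc/yum.conf; keeping it at 2 or more means a bad new kernel still leaves a known-good one to fall back to. As a sketch, you could pin it from Ansible with the `ini_file` module (the value 3 here is just the usual default, not a recommendation from this thread):

```yaml
# Hedged sketch: make sure yum keeps several kernels installed
# so there is always an older one left to boot if the newest fails.
- name: Keep multiple kernels as a boot fallback
  ini_file:
    path: /etc/yum.conf
    section: main
    option: installonly_limit
    value: "3"
```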
I know this is an Ansible group, but you could attach the dmesg output if you get one after the kernel fails to boot. Check /var/log/boot-(date), /var/log/dmesg, and /var/log/yum.log; you can start from there.
Also, I don’t know if you’re logging Ansible actions every time, but you could find answers there.
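If you aren’t logging Ansible runs yet, the simplest switch is `log_path` in ansible.cfg; the path below is just an example location:

```ini
# ansible.cfg -- record every playbook run, including the yum/kernel
# tasks, so failed transactions can be traced after the fact.
[defaults]
log_path = /var/log/ansible.log
```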