Is there a faster alternative for updating lines in a CSV file than lineinfile, which is too slow?
I have one large CSV file with MAC addresses. I have an Ansible job that runs across multiple inventories and switches. It needs to look up a MAC address in the CSV and then update some fields on that line. lineinfile can do that, but it is horribly slow.
The only alternative I see is reading the CSV into a variable, changing it in memory, and writing the CSV back out to file at the end; however, that consumes a lot of memory.
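For reference, the in-memory approach could look something like this minimal sketch (the file path, the `mac` key column, and the `update_row` helper name are all hypothetical, not from the original playbook):

```python
import csv

def update_row(path, mac, **fields):
    """Load the whole CSV, patch the row matching `mac`, write it all back."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))      # entire file held in memory
    for row in rows:
        if row["mac"] == mac:
            row.update(fields)              # change the fields in place
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```

This is fast per lookup, but as noted, the whole file has to fit in memory at once.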
Yes, a DB is a possibility.
However, I think I have solved it in another way. I copy the existing CSV to an “old” file and create a new CSV during the playbook run; the new CSV is created quickly using a Jinja template.
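A rough sketch of what that templating step does, rendered directly with Jinja2 (the `hosts` variable and its fields are placeholders for whatever facts the playbook gathers, not the actual template):

```python
from jinja2 import Template

# Render all rows of the new CSV in one pass instead of editing lines one by one.
tmpl = Template(
    "mac,switch,port\n"
    "{% for h in hosts %}{{ h.mac }},{{ h.switch }},{{ h.port }}\n{% endfor %}"
)
hosts = [
    {"mac": "aa:bb:cc:00:00:01", "switch": "sw1", "port": "Gi1/0/1"},
    {"mac": "aa:bb:cc:00:00:02", "switch": "sw1", "port": "Gi1/0/2"},
]
csv_text = tmpl.render(hosts=hosts)
```

In the actual playbook the equivalent would be an Ansible template task writing the rendered text to the new CSV file.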
I then “merge” both CSVs together using pandas in a Python script called from Ansible, where I give “priority” to the last-created CSV.
Combine them, giving priority to b (the second CSV): b’s rows overwrite a’s rows where keys are duplicated.
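One way to sketch that merge with pandas, assuming the MAC address is the key column (the column names and sample data here are made up; the real script would read the two files from disk):

```python
import io
import pandas as pd

# Stand-ins for the "old" and freshly templated CSV files.
old_csv = io.StringIO(
    "mac,switch,port\n"
    "aa:bb:cc:00:00:01,sw1,Gi1/0/1\n"
    "aa:bb:cc:00:00:02,sw1,Gi1/0/2\n"
)
new_csv = io.StringIO(
    "mac,switch,port\n"
    "aa:bb:cc:00:00:02,sw2,Gi1/0/10\n"
)

a = pd.read_csv(old_csv)   # previous run
b = pd.read_csv(new_csv)   # created during this playbook run

# Combine them, giving priority to b: where the key (mac) is duplicated,
# keep the last occurrence, i.e. b's row overwrites a's row.
merged = (
    pd.concat([a, b])
    .drop_duplicates(subset="mac", keep="last")
    .reset_index(drop=True)
)
```

Writing the result back with `merged.to_csv(path, index=False)` would then replace the per-line edits entirely.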