We have no idea. FNR resets to 1 at the start of each input file, so you usually use the condition FNR == 1 to cause the associated action to be executed on the first line read from each input file. Is there any reason why you need to care about which file contained an input record?
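To illustrate, here is a minimal sketch (the file names and contents are made up): FNR restarts at 1 for every file, while NR keeps counting across files, so FNR == 1 fires once per input file.

```shell
# Create two throwaway sample files.
printf 'x\ny\n' > a.txt
printf 'z\n' > b.txt

# FNR == 1 matches the first record of EACH file; FILENAME names the current file.
awk 'FNR == 1 { print "first line of " FILENAME ": " $0 }' a.txt b.txt
# prints:
# first line of a.txt: x
# first line of b.txt: z
```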
Start by telling us what you are trying to accomplish. Then tell us what is wrong with the output being produced by the code you've shown us in post #1. Then, maybe, we can suggest ways to fix your code to get what you want.
At first glance, the code you have shown us appears to be a slightly convoluted way of removing, from the set of input files given to your awk script, every line whose field #1 value is duplicated anywhere in that set, while preserving the order in which the non-duplicated values were first seen. It also uses more memory than is needed to get that job done.
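For comparison, here is one memory-light way to get that effect (the file name and data are hypothetical, and this is a sketch of the general technique, not your script): read the input twice, counting field #1 values on the first pass and printing only the lines whose field #1 appeared exactly once on the second pass. This preserves input order without buffering whole lines in an array.

```shell
# Throwaway sample data; field #1 value "a" is duplicated.
printf 'a 1\nb 2\na 3\nc 4\n' > data.txt

# Pass 1 (NR == FNR): tally each field #1 value.
# Pass 2: print a line only when its field #1 value occurred exactly once.
awk 'NR == FNR { count[$1]++; next } count[$1] == 1' data.txt data.txt
# prints:
# b 2
# c 4
```

The two-pass approach only stores one counter per distinct field #1 value, rather than every line seen so far.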
How do you know that the output you have received is not correct? What output would convince you that it is correct?