Removing duplicate records in a file based on a single column (explanation)

I was reading this thread. It looks like a simpler way to say this is to keep only the lines that are unique based on field (column) 1.

Can someone explain this command please? How are there no errors from using the same filename twice?

awk -F"," 'NR == FNR {  cnt[$1] ++} NR != FNR {  if (cnt[$1] == 1) print $0 }' filer.txt filer.txt

When NR==FNR, awk is reading the file for the first time, and the first block increments a counter for each value of field 1. On the second pass (when NR > FNR), it prints only those records whose field 1 is unique. The command can be shortened to:

awk -F, 'NR==FNR{C[$1]++; next} C[$1]==1' infile infile
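For concreteness, here is the same command run against a small made-up input file (the name infile and its contents are just for illustration):

$ cat infile
a,1
b,2
a,3
c,4
$ awk -F, 'NR==FNR{C[$1]++; next} C[$1]==1' infile infile
b,2
c,4

The value a appears twice in field 1, so both of its lines are dropped; b and c each appear exactly once, so their lines are printed (a condition with no action prints $0 by default).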

Why would there be errors from reading the same file twice? After awk finishes reading the file the first time, it closes it and then reopens it for the second pass.

Can you please elaborate on this? I still don't understand what is going on.
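One way to see the two passes is to print awk's bookkeeping variables while it reads the same (hypothetical) infile from above twice:

$ awk '{print FILENAME, "FNR=" FNR, "NR=" NR ":", $0}' infile infile
infile FNR=1 NR=1: a,1
infile FNR=2 NR=2: b,2
infile FNR=3 NR=3: a,3
infile FNR=4 NR=4: c,4
infile FNR=1 NR=5: a,1
infile FNR=2 NR=6: b,2
infile FNR=3 NR=7: a,3
infile FNR=4 NR=8: c,4

NR counts records across all input files and never resets, while FNR restarts at 1 for each file. So NR==FNR is true only while the first copy of the file is being read; once the second copy starts, NR has run ahead of FNR and only the second block (or, in the shortened version, the bare condition) fires.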

I didn't know that you could pass two files to awk. What are some other uses of passing two files to awk?
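The NR==FNR idiom is the usual way to make the first file act as a lookup table for processing the second. As one sketch (keys.txt and data.csv are made-up names for illustration), this prints only the lines of data.csv whose first field appears anywhere in keys.txt:

awk -F, 'NR==FNR{keep[$1]; next} $1 in keep' keys.txt data.csv

The same shape covers join-like tasks in general: build an array from file one (counts, mappings, allowed keys), then consult it while scanning file two.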