Find line number of bad data in large file

Hi Forum.

I tried searching the forum for the following scenario, but was not able to find anything.

Let's say I have a very large file that has some bad data in it (for example, 0.0015 in the 12th column) and I would like to find the line number and remove that particular line.

What's the easiest way to do so?

For a smaller file, I could open it in the vi editor, search for the bad data in the specific column, and just delete that row.

But for a larger file, using the vi editor is out of the question.

I cannot really use grep -v "0.0015", since 0.0015 could be a valid value on other rows, in columns other than the 12th.

I do not know the line number where the bad data resides.

Thanks.

Take a look at this similar request:

Assuming your columns are space/tab separated

nawk '$12 != 0.0015' myFile
awk '$12 != 0.0015' inputFile
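
If you also want to know which line the bad value is on, awk's built-in NR variable holds the current line number, so something like this (same whitespace-separated columns, same inputFile as above) should print the offending line along with its number:

awk '$12 == 0.0015 {print NR": "$0}' inputFile

And since awk does not edit in place, one way to actually remove the line is to write the filtered output to a temporary file (inputFile.clean is just an example name) and move it back over the original:

awk '$12 != 0.0015' inputFile > inputFile.clean && mv inputFile.clean inputFile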