Script to pull hashes out of large text file

I am attempting to write a script that will pull NTLM hashes out of a text file containing about 500,000 lines of data. Not all accounts have hashes, and I only need the lines for the accounts that do.

Here is a sample of what the data looks like:

#Mango Chango A (a):$NT$547e2494658ca345d3847c36cf1fsef8:::

There are thousands of other lines in the file, but lines like that one are what I need taken out. About 100 lines follow that pattern, and I do not want to search through the entire file for them manually.

Is there an easy script or something I can run in Linux to pull lines that follow that pattern out of the file?

Thank you!

What is it about that line that meets your criteria for removal?
Also, it would probably be helpful to see some 'good' lines, to make sure a rule does not touch those records inadvertently.

I just want to pull that data out of the file. By "pull out" I do not mean cut it from the file, but just copy it and export it into a different file.

I need all the hashes from the txt file copied into a separate file. The line I provided is the 'good' line I need copied out into another file. There are about 100 additional lines in the file with the same syntax that I need copied out as well.

Is this bad?

#Mango Chango A (a):$NT$547e2494658ca345d3847c36cf1fsef8:::

This worked:

awk -F: '($2 ~ /\$NT\$/)' filename
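To copy the matching lines into a new file instead of printing them to the terminal, redirect the output (hashes.txt here is just an example name):

awk -F: '($2 ~ /\$NT\$/)' filename > hashes.txt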

It would have been very helpful if you had told us up front what the identifier for the wanted lines is; that was the information joey was trying to get from you. So it is $NT$. That's all we needed to know :wink: Maybe next time.

No need for the parentheses:

awk -F: '$2 ~ /\$NT\$/' infile
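For the record: -F: sets the field separator to a colon, and $2 ~ /\$NT\$/ selects every line whose second field contains the literal $NT$ (the dollar signs are escaped because $ is special in regular expressions); awk's default action is to print each selected line. If field position does not matter here, a fixed-string grep would work as well, though note it matches $NT$ anywhere in the line, not only in the second field:

grep -F '$NT$' infile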