Delete duplicate lines... with a twist!

Hi, sorry in advance: I'm no coder, so I came here counting on your free time and goodwill, hoping to be spoon-fed some good code. I'll try to be quick and concise!

I've got a file with 50k lines like this:

"Heh, heh. Those darn ninjas. They're _____."*wacky
The "canebrake", "timber" & "pygmy" are types of what?*rattlesnakes
Science : The second space shuttle was named ------*challenger

Problem is that a similar line (but usually not exactly the same) may appear anywhere else in the file, and it needs to be recognized as a duplicate and deleted!

My example of what could be found and should be recognized (and deleted) as a duplicate:

the 'canebrake', 'timber' & 'pygmy' are types of what*rattleSNAKES
SCIENCE::: the;second;space;shuttle;was;named ??????*challenger

So I guess the algorithm should basically do this:

  1. from each line read only the letters [a-z], [A-Z] and digits [0-9], disregarding any spacing, special characters or punctuation

  2. compare with every other line (reduced the same way) and, if the same arrangement of letters and digits is found (ignoring spacing, case, special chars...), delete one of the lines (doesn't matter which one)
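Spelled out, the two steps amount to building a normalized key per line and keeping only the first line seen for each key; a minimal python sketch (python being one of the languages the OP lists as acceptable; function names are my own):

```python
import re

def normalize(line):
    """Step 1: keep only letters and digits, lowercased; drop spacing/punctuation."""
    return re.sub(r"[^a-z0-9]", "", line.lower())

def dedupe(lines):
    """Step 2: keep the first line seen for each normalized key."""
    seen = set()
    kept = []
    for line in lines:
        key = normalize(line)
        if key not in seen:
            seen.add(key)
            kept.append(line)
    return kept
```

One pass with a set lookup per line, so it handles 50k lines without the quadratic line-vs-every-other-line comparison.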

Scripting language doesn't matter... perl, python, ruby, vi, awk, sed... anything goes =) (using archlinux box)

Much appreciated!

awk '{s=tolower($0);gsub("[^a-z]","",s);x[s]=$0} END {for(i in x) print x[i]}' file

Thanks, it worked.

But a slight observation: I had some 200 lines in the file that differ only by numbers, and this code (incorrectly) counts them as duplicates.
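(For the record, those collisions come from the `gsub("[^a-z]","",s)` step: digits are stripped along with the punctuation, so lines that differ only in numbers reduce to the same key. A minimal python illustration, with hypothetical question lines:)

```python
import re

# mirrors the awk key: tolower($0) followed by gsub("[^a-z]","",s)
letters_only = lambda s: re.sub(r"[^a-z]", "", s.lower())

a = "Round 1 : name the capital of France*paris"   # hypothetical lines that
b = "Round 2 : name the capital of France*paris"   # differ only in a digit

print(letters_only(a) == letters_only(b))  # prints: True -> wrongly flagged as duplicates
```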

Not sure what you mean... can you post a sample of what that input file looks like?

$
$ cat f42
"Heh, heh. Those darn ninjas. They're _____."*wacky
The "canebrake", "timber" & "pygmy" are types of what?*rattlesnakes
Science : The second space shuttle was named ------*challenger
the quick brown 123 fox jumps over the lazy ?@! dog
the 456 quick brown fox jumps over the ~*%# lazy dog
123 the quick brown @%#$!^ fox jumps over the lazy ~()& dog
$
$
$
$ perl -lne '$h=$_; s/[^\w]|_//g; tr/A-Z/a-z/; s/(.)(?=.*?\1)//g;
             $_=join "",sort split "";
             print $h if not defined $x{$_}; $x{$_}++
            ' f42
"Heh, heh. Those darn ninjas. They're _____."*wacky
The "canebrake", "timber" & "pygmy" are types of what?*rattlesnakes
Science : The second space shuttle was named ------*challenger
the quick brown 123 fox jumps over the lazy ?@! dog
the 456 quick brown fox jumps over the ~*%# lazy dog
$
$
$

tyler_durden


Sure. It's a 5 MB compilation of trivia questions, one question per row with * as the separator before the answer (the file will be used by an IRC trivia bot). The aim is to automatically weed out as many duplicate questions as possible. There is a sample in my first post, but here is a bigger chunk of the file: sample trivia - Pastebin.com, which also shows the entries that get selected as duplicates and deleted by your code; these are the ones starting with "Algebra : "

thx, tyler_durden, I'll try this perl code in a moment

edit:
tyler_durden's perl code shrunk questions from 55983 lines to 20915
shamrock's awk code shrunk questions from 55983 lines to 40724

I have yet to compare in detail (manually? :<) but I think the perl code ate too many 'duplicates'. Can't believe it's more than half, but I don't know yet; I may be wrong, have to confirm.

Is * the only non-alphanumeric character in the input file? That would make it easy... but is that really the case, since your original post had others... If you define it clearly, a better awk solution can be given...


Sorry, the mention of '*' in my last post is irrelevant to the problem/solution.

Again, I believe your code is the way to go; its only problem is that when it searches for duplicates it uses only a-z as its criterion instead of a-z and 0-9.

I've been trying to improve your code on my own and I came up with:

awk '{s=tolower($0);gsub("[^[:alnum:]]","",s);x[s]=$0} END {for(i in x) print x[i]}' file

which reduces my question file from 55983 lines to 40907 (it no longer deletes the "Algebra :" and similar entries), and I'm quite happy with it.

edit: tyler's perl code just eats too many lines, I have no idea why...
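(A likely explanation, for the record: the perl one-liner does more than strip non-alphanumerics. The `s/(.)(?=.*?\1)//g` step deletes every character that reappears later in the line, and `join "",sort split ""` sorts what's left, so each line is reduced to the sorted set of its distinct characters. Any two questions built from the same inventory of letters and digits then collide, however different their wording, which matches the transcript above where the "123 the quick brown ... dog" line was dropped as a duplicate. A python sketch of that key, assuming the same normalization:)

```python
import re

def perl_key(line):
    # mirrors: s/[^\w]|_//g; tr/A-Z/a-z/; s/(.)(?=.*?\1)//g; join "",sort split ""
    s = re.sub(r"\W|_", "", line).lower()
    # dropping repeated characters then sorting is the same as the
    # sorted set of distinct characters
    return "".join(sorted(set(s)))

a = "the quick brown 123 fox jumps over the lazy ?@! dog"
b = "123 the quick brown @%#$!^ fox jumps over the lazy ~()& dog"
print(perl_key(a) == perl_key(b))  # prints: True -> same character set, so one line is dropped
```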