Help with Perl script for identifying dupes in column 1

Dear all,
I have a large dictionary database which has the following structure

source word=target word
e.g.
book=livre

Since the database is very large, in spite of all the care taken it sometimes happens that a source word is repeated

e.g.
book=livre
book=tome

Since I want to keep only unique words in the database and remove all dupes, I wrote the following Perl script to solve the problem. The script reads a database and writes the singletons followed by the dupes to a separate file.

#!/usr/bin/perl

$dupes = $singletons = "";		# This goes at the head of the file

do {
    $dupefound = 0;			# These go at the head of the loop
    $text = $line = $prevline = $name = $prevname = "";
    do {
	$line = <>;
	$line =~ /^(.+)\=.+$/ and $name = $1;
	$prevline =~ /^(.+)\=.+$/ and $prevname = $1;
	if ($name eq $prevname) { $dupefound += 1 }
	$text .= $line;
	$prevline = $line;
    } until ($dupefound > 0 and $text !~ /^(.+?)\=.*?\n(?:\1=.*?\n)+\z/m) or eof;
    if ($text =~ s/(^(.+?)\=.*?\n(?:\2=.*?\n)+)//m) { $dupes .= $1 }
    $singletons .= $text;
} until eof;
print "SINGLETONS\n$singletons\n\DUPES\n$dupes";

While this works well for a small-sized database, the script does not identify all dupes in one pass and I have to repeat the passes. None of my tweaks to the script has solved the problem.
Could someone please point out the error in the script, and explain the correction?
Many thanks in advance for your help, and all good wishes for the New Year, since this is my first post of 2015.

Did you really need to do this in Perl? Have you tried using the sort command? You would sort uniquely based on the key (the first column field) that is supposed to be unique. It is not clear what you mean by singletons and dupes. In your example, you show

book=livre
book=tome

What is the output supposed to look like for this example?
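As a concrete sketch of the sort-based approach suggested above (the file name `dictionary_file` and the sample entries are assumed, and which duplicate is kept depends on the policy you want; here the first translation seen wins):

```shell
# Hypothetical sample in the thread's source=target format
printf 'book=livre\nbook=tome\ncat=chat\n' > dictionary_file

# -t'=' sets the field separator, -k1,1 compares only the source word,
# -u keeps one line per key, -s keeps the first occurrence from the input
sort -t'=' -s -u -k1,1 dictionary_file
# book=livre
# cat=chat
```

Note this silently drops the extra translations; if you want to inspect the dupes rather than discard them, the approaches later in the thread are more suitable.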

I am sorry for responding so late, but my router was down and I did not have access to the net.
Basically, my main aim was as follows:

  1. I have a large database (a dictionary) of around 200,000 entries.
  2. Each entry has, as I mentioned in my post, the structure
Source language=target language
  3. Since an entry (word or expression) in the source language at times maps to more than one gloss in the target language, as in the French example I provided, the entry on the left-hand side (source language) is repeated.
  4. I wrote the script to identify such duplicate entries. The script reads through the database and spews out a file which is divided under two headers:
Singletons and Dupes

However, when I run it on such a voluminous database, the script does not identify all the dupes.
The Singletons section in fact contains dupes, showing that the script is not functioning correctly.
I wanted to know where I goofed up and how the script can be modified to perform as it should in a single run.
I hope I have explained the situation clearly, and once more, my apologies for the delay in responding.

I won't be able to help in Perl, but maybe this will help:

awk -F"=" '{print $1}'  dictionary_file | sort | uniq -c | awk '{print $2 >> "numrepeats_"$1"_list"}'

This will spit out one file per repeat count of the left-hand side of the =.

All the singletons will be in the file numrepeats_1_list, all the dupes in numrepeats_2_list, triplets in numrepeats_3_list, and so on...

You can use the paste command on all these output files to get a single file with multiple columns.
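To make the pipeline concrete, here is a run on a toy file (the sample entries are made up; `uniq -c` prefixes each word with its count, which the second awk uses as part of the output file name):

```shell
# Start clean: the second awk appends (>>), so stale files would accumulate
rm -f numrepeats_*_list
printf 'book=livre\nbook=tome\ncat=chat\n' > dictionary_file

# First field only, sorted, counted; then one output file per repeat count
awk -F"=" '{print $1}' dictionary_file | sort | uniq -c |
awk '{print $2 >> "numrepeats_"$1"_list"}'

cat numrepeats_1_list   # cat   (the singleton)
cat numrepeats_2_list   # book  (the dupe)
```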

Alternatively, if you want to find only the duplicate entries, as a list:

awk -F"=" '{print $1}'  dictionary_file | sort | uniq -d
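On a toy file in the thread's format (sample data assumed), `uniq -d` prints only the source words that occur more than once:

```shell
printf 'book=livre\nbook=tome\ncat=chat\n' > dictionary_file

# Extract source words, sort so repeats are adjacent, print duplicates once
awk -F"=" '{print $1}' dictionary_file | sort | uniq -d
# book
```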

If sorting is an issue, try finding dupes with:

awk -F"="  '
{s[$1]++}
END {
  for(i in s) {
    if(s[i]>1) {
      print i
    }
  }
}' dictionary_file
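Run on a small sample (note the END-block comparison must be on the element, `s[i] > 1`, not on the array itself), this prints each source word that appears more than once, with no sort needed:

```shell
printf 'book=livre\nbook=tome\ncat=chat\n' > dictionary_file

awk -F"=" '
{s[$1]++}                      # count occurrences of each source word
END {
  for (i in s)
    if (s[i] > 1)              # only words seen more than once
      print i
}' dictionary_file
# book
```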

Many thanks. I will try it out and get back to you.


It worked. Many thanks. I had to modify the script slightly since I work in a Windows environment.
However, I am still curious why my Perl script failed.

Try also

awk -F= '$1 in DUP      {next}

         $1 in SNG      {DUP[$1]
                         delete SNG[$1]
                         next}

                        {SNG[$1]}

         END            {print "SINGLES"
                         for (i in SNG) print i
                         print "DUPLICATES"
                         for (i in DUP) print i}
        ' file
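On a toy input (sample data assumed), this single unsorted pass classifies each source word on first and second sight; note that awk's `for (i in array)` iteration order is unspecified, so on larger files the words within each section may come out in any order:

```shell
printf 'book=livre\nbook=tome\ncat=chat\n' > file

awk -F= '$1 in DUP      {next}
         $1 in SNG      {DUP[$1]
                         delete SNG[$1]
                         next}
                        {SNG[$1]}
         END            {print "SINGLES"
                         for (i in SNG) print i
                         print "DUPLICATES"
                         for (i in DUP) print i}
        ' file
# SINGLES
# cat
# DUPLICATES
# book
```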

In Perl...

#! /usr/bin/perl
use strict;
use warnings;
open (my $words, '<', $ARGV[0]) or die "Cannot open $ARGV[0]: $!";
my (%seen,@unique,@dupe);
while (<$words>) {              # read a line into $_ each iteration
        if (/^(\w+)=\w+$/){
                if (!$seen{$1}){
                        $seen{$1}++;
                        push (@unique,$_);
                }
                else{
                        push @dupe, $_;
                }
        }
        else{
                print "Not a word definition: $_";
        }
}
print "UNIQUE WORDS\n\n", join ('',@unique),
        "DUPLICATES\n\n", join ('',@dupe);


Both the awk and Perl solutions work. Many thanks!