Script for identifying and deleting dupes in a line

I am compiling a synonym dictionary which has the following structure:
Headword=Synonym1,Synonym2 and so on, with each synonym separated by a comma.
As is usual in such cases, manual preparation of the synonyms leads to some of them being repeated, which results in dupes, as in the example below:

arrogance=affectation,affected manners,airs,array,boastfulness,boasting,bombast,braggadocio,bravado,brazenness,bumptiousness,conceit,contempt,contemptuousness,contumeliousness,contumely,coxcombry,crowing,dandyism,dash,disdain,disdainfulness,display,egotism,fanfare,fanfaronade,fatuousness,flourish,foppery,foppishness,frills and furbelows,frippery,gall,getting on one's high horse,glitter,gloating,haughtiness,hauteur,high notions,highfalutin' ways,loftiness,nerve,ostentation,overconfidence,pageantry,panache,parade,pomp,pomposity,pompousness,presumption,presumptuousness,pretension,pretentiousness,pride,putting on the dog,putting one's nose in the air,scorn,scornfulness,self-importance,shamelessness,show,showiness,affected manners,airs,array,snobbery,snobbishness,superciliousness,swagger,vainglory,vanity,affected manners

As can be seen,
affected manners
is repeated, as are quite a few other synonyms.
I had written a script which basically does the following:
places each synonym on its own line by replacing each comma with a CR/LF
sorts the synonym set and removes duplicates
rebuilds the sorted, unique synonyms into the structure Headword=syn1,syn2 etc.
Although it works, it is expensive and time-consuming considering that the number of synonym sets is around 100,000.
A Perl or awk script which does the job faster would be really appreciated. Please note that a given headword can admit up to 100 synonyms, each separated by a comma.
Many thanks for a faster solution.

This should do it:

#!/usr/bin/perl

use strict;
use warnings;

while ( my $line = <> ) {
    chomp $line;
    # Split the record into headword and comma-separated synonym list
    my ( $key, $value ) = ( $line =~ /^(.*?)=(.*)$/ );
    # Hash keys are unique, so duplicate synonyms collapse automatically
    my %hash = map { $_ => 1 } split( /,/, $value );
    print $key, "=", join( ',', sort keys %hash ), "\n";
}

Run as /path/to/script synonym.in > synonym.out
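Note that the script above sorts each synonym set alphabetically. If the original ordering of the synonyms should be preserved instead, the common Perl "seen-hash" idiom can be substituted; a minimal sketch (the helper name `dedupe_line` is just for illustration):

```perl
use strict;
use warnings;

# Remove duplicate synonyms from one Headword=syn1,syn2,... record,
# keeping the first occurrence of each synonym in its original position.
sub dedupe_line {
    my ($line) = @_;
    my ( $key, $value ) = ( $line =~ /^(.*?)=(.*)$/ );
    my %seen;                                    # synonyms already emitted
    my @uniq = grep { !$seen{$_}++ } split /,/, $value;
    return "$key=" . join( ',', @uniq );
}

print dedupe_line("pride=vanity,conceit,vanity,egotism"), "\n";
# pride=vanity,conceit,egotism
```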


Many thanks. It worked like a charm. It handled over 100,000 synsets in just 12 seconds on my machine running Vista under Windows.

Hello,
I wonder if it would be possible to add to Gimley's program. I had written a Perl script to identify duplicates in a large file which has a structure similar to Gimley's,

where Word is the headword and word1, word2, word3 are all equivalents of the word.
It so happens that sometimes two entries for the same headword can be present.

I have written a program in Perl which identifies such dupes and writes them out to a file in which singletons and dupes are clearly separated.
However, I have not been able to add the functionality of merging the duplicates into one single entry.
Thus the dupes mentioned above should merge into one single entry.

Any help given would be greatly appreciated.

#!/usr/bin/perl

use strict;
use warnings;

my $dupes = my $singletons = "";        # These accumulate over the whole file

do {
    my $dupefound = 0;                  # These are reset at the head of the loop
    my ( $text, $line, $prevline, $name, $prevname ) = ("") x 5;
    do {
        $line = <>;
        $line =~ /^(.+)=.+$/ and $name = $1;
        $prevline =~ /^(.+)=.+$/ and $prevname = $1;
        $dupefound += 1 if $name eq $prevname;
        $text .= $line;
        $prevline = $line;
    } until ( $dupefound > 0 and $text !~ /^(.+?)=.*?\n(?:\1=.*?\n)+\z/m ) or eof;
    # Move any run of same-headword lines from $text into $dupes
    if ( $text =~ s/(^(.+?)=.*?\n(?:\2=.*?\n)+)//m ) { $dupes .= $1 }
    $singletons .= $text;
} until eof;
print "SINGLETONS\n$singletons\nDUPES\n$dupes";
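For the merging itself, one possible approach (a sketch, not a drop-in addition to the script above) is to collect all synonyms per headword in a hash of hashes and print each headword exactly once. This assumes the whole file fits in memory and that the synonyms may be re-sorted; the helper name `merge_entries` is purely illustrative:

```perl
use strict;
use warnings;

# Merge records that share a headword, so each headword appears once
# with the union of its synonym sets (sorted, duplicates removed).
sub merge_entries {
    my (@lines) = @_;
    my ( %syns, @order );
    for my $line (@lines) {
        my ( $key, $value ) = ( $line =~ /^(.*?)=(.*)$/ ) or next;
        push @order, $key unless exists $syns{$key};   # remember first-seen order
        $syns{$key}{$_} = 1 for split /,/, $value;     # union via hash keys
    }
    return map { "$_=" . join( ',', sort keys %{ $syns{$_} } ) } @order;
}

print "$_\n" for merge_entries(
    "cold=chilly,freezing",
    "hot=warm",
    "cold=icy,chilly",
);
# cold=chilly,freezing,icy
# hot=warm
```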