Filtering duplicate lines

Does anybody know of a command that filters duplicate lines out of a file? Something similar to the uniq command, but that can handle duplicate lines no matter where they occur in the file.

Check the man page for sort (sort -u).
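For example, assuming the data is in a file called myfile, something like this writes the sorted, de-duplicated lines to a new file:

sort -u myfile > myfile.sorted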

Thanks, that does almost what I want. However, is there a way I can do it while preserving the original order of the data?

What is the original order of the file? Is it order or chaos?

If the file has an order (by date/time, by nodename, by some field), then you can also sort by that field (check the man page).
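For example - just a sketch, assuming the lines carry a numeric sequence number or timestamp in, say, the third field - you could strip the duplicates and then restore that order:

sort -u myfile | sort -k3,3n > myfile.out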

If it is chaos - meaning no specific order (it just came that way!) - then I believe you would need to write a script (Perl) or a program (your preference of language) to get what you are trying to do.

The order is indeed chaos. The information is to be plotted, and if the input order is lost, the plot loses meaning.

VELSTK1621-45
' ' 3031487.7 379165.3
VELSTK1621-45
' ' 3032181.8 379848.9
VELSTK1629-45
' ' 3005331.9 348245.4
VELSTK1629-45
' ' 3006027.4 348927.5
VELSTK1629-45
' ' 3006724.5 349610.6
VELSTK1629-45
' ' 3007420.4 350291.5
VELSTK1629-45
' ' 3008116.8 350974.5

I only need the first instance of a line beginning with "VEL"; however, if I sort the file, the attached information becomes jumbled.

Cheers

If the file is ordered as VEL(some number) followed by its plot lines, an even simpler script could be written to read each line, keeping the current VEL info in a variable:

Read a line - if it has a VEL in it, compare it to the VEL variable.
If it is different, write it to the new file and save it into the VEL variable.
If it is the same, read the next line.
If it has no VEL in it, write it to the new file.

Unless there is something else in the file that would mess with this, it should work.

Cheers, I think this is the inevitable conclusion/solution. I was hoping to get away with a ready-made Unix command. uniq showed such promise.

I have a few other things to be doing till I have to cross this particular bridge again.

Thanks again for the ideas.

I agree with hoghunter. Here is that logic with awk:

#!/bin/sh
awk '{
  if (substr($0,1,3) != "VEL")
    print                    # not a VEL line: always pass it through
  else if ($1 != currVEL) {
    print                    # first time this VEL value has been seen
    currVEL = $1             # remember it so repeats are skipped
  }
}' myfile
exit 0
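To run it, save the script under any name (dedupe_vel.sh is just an example here; myfile is the input file named inside the script) and redirect the output:

sh dedupe_vel.sh > myfile.dedup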

That code produces:

VELSTK1621-45
' ' 3031487.7 379165.3
' ' 3032181.8 379848.9
VELSTK1629-45
' ' 3005331.9 348245.4
' ' 3006027.4 348927.5
' ' 3006724.5 349610.6
' ' 3007420.4 350291.5
' ' 3008116.8 350974.5

Thanks for that. That's very handy, and has saved me much time.

Thanks again Jimbo & Hoghunter

This is a little late, but here is some Perl code that will also do what you want for most any file:

#!/usr/bin/perl

# RemoveDupes.pl
# Auswipe 21 Feb 2002
# Auswipe sez: "Hey, no guarantees!"
# Usage:
#
#	RemoveDupes.pl -file someTextFile

use Getopt::Long;
GetOptions("file=s");

my %dataHash    = ();
my $currentLine = 0;

if ($opt_file) {
  open(INPUTFILE, "$opt_file") || die "Error: $!";

  while ($logEntry = <INPUTFILE>) {
    chomp($logEntry);

    # remember only the first line number at which this exact line was seen
    if (!exists($dataHash{$logEntry})) {
      $dataHash{$logEntry} = $currentLine;
    };

    $currentLine++;
  };
  
  close(INPUTFILE);

} else {
  print STDOUT "You didn't select a file!\n";
};

# print the unique lines sorted by the line number where each first appeared,
# so the original order of the input file is preserved
foreach $logOutput (sort { $dataHash{$a} <=> $dataHash{$b} } (keys(%dataHash))) {
  print STDOUT "$logOutput\n";
};
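For what it is worth, when whole duplicate lines are all that need to go while keeping the original order, a one-line awk idiom does the same job (myfile is just a placeholder for the input file):

awk '!seen[$0]++' myfile > myfile.dedup

It prints each line only the first time it appears, since the seen array counts how often every line has already occurred.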