How to remove duplicated lines?

Hi, if I have a file like this:
Query=1
a
a
b
c
c
c
d
Query=2
b
b
b
c
c
e
.
.
.

How could I remove the duplicated lines under each query and append each line's count, like this:
Query=1
a 2
b 1
c 3
d 1
Query=2
b 3
c 2
e 1

Thanks!!!

An awk approach:

awk '
        /Query/ {                       # new block: flush the counts from the previous one
                for ( k in A )
                        print k, A[k]
                split ( "", A )         # portable way to empty the array
                print $0
        }
        !/Query/ {
                A[$1]++                 # tally each distinct line in the current block
        }
        END {                           # flush the final block
                for ( k in A )
                        print k, A[k]
        }
' file

uniq -c file will return the same kind of result, as long as the lines are sorted within each Query block:

$ uniq -c file
   1 Query=1
   2 a
   1 b
   3 c
   1 d
   1 Query=2
   3 b
   2 c
   1 e
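If you want the count after the item, as in the requested output, one way (just a sketch, using a here-document in place of the sorted file so it is self-contained) is to swap uniq -c's columns with awk, passing the Query= lines through untouched:

```shell
# Swap uniq -c's "count item" columns into "item count";
# Query= header lines are printed unchanged.
uniq -c <<'EOF' | awk '$2 ~ /^Query=/ {print $2; next} {print $2, $1}'
Query=1
a
a
b
c
c
c
d
Query=2
b
b
b
c
c
e
EOF
```

This keeps the one-pass simplicity of uniq while matching the requested layout.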

Sorry, actually my file is not sorted.
The input looks like this:

Query=1
a
c
a
d
c
b
c
Query=2
...

What should I do then?

Thanks!!

You can just use Yoda's awk approach above; it tallies with an array, so it doesn't depend on the lines being sorted.
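To illustrate why: here is a quick, self-contained run of that awk program against the unsorted sample (a here-document standing in for the file). The counts come out right because the array just tallies lines in whatever order they arrive; only the order of the keys printed within each block is unspecified.

```shell
# Per-block counting works on unsorted input; key output order
# within a block is unspecified by POSIX awk.
awk '
    /Query/ { for (k in A) print k, A[k]; split("", A); print }
    !/Query/ { A[$1]++ }
    END { for (k in A) print k, A[k] }
' <<'EOF'
Query=1
a
c
a
d
c
b
c
EOF
```

If you need the keys in a fixed order, each block's counts could be piped through sort.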

Thanks.
Could I ask another question?
If I have file1:
a
c
d
b

and file2:
a 33
b 55
c 66
d 77

How could I replace each line of file1 with its value from file2 and get output like this:
33
66
77
55

?

Thanks.

I think we've missed the point with the file not being wrapped in CODE tags. If this is one file, then I'm assuming that you want the counts for each block. The requested output has values for b and c in each section.

Not the prettiest solution, but this might work:-

#!/bin/ksh

mkdir /tmp/$$
while read line
do
   if [ "$line" != "${line#Query=}" ]        # true when the line starts with Query=
   then
      Section="${line#Query=}"
   else
      echo "$line" >> /tmp/$$/$Section       # append to this block's scratch file
   fi
done < file_name

for file in /tmp/$$/*
do
   echo "Query=${file##*/}"                  # strip the /tmp/$$/ prefix back off
   uniq -c "$file"                           # note: uniq only merges adjacent duplicates
done

rm -rf /tmp/$$                               # tidy up the scratch directory

Does that get you any closer? The output is the wrong way round for the counts, but that could be handled thus:-

....

for file in /tmp/$$/*
do
   echo "Query=${file##*/}"
   uniq -c "$file" | while read col1 col2
   do
      echo "$col2 $col1"
   done
done

I hope that this helps.

Robin
Liverpool/Blackburn
UK

Oh, an update before I posted!

Okay, well, addressing post number 6, try something like:-

#!/bin/ksh

while read line
do
   grep "^$line " file2 | cut -f2- -d" "     # trailing space stops "a" also matching "ab ..."
done < file1

Does that do it?

Robin

Or

awk 'FNR==NR {T[$1]=$2; next} {print T[$1]}' file2 file1
33
66
77
55
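One caveat with that one-liner, if it matters here: a key in file1 that is missing from file2 prints an empty line. A guarded sketch (keeping the original token for unknown keys; whether that is the desired fallback is an assumption, and the sample files are built inline just for demonstration):

```shell
# 'x' has no mapping in the lookup table, so it is printed as-is.
printf 'a 33\nb 55\nc 66\nd 77\n' > map.tmp
printf 'a\nc\nx\nb\n' > keys.tmp
awk 'FNR==NR {T[$1]=$2; next} {print ($1 in T) ? T[$1] : $1}' map.tmp keys.tmp
# prints: 33 / 66 / x / 55, one per line
rm -f map.tmp keys.tmp
```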