Shell script to remove duplicate lines in a file

Hi,

I am writing a shell script that needs to remove duplicate lines within a file by category.
Example:
section a
a
c
b
a
section b
a
b
a
c

I need to remove the duplicates within each category without removing duplicates across the two different sections, so one of the a's in section a and one of the a's in section b should remain.

I wanted to use uniq, but I would have to sort the file first, which takes the lines out of the sections they belong to and sorts the entire file, which I don't want. I wouldn't mind sorting within each category, though, if that is possible.
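
For example, running a plain sort file | uniq over the sample data above collapses the duplicates across both sections and scrambles the layout:

a
b
c
section a
section b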

Any help is appreciated.
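
One way is with awk: remember the current section name whenever a section header line appears, and print each line only the first time it is seen within that section (this assumes $FILE holds your filename):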

awk '/section/ { section = $2 } !x[section,$0]++' "$FILE"
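
Against the sample input above, that prints the sections in their original order with the duplicates inside each one removed:

section a
a
c
b
section b
a
b
c

Note that /section/ matches any line containing the word section, so adjust that pattern if your real section headers look different.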