How to remove duplicate IDs?

Hi,

I have a file that contains 1000s of duplicate IDs (the same ID with an upper- and a lowercase first character) as below
i/p:

a411532A411532a508661A508661c411532C411532

Requirement: I need to ignore the lowercase IDs and keep only the IDs below

o/p:

A411532
A508661
C411532
This is what I tried:

egrep -i 'A411532|A508661|C411532' in_file | sort -fut: -k1,1 > out_file

PLEASE use code tags as advised!

Your specification is a bit vague: will your input file be one line only, will your IDs always be seven alphanumeric characters, and the like? Nevertheless, try

$ sed 's:.\{7\}:&\n:g' file | tr 'a-z' 'A-Z' | sort -u
A411532
A508661
C411532
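If your sed does not honour \n in the replacement text (it is not portable), a sketch of an alternative uses fold to do the line-splitting instead; the filename ids.txt is just a stand-in for your input file:

```shell
# Stand-in input file with the sample data from above.
printf 'a411532A411532a508661A508661c411532C411532' > ids.txt

# fold -w7 chops the single line into 7-character records, one per line;
# tr uppercases them and sort -u drops the duplicates.
fold -w7 ids.txt | tr 'a-z' 'A-Z' | sort -u
```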

Thanks to RudiC and DGPickett, it's working.

Buzzme-

try also (Using RudiC's solution):

sed 's:.\{7\}:\U&\n:g; s/\n$//' file | sort -u

although some sed versions might not support this solution.
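Where \U is unavailable (it is a GNU sed extension), awk's toupper() is a portable alternative; this is a sketch assuming fixed 7-character IDs on a single line, with file as a placeholder name:

```shell
# Walk the line in 7-character steps, uppercasing each slice,
# then let sort -u remove the duplicates.
awk '{ for (i = 1; i <= length($0); i += 7) print toupper(substr($0, i, 7)) }' file | sort -u
```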

Yes, the layer for chopping the 7 byte sections into lines:

sed '
  s/.\{7\}/&\
/g
  s/\n$//
 ' in_file | . . . .

Ignore lowercase IDs:

sed 's/[a-z]....../\
/g' file
awk -F'[a-z]......' '{$1=$1}1' OFS='\n' file
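If your grep supports -o (GNU and BSD grep do; it is not in POSIX), you can also extract only the uppercase IDs directly, assuming each ID is one letter followed by six digits:

```shell
# -o prints each match on its own line, so no manual line-splitting is
# needed; the pattern keeps only IDs starting with an uppercase letter.
grep -oE '[A-Z][0-9]{6}' file | sort -u
```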

@Scrutinizer: would this allow for and remove duplicate upper case IDs?

Hi, no, it would not remove duplicate uppercase IDs; the input would need to be like in the sample.

A missing part of the requirement was whether the data could be case-converted to all lower- or all uppercase. The sort option -f can ignore case when comparing while preserving it in the output. So, if a duplicated ID is always uppercase, it is still uppercase in the output.
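A small sketch of that behaviour: sort -f folds case only for the comparison, so the surviving line keeps its original bytes (which member of a mixed-case duplicate pair survives is implementation-defined under POSIX):

```shell
# The two case-insensitive duplicates collapse to a single line,
# keeping its original case; C411532 is unique and passes through.
printf 'a411532\nA411532\nC411532\n' | sort -fu
```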