How to show one occurrence of duplicate data?

I have a variable with this value:

REPORT_MISSING=$(grep -i 'Unable to find an Entry*' cognos_env01_env13_2012-12-20-0111.log | sed 's|Unable to find an Entry for the report \(.*\) in the Security Matrix|\1|g')

This gives me the values:

Direct Authorization Error Listing.xml
Health Zipcode HMO plans.xml
Health Zipcode PPO plans.xml
Health Zipcode State CSU plans.xml
Direct Authorization Error Listing.xml
Health Zipcode HMO plans.xml
Health Zipcode PPO plans.xml
Health Zipcode State CSU plans.xml

It is a big log file and will have duplicate entries like these at run time, and I just want the first occurrence of each of these .xml values.

Required output

Direct Authorization Error Listing.xml
Health Zipcode HMO plans.xml
Health Zipcode PPO plans.xml
Health Zipcode State CSU plans.xml

I have tried

uniq

command at the end, but it is not working. Could someone please advise me on how to proceed with this?

Try this:

sort file | uniq 

Have you tried

sort -u
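As a quick check (the filenames here are made up, not from the real log), `sort -u` collapses duplicates in one step. Note that, like `sort | uniq`, the output comes out in sorted order rather than first-seen order:

```shell
# Duplicate lines in arbitrary order
printf '%s\n' "b.xml" "a.xml" "b.xml" "a.xml" |
    sort -u
# prints:
# a.xml
# b.xml
```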

For uniq, the file needs to be sorted first.

Try, at the end... :-)

sort | uniq

OR

sort -u

OR

awk '!X[$0]++'
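Unlike the `sort`-based variants, the `awk` one-liner keeps the lines in their original order, printing only the first occurrence of each. A minimal demonstration (sample filenames are illustrative):

```shell
# X[$0]++ evaluates to 0 (false) the first time a line is seen,
# so !X[$0]++ is true only on the first occurrence; awk's default
# action then prints that line.
printf '%s\n' "b.xml" "a.xml" "b.xml" "a.xml" |
    awk '!X[$0]++'
# prints:
# b.xml
# a.xml
```

This matters here because the asker wants the *first* occurrence of each .xml value, which `sort | uniq` would reorder.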

Thanks, guys! I was using

sort

and

uniq

in the wrong order. It is working fine now.
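For reference, the deduplication can also be appended directly to the pipeline from the question (log filename and sed pattern copied verbatim from the original post; this is a sketch, not tested against the real log):

```shell
# Extract report names as before, then drop repeats while
# preserving the order of first appearance.
REPORT_MISSING=$(grep -i 'Unable to find an Entry' cognos_env01_env13_2012-12-20-0111.log \
    | sed 's|Unable to find an Entry for the report \(.*\) in the Security Matrix|\1|g' \
    | awk '!X[$0]++')
```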