sort on fixed length files

Hi

How do I sort a fixed length file on a given character range and display just the duplicates?

I searched through man sort for a suitable option but couldn't find any, something similar to cut -c 1-5,25-35.

I have an alternate way of doing this using a combination of cut and awk, but it
creates extra temp files.

Any suggestions that avoid creating temp files would be helpful.

Thanks
Sach.

Show a sample of your fixed length file and the desired output.

Try using sort, cut and uniq.

sample input
515341|515310000047363758741
515341|515330100047363758741
515342|515360000020063758742
515349|515370000047363758749

desired output

515310000047363758741
515330100047363758741

I need to display all the duplicate records.

Thank You!
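Following the sort/cut/uniq suggestion above, here is one way to get that output without temp files. Note that uniq's -D (print all duplicate lines) and -w (compare only the first N characters) flags are GNU coreutils extensions, not POSIX, so this sketch assumes a GNU userland:

```shell
# Recreate the sample input
cat > file1 <<'EOF'
515341|515310000047363758741
515341|515330100047363758741
515342|515360000020063758742
515349|515370000047363758749
EOF

# Sort (keys are at the front of each line), keep every line whose
# first 6 characters repeat, then cut away the key and delimiter
sort file1 | uniq -D -w 6 | cut -c 8-
```

On the sample this prints the two 515341 payloads and nothing else.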

Try...

$ cat file1
515341|515310000047363758741
515341|515330100047363758741
515342|515360000020063758742
515349|515370000047363758749

$ awk -F '|' 'FNR==NR{a[$1]++;next}a[$1]>1{print $2}' file1 file1
515310000047363758741
515330100047363758741

Thanks Ygor for the solution, but it only works if the file doesn't contain "|" in the data.

Since it's a fixed length file, there is no guarantee that the data itself won't contain "|".

Sorry, my bad, I didn't give the correct example. Here is a small modification:

51|341|5153100@|0047363758741
51|341|51533010@0|47363758741
515342|515360000020063758742
515349|51537|0|0047363758749

output should be
5153100@|0047363758741
51533010@0|47363758741

Thanks
Sachin.

A slight modification to Ygor's solution. Since the key is now the fixed character range 1-6 rather than a field, the "|" field separator is no longer needed:

nawk 'FNR==NR{a[substr($0,1,6)]++;next}a[substr($0,1,6)]>1{print substr($0,8)}' myFile myFile
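If reading the file twice is a concern, a single-pass sketch is possible: buffer the first record seen for each key and flush it when a second one arrives. This assumes any POSIX awk (awk here stands in for nawk) and the myFile name from above:

```shell
# Recreate the modified sample input
cat > myFile <<'EOF'
51|341|5153100@|0047363758741
51|341|51533010@0|47363758741
515342|515360000020063758742
515349|51537|0|0047363758749
EOF

awk '{
    k = substr($0, 1, 6)                         # fixed-width key: chars 1-6
    if (++n[k] == 1)
        first[k] = $0                            # remember first occurrence
    else if (n[k] == 2) {
        print substr(first[k], 8)                # flush the buffered first hit
        print substr($0, 8)
    } else
        print substr($0, 8)                     # 3rd and later duplicates
}' myFile
```

This prints the payloads of all duplicate records in a single pass, at the cost of holding one buffered line per distinct key in memory.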