Increase sed performance

I'm using sed to do a find and replace, but since the files are huge and I have more than 1000 files to search, the script is taking a lot of time. Can somebody help me with a better command? Below are the details.
Input:

1
1
2
3
3
4
5
5

Here I know the file is sorted.

My cross reference file will be

1A
2B
3C
4D
5E
6F

Here, too, the file will be sorted on the first character.

The output I want is

1A
1A
2B
3C
3C
4D
5E
5E
Try awk instead of sed. The first pass (NR==FNR) loads the cross-reference file into an array keyed on the first character; the second pass looks each data line up and appends the mapped character:

 awk 'NR==FNR{a[substr($0,1,1)]=substr($0,2,1);next}a[$0]{printf "%s%s\n",$0,a[$0]}' file2 file1
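To check this against the sample data in the first post (the file names file1 and file2 are assumptions), one can run:

```shell
# Recreate the thread's sample data:
# file1 = sorted data lines, file2 = cross-reference (key char + value char).
printf '1\n1\n2\n3\n3\n4\n5\n5\n' > file1
printf '1A\n2B\n3C\n4D\n5E\n6F\n' > file2

# Pass 1 (NR==FNR): load file2 into array a, keyed on the first character.
# Pass 2: for each file1 line found in a, print the line plus its mapped value.
awk 'NR==FNR{a[substr($0,1,1)]=substr($0,2,1);next}
     a[$0]{printf "%s%s\n", $0, a[$0]}' file2 file1
```

The output matches the desired list above (1A, 1A, 2B, 3C, 3C, 4D, 5E, 5E).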


And if your awk version accepts empty FS:

 awk -v FS="" -v OFS="" 'NR==FNR{a[$1]=$2;next}a[$0]{print $0,a[$0]}' file2 file1

Just a hint:
If you experience performance problems, you should probably move beyond the shell at this point and use a programming language; its I/O can be far faster than what you can get from shell scripts.
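Short of switching languages, one concrete saving with 1000+ files is to avoid starting one sed/awk process per file. A single awk invocation can load the cross-reference once and stream through all the data files (a sketch; the data file names are made up, and per-file output would need extra handling, e.g. redirecting on FILENAME):

```shell
# Made-up data files standing in for the 1000+ real ones.
printf '1\n2\n' > data1
printf '3\n5\n' > data2
printf '1A\n2B\n3C\n4D\n5E\n6F\n' > file2

# One process: the lookup table from file2 is built once and shared
# across every data file listed after it.
awk 'NR==FNR{a[substr($0,1,1)]=substr($0,2,1);next}
     a[$0]{printf "%s%s\n", $0, a[$0]}' file2 data1 data2
```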

Hi

I need further help on the issue posted by gpaulose. I need an awk command to append a string at the end of every line in a text file where a certain string match is found. At present I am using grep to find the match and sed to append, but the files are huge and there are hundreds of them, so sed is taking too much time.

Can someone tell me the awk command to append a string at the end of each matching line? Performance is very important here.
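Something like the following is what I am after (the pattern, suffix, and file name here are just placeholders):

```shell
# Placeholder data: append '_FOUND' to every line containing 'ERROR'.
printf 'ok line\nERROR in step 1\nanother ok\nERROR again\n' > sample.txt

# /ERROR/ selects matching lines; $0 = $0 suf appends the suffix;
# the trailing 1 prints every line, changed or not.
awk -v suf='_FOUND' '/ERROR/{$0 = $0 suf} 1' sample.txt
```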

Thanks

Welcome to the forums! :slight_smile:

Please read the rules of the forum.
This should have been a new post.

Please post what you have tried so far.