How to test many text files generated by many commands?

Hello, I have hundreds of text files to test. They are generated by many different commands. I need to check whether the word at a specific position is right, or whether a word in file A equals a word in file B. The files are not all the same. I decided to use shell to test each file, which means I would have to write hundreds of test scripts. My God!
Can anyone suggest a handy way to do all these tests? Or is there a tool or language made for this kind of testing?
For example:

CLS_SBB => log_display
system operation number:19
No.      Type    Time    Ele type        Ele ID          Event type      Attribute
0        02      459     07      00      92      00      00      00      01
1        02      00      07      00      83      01      cb      ce      08
2        02      00      07      00      83      01      cb      ce      08
3        02      00      07      00      82      01      80      00      00

CLS_SBB => log_show
------------------------------------------------ Local Event Log------------------------------------------------------------------------
No.    Time            Element Name                      Event Type                 Event Dir        Event Range       Event Data              Severity
0001   00:00:18:33     Leopard-H Canister A              Log Repository Cleared     N/A              N/A               N/A                     Info
0002   00:00:00:00     Leopard-H Canister A              Firmware Configuration Invalid              N/A               N/A                     Warning
0003   00:00:00:00     Leopard-H Canister A              Firmware Configuration Invalid              N/A               N/A                     Warning
0004   00:00:00:00     Leopard-H Canister A              POST Error                 N/A              N/A               POST step ID 128        Warning

Compare rules:
Column: No.
The corresponding values should be equal, e.g. 0 = 0001.
Column: Time
e.g. 459 = 18*60+33
Column: Ele type
e.g. 07 should be Leopard-H Canister, and other device IDs correspond to other devices.
....
I have to do this for many files, and it is too much to do by hand.
Thanks!

Not much to go on specifically here, so I will give general advice. First, try awk, which splits lines into words (you can define what a word is). It's handy and can solve many complex problems, but not everything. If awk is not enough, Perl will handle the job.
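
For example, a minimal sketch of how awk picks words out of a line (the file name here is just a placeholder):

awk '{ print $4 }' somefile.txt        # 4th whitespace-separated word of every line
awk -F: '{ print $4 }' somefile.txt    # same, but ":" is the separator instead of blanks

The -F option (or the FS variable) is how you define what counts as a word.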

They may not all be the same, but those two do at least look similar: The first three lines are to be ignored.

The third line is column information.

They're all separated either by tabs, or several spaces in a row.

Do they have anything else in common?
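
If they do share that much, something like this already pulls single columns out (a rough sketch; the file names are made up):

# Data rows start at line 4 in both samples, so skip the first three lines.
# Default splitting (any run of blanks/tabs) is fine for the first log:
awk 'NR > 3 { print $4 }' log_display.txt

# The second log has single spaces inside values ("Leopard-H Canister A"),
# so count only a tab or a run of two or more blanks as a separator:
awk -v FS='[ \t][ \t]+|\t' 'NR > 3 { print $3 }' log_show.txt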

Yes, you are right. Every column has a relationship with one column in another file. The comparison work is just too much.

So..... right about which things?

You said they were all different, too... In what ways are they different?

The format of every file is similar: each column stands for one meaning and can be compared with the corresponding column in another file.

yeah, learn to use awk.

:wall:

How similar?

Try this:

#!/bin/sh

# Pull row R, column C out of each file (the three header lines are skipped;
# a tab or a run of two or more blanks separates columns).
CELL1=`awk -v FS='[ \t][ \t]+|\t' -v R=5 -v C=17 'NR == (R + 3) { print $C; exit }' file1`
CELL2=`awk -v FS='[ \t][ \t]+|\t' -v R=3 -v C=4  'NR == (R + 3) { print $C; exit }' file2`

[ "$CELL1" = "$CELL2" ] || echo "Cells differ"