Wanted: best way to validate delimited file records

Actually, I posted about this issue before, but many folks misunderstood my question.
We are validating the records of a delimited file.
The delimited file will have data like this:

Aaaa|sdfhxfgh|sdgjhxfgjh|sdgjsdg|sgdjsg|
Aaaa|sdfhxfgh|sdgjhxfgjh|sdgjsdg|sgdjsg|
Aaaa|sdfhxfgh|sdgjhxfgjh|sdgjsdg|sgdjsg|

So we are checking whether every field in the records of the file we receive has a valid length or not.
NOTE: the structure of the file is stored in a Teradata DB; we fetch the structure of the file and then validate against it.
In the configured Teradata table we get the following details:
column name; order number, which is the position of the column in the table (it will be 1, 2, 3, and so on); and the length of the column.
For example, if we are checking a file of 3 columns, then in the table we will have 3 columns of size VARCHAR(5), so every field in the file should have length <= 5.
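For illustration (the column names below are made up), the structure fetched from Teradata might land in a flat file like this, one line per column (column name, order number, length); this is what $RPT_FILE in the script below is expected to hold:

CUST_NM  1 5
CUST_ID  2 5
CUST_TYP 3 5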

The script I wrote:

#------------------------------------------
#  Read through the file and check each column's length
#------------------------------------------
logNote "Reading through the temp file and checking the column lengths"

while read col_nm col_order_num col_len
do
    typeset -i col_len
    typeset -i col_len_good

    col_len_good=$((col_len + 1))

    logNote "col_nm : $col_nm"
    logNote "col_order_num : $col_order_num"
    logNote "col_len : $col_len"
    logNote "col_len_good : $col_len_good"

    # Records whose field exceeds the allowed width go to the .bad file.
    # FS must be set in BEGIN, not in the main block, or the first
    # record gets split on whitespace instead of "|".
    awk -v col_ord="$col_order_num" -v col_l="$col_len" \
        'BEGIN{FS="|"} length($col_ord) > col_l' "$Src_File" >> "$Src_File.bad"

    # The records that fit are kept for the next column's pass.
    awk -v col_ord="$col_order_num" -v col_l="$col_len_good" \
        'BEGIN{FS="|"} length($col_ord) < col_l' "$Src_File" > "$Src_File.temp"

    mv "$Src_File.temp" "$Src_File"

done < "$RPT_FILE"
==============================================================

In the script, col_nm, col_order_num, and col_len are fetched from the table:
col_nm = column name
col_order_num = order number, i.e. the position of the column in the table (1, 2, 3, and so on)
col_len = length of the column

It's working fine, but we have a performance issue: the script makes two full passes over the file for every column.
Can anyone come up with a better solution?
Mostly using awk; an awk array might be easy, I guess.
Thanks in advance.

If you just want to validate the general form of records, sed gives you regex matching and output demultiplexing:

sed '
  /^[A-Z][a-z]\{3\}|[a-z]\{8\}|[a-z]\{10\}|[a-z]\{7\}|[a-z]\{6\}|$/{
    w good_recs
    d
    }
 ' in_file >bad_recs

You just need to generate the regex from your file specs.
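For instance, a rough sketch of generating that regex, assuming $RPT_FILE holds name/order/length per line sorted by order number, that a field may be 0 to col_len non-pipe characters, and that every record ends with a trailing delimiter as in the sample data:

regex='^'
while read col_nm col_order_num col_len
do
    # each field: up to col_len non-pipe characters, then the delimiter
    regex="${regex}[^|]\{0,${col_len}\}|"
done < "$RPT_FILE"

sed "
  /${regex}\$/{
    w good_recs
    d
    }
 " in_file > bad_recs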

For bulk processing speed, write a simple C program that reads lines, checks the line length, and locates all the pipes (storing their offsets in a big array of integers); those offsets can be checked against field count and field length command-line arguments, and any other field filters you desire can be added (integer, decimal, float, text, no white space, upper case, etc.). Of course, you could have a standard regex for each field type you care to filter.
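Since the thread is headed toward awk anyway, here is a rough sketch of that per-field-type regex filtering idea in awk (the type names and the field-to-type mapping are made up for illustration):

awk -F'|' '
BEGIN {
    # one regex per field type
    re["int"]  = "^[0-9]+$"
    re["text"] = "^[A-Za-z ]*$"
    # expected type of each field, by position (illustrative)
    type[1] = "text"; type[2] = "text"; type[3] = "int"
}
{
    for (i in type)
        if ($i !~ re[type[i]]) { print > "file.bad"; next }
    print > "file.good"
}' in_file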

Thanks for the reply, but I want it in ksh.
Can anyone help me out with this, please?

I have the following code for the same, but I don't know how it works, and when I tried it, it showed an error in the last line:

awk '
  NR==FNR{                                 # While the first file is being read (only then are FNR and NR equal)
    W[$2]=$3                               # create an (associative) array element for the column widths with the second
                                           # field (the column order number) as the index, using the field separator (see below)
    next                                   # Proceed to the next record
  }
  {
    for(i in W)                            # For every line of the second file, for every column in array W:
      if(length($i)>W[i]){                 # if the length of the corresponding field exceeds that column's max width
                                           # (comparing against W alone, as in the original, is the error: an array
                                           # cannot be used in a scalar context), then
        print > "file.bad"                 # print that record of the second file to "file.bad"
        next                               # and proceed to the next record
      }
  }
  1                                        # If no field is longer than its max column width, print the record
' FS='[^0-9]*' colwidthfile FS=\| file     # Set FS to any sequence of non-digits for the first file; set it to "|" for the second.
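To see how it works, here is a tiny worked example (the file names and widths are made up). Given:

cat colwidthfile
CUST_NM 1 4
CUST_ID 2 8

cat file
Aaaa|sdfhxfgh|
Aaaaaa|sdfhxfgh|

With FS set to '[^0-9]*' for the first file, $2 and $3 of each colwidthfile line are the digit runs, so W[1]=4 and W[2]=8. Running the script then prints "Aaaa|sdfhxfgh|" (both fields fit) to standard output and sends "Aaaaaa|sdfhxfgh|" to file.bad, because field 1 is 6 characters against a maximum of 4.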

Can anyone help with this?

The shells can parse fields if you change $IFS to include the right field separators, preferably in a subshell so that life is not severely bent for the rest of the shell. In place of the space and tab separators in $IFS, put your delimiters, and then the fields are separated for the purposes of 'read', 'for myvar in', or arguments on a command line or shell function call. It's pretty simple, really. Something like "while read f1 f2 f3 f4 f5 f6 f7 f8; do . . . done" suggests itself. You can also subdivide fields in shell using ${varname%}, %%, #, or ##. Substrings are a bit more work in ksh, but bash has this more gracefully built in: Unix shell - View topic - How to get a substring in ksh
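A minimal ksh sketch of that idea, assuming the five-field layout from the sample data and a fixed width of 5 for every field (the real widths would come from $RPT_FILE):

while IFS='|' read -r f1 f2 f3 f4 f5 rest
do
    bad=0
    for f in "$f1" "$f2" "$f3" "$f4" "$f5"
    do
        # ${#f} is the field's length; 5 stands in for the configured width
        [ ${#f} -gt 5 ] && bad=1
    done
    if [ $bad -eq 1 ]
    then
        print -r -- "$f1|$f2|$f3|$f4|$f5|" >> file.bad
    else
        print -r -- "$f1|$f2|$f3|$f4|$f5|" >> file.good
    fi
done < in_file

And the parameter-expansion operators subdivide a record without touching IFS:

line='Aaaa|sdfhxfgh|rest'
first=${line%%|*}      # "Aaaa"           (everything before the first "|")
rest=${line#*|}        # "sdfhxfgh|rest"  (everything after the first "|")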