How to check for NULL values in a file, column by column, when columns are NOT NULL

Hi All,
I have a table with 10 columns. Some columns (the 2nd, 4th, 5th, 7th, 8th and 10th) are NOT NULL columns. I will receive a tab-delimited file and want to check it column by column, generating a separate error code for each column, e.g. 102 if the 2nd column value is NULL, 104 if the 4th column value is NULL, and so on. I am a newbie to Unix and any help will be appreciated.

Regards,
Mandab

awk -F"|" ' { 
       if ( $2 ~ /^ *$/ ) printf("102 ")
       if ( $4 ~ /^ *$/ ) printf("104")
       printf("\n")
} ' file

I tried it with the sample data, but it is not giving the expected output.
The sample data is :
545689512<tab>20070424<tab>20070414<tab>456.25<tab>20061121<tab>pqr
<tab>20060726<tab>20060524<tab>800.12<tab><tab>abc
24<tab><tab>05242006<tab>22.15<tab>20050815<tab>xyz
57<tab>20040425<tab>20041214<tab><tab>20040628<tab>stv

Data starts from the 3rd row of the file; the following is the script I tried:

#!/bin/ksh

awk -F"|" 'NR>=3 { ## data is from 3rd row onwards
if ( $2 ~ /^ *$/ ) printf("102")
if ( $4 ~ /^ *$/ ) printf("104")
printf("/n")
}' $1 ## filename

The output I got is:
$ test6.ksh test1.txt
102104/n102104/n102104/n102104/n$

Mandab,

Try replacing the -F"|" with -F"<tab>" where <tab> is the tab character.
Also printf("/n") should be printf("\n")

Thank you for your quick response. I tried replacing it as you said, but I am still getting errors. The script is:
#!/bin/ksh

awk -F"<tab>" 'NR>=3 {
if ( $2 ~ /^ *$/ ) printf("102")
if ( $4 ~ /^ *$/ ) printf("104)
printf("\n")
}' $1

Errors I am getting:
$ test7 test1.txt
awk: newline in string near line 3
awk: syntax error near line 4
awk: illegal statement near line 4

I had intended that <tab> be replaced literally by the tab character (CTRL-I).

You are missing a close " after 104.

I am confused. Is your file delimited by the tab character or by set of characters "<tab>" shown in your post?
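For reference, here is a minimal, self-contained sketch of the intended script. It spells the delimiter as the escape \t, which POSIX awk expands to a real tab in the -F argument, so you do not have to embed a raw tab character in the script. The sample file is created inline purely for illustration; the real file would add NR>=3 to skip the two header rows.

```shell
#!/bin/sh
# Build a tab-delimited sample: row 2 has an empty 2nd field,
# row 3 has an empty 4th field (illustrative data, not the real file).
printf '5456\t20070424\t20070414\t456.25\t20061121\tpqr\n'  > sample.txt
printf '24\t\t05242006\t22.15\t20050815\txyz\n'            >> sample.txt
printf '57\t20040425\t20041214\t\t20040628\tstv\n'         >> sample.txt

# -F'\t' is expanded to a literal tab by POSIX awk.
# (Add NR>=3 before the { if the real file has two header rows.)
awk -F'\t' '{
    if ($2 ~ /^ *$/) printf("102 ")
    if ($4 ~ /^ *$/) printf("104 ")
    printf("\n")
}' sample.txt
```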

Excellent!! It's working fine now.
But there is one small problem: if there is an error in the 2nd field and also in the 4th field, then only the first error, "102", should be printed, not both "102" and "104", based on the serial order of the fields. Also, can I store the error value in a variable so that I can use it for other purposes in my script (outside awk)?

Here is the script:
#!/bin/ksh

awk -F"," 'NR>=3 {
if (length($2)!= 8 || $2 ~ /[^0-9]/ || $2 ~ /^ *$/ ) {var1="102"}
if ( $4 ~ /^ *$/ ) {var1="104"}
{printf var1}
printf("\n")
}' $1

The output:

102
104
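One way to get only the first error per line, sketched under the assumption that the real file is tab-delimited: chain the tests with else-if so later checks are skipped once a code is set, and reset the code variable on every row so a value from one line cannot leak into the next (the sample data below is made up for illustration).

```shell
#!/bin/sh
# Sample rows: field 2 is empty in row 2, fields 2 AND 4 are empty in row 3.
printf 'a\t20070424\tc\t456.25\te\tf\n'  > data.txt
printf 'a\t\tc\t9.99\te\tf\n'           >> data.txt
printf 'a\t\tc\t\te\tf\n'               >> data.txt

awk -F'\t' '{
    code = ""                          # reset for every input line
    if      ($2 ~ /^ *$/) code = "102" # first failing field wins...
    else if ($4 ~ /^ *$/) code = "104" # ...later checks are skipped
    print code
}' data.txt
```

Row 3 has errors in both field 2 and field 4, but only "102" is printed for it.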

The error level is set only upon exiting a process, so you would have to exit after encountering the first error. All other errors would then go undetected unless you restart your script. Bottom line: it is possible, but is it really what you want?

If you still want it: replace the 'printf("...")' with "exit n", where "n" is the error level you want to set. By convention an error level of 0 means successful completion and every other value indicates some erroneous condition. You can query the shell variable "$?" to get the error level of the last command executed. As every command (re-)sets this value, you have to query it immediately after the command in question, and you will have trouble if the command is inside a pipeline. Error levels are, by the way, limited to unsigned byte values: 0-255.

Here is a short script you can experiment with to get comfortable with the mechanism:

#! /bin/ksh

awk 'BEGIN {exit 2}'    # change this value to get different error levels
print - "The errorlevel is: $?"
print - "Now you will see the error level of the last print statement: $?"
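If the 0-255 exit-status limit (or the one-error-per-run restriction) is a concern, a sketch of the command-substitution alternative: have awk print the codes and capture whatever it prints in a shell variable, which also lets you report several errors from one run. The file name in.txt and the sample row are assumptions for illustration.

```shell
#!/bin/sh
# One sample row with fields 2 and 4 empty (illustrative data).
printf 'x\t\ty\t\tz\n' > in.txt

# Capture everything awk prints into a shell variable.
codes=$(awk -F'\t' '{
    if ($2 ~ /^ *$/) printf("102 ")
    if ($4 ~ /^ *$/) printf("104 ")
}' in.txt)

echo "error codes: $codes"
```

The variable $codes is then available to the rest of the script, outside awk.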

I hope this helps.

bakunin