Unable to catch the redirection error when the disk is full

Hi Experts,

Problem summary:
I am facing the problem below with huge files when the disk fills up halfway through the execution.
If the disk is already full before the script starts, the commands fail and everything is fine.

Sample Code :

head_rec_data_file=`head -1 sample_file.txt`
cat sample_file.txt | grep -v "$head_rec_data_file" > sample_file.load
 

Description: We are creating a load file for sqlldr by removing the header record, using the logic above.

When the disk is already full, any attempt to run the script fails, as expected.
But if the disk gets full halfway through the execution, the command does not fail or return any non-zero code; it completes, leaving an incomplete file.

I have tried other approaches too: tail +3, perl, awk, sed. All of them behave the same way.

Please excuse my stupidity, but I am unable to zero in on the problem here.

Have you tried leaving out cat and the pipe? For example:

sed 1d sample_file.txt > sample_file.load

Hi Scru,
Yes, I tried that too, but even sed created an incomplete file.

I have other options, like taking the count of the resultant file and comparing it with the original file. Needless to say, this is a huge overhead on big files when we have hundreds of jobs running in parallel.
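(Aside: if re-counting lines means a second full pass over the data, a size comparison may be cheaper. For the `sed 1d` variant, the load file should be exactly the source size minus the header line, including its newline. A sketch, with demo data standing in for the real file, and valid only when just the first line is removed:)

```shell
#!/bin/sh
# Demo input (stand-in for the real sample_file.txt):
printf 'HEADER\nrow1\nrow2\n' > sample_file.txt

sed 1d sample_file.txt > sample_file.load

# Cheap completeness check: expected size of the load file is the
# source size minus the header line (including its newline).
hdr_len=`head -1 sample_file.txt | wc -c`
src_size=`wc -c < sample_file.txt`
out_size=`wc -c < sample_file.load`
if [ $out_size -ne `expr $src_size - $hdr_len` ]; then
    echo "incomplete load file" >&2
    exit 1
fi
echo "load file complete"
```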

What system are you using? The grep utility should exit with a non-zero exit status if any write to the output file fails (whether due to ENOSPC or any other error condition). The cat is not needed. You could use:

grep -Fv "$(head -1 sample_file.txt)" sample_file.txt > sample_file.load

instead of what you had, but it shouldn't affect the exit status of the grep command.

How are you checking the exit status? Is it the creation of sample_file.load that is failing, or is sqlldr failing to load sample_file.load into a database after the grep completes successfully?
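(For reference, capturing the status with $? immediately after the redirect might look like the sketch below. Demo data stands in for the real file; note that grep's exit code 1 only means no lines were selected, so only a status above 1 signals a hard failure:)

```shell
#!/bin/sh
# Demo input (stand-in for the real sample_file.txt):
printf 'HEADER\nrow1\nrow2\n' > sample_file.txt

grep -Fv "`head -1 sample_file.txt`" sample_file.txt > sample_file.load
status=$?
# grep exits 0 if lines were selected, 1 if none, >1 on error.
if [ $status -gt 1 ]; then
    echo "grep failed with status $status" >&2
    exit $status
fi
```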

Thanks Don,
I am using
SunOS 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise system.

The problem is: since the redirections aren't failing, we end up with an incomplete file to load.
The sqlldr does error out, since an **incomplete file** means the integrity of the data is already lost (broken record, etc.).

Let me try the method you have posted.

Thanks for the information, but you didn't answer the key question: How are you checking the exit status of the grep command?

I am using $?

Please show us the exact code you are using. I.e., everything from the grep up to the call to sqlldr.

Try running the command from within /usr/xpg4/bin/sh instead of /bin/sh. The classic Bourne shell on Solaris can be a bit funny with redirects sometimes...

--
Otherwise, is there a difference without a shell redirect? For example:

/usr/xpg4/bin/awk 'NR>1{print > "sample_file.load"}' sample_file.txt

or

sed -n '1d;wsample_file.load' sample_file.txt
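(One way to make the awk variant self-checking: have awk verify the close of its own output file. POSIX says close() returns non-zero on failure, so a failed final flush can be caught there; gawk implements this, though behavior on older nawk may vary. A sketch with demo input:)

```shell
#!/bin/sh
# Demo input (stand-in for the real sample_file.txt):
printf 'HEADER\nrow1\nrow2\n' > sample_file.txt

# Let awk own the output file, then check that the final flush/close
# succeeded; close() returns non-zero if it did not.
awk 'NR > 1 { print > "sample_file.load" }
     END { if (close("sample_file.load") != 0) exit 1 }' sample_file.txt
echo "awk status: $?"
```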

Not the problem here, but

awk 'NR==1 {header=$0} index($0,header)!=1{print}' sample_file.txt > sample_file.load

would be more efficient.
And the following ensures it matches the whole line:

awk 'NR==1 {header=$0 RS} index($0 RS,header)!=1{print}' sample_file.txt > sample_file.load
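(A quick illustration of the difference, with made-up demo data: if the header "ID" is also a prefix of a data line "ID123", the prefix match drops that data line too, while the whole-line match keeps it:)

```shell
#!/bin/sh
# Demo: the header "ID" is also a prefix of the data line "ID123".
printf 'ID\nID123\nrow2\n' > sample_file.txt

prefix_match=`awk 'NR==1 {header=$0} index($0,header)!=1{print}' sample_file.txt`
whole_line=`awk 'NR==1 {header=$0 RS} index($0 RS,header)!=1{print}' sample_file.txt`

echo "prefix match keeps:  $prefix_match"
echo "whole-line keeps:    $whole_line"
```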

Back to the problem:
Mr. Scru made a suggestion that should help find the root cause!
Solaris sed needs a space here:

sed -n '1d;w sample_file.load' sample_file.txt

Please share the output of the following:

df sample_file.load