Assistance please - pipe error: Too many open files in system

When I run a bash script on the customer's system, it throws the warnings below and the script exits:

Exec '/root/sample.sh' @ hostname-- OK
(warn) /root/sample.sh: pipe error: Too many open files in system
/root/sample.sh: n + : syntax error: operand expected (error token is " ")

Exec '/root/sample.sh' @ hostname -- OK
(warn) /root/sample.sh: fork: Cannot allocate memory
/root/sample.sh: n + : syntax error: operand expected (error token is " ")

In the script, variable n is used in the statement below, but n is never initialised to zero.

 n=$(( n + $(find $j -type f | wc -l) )) 

Do the variables need to be explicitly initialised to zero before being used?

Could that cause the warnings above ("Too many open files in system" / "Cannot allocate memory")? :confused:

Can anyone clarify this for me?
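For what it's worth, bash treats an unset variable as 0 inside $(( )), so the missing initialisation by itself cannot produce that message; the "operand expected" error appears when the command substitution expands to nothing, which is exactly what happens when the pipe for find | wc -l cannot be created. A minimal sketch to confirm this behaviour:

```shell
#!/bin/bash
# An unset variable evaluates as 0 in arithmetic, so n need not be initialised:
unset n
n=$(( n + 3 ))
echo "n=$n"    # prints n=3

# The "operand expected" error appears when the command substitution expands
# to nothing (here simulated with $(true), which prints no output); the
# failing expansion runs in a subshell so this script keeps going:
err=$( ( : $(( n + $(true) )) ) 2>&1 )
echo "$err"    # captured message contains "syntax error: operand expected"
```

This matches the two warnings in the log: once the system runs out of file handles or memory, the pipe inside $(find ... | wc -l) fails, the substitution comes back empty, and the arithmetic breaks.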

It sounds like you have exceeded the limits allocated by the system;
check with the ulimit command.
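For example (the /proc paths below are Linux-specific; on other UNIXes the kernel tunables differ):

```shell
#!/bin/bash
# Per-process limits for the current shell:
ulimit -a    # all limits
ulimit -n    # maximum number of open file descriptors

# System-wide file-table usage on Linux:
cat /proc/sys/fs/file-nr     # allocated, free, and maximum file handles
cat /proc/sys/fs/file-max    # system-wide maximum
```

Comparing the first field of file-nr against file-max shows whether the whole system, not just this process, is running out of file handles.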

It's not clear what exactly you are trying to do.

Anyway, check the ulimit settings.

The error isn't caused by not initialising the variable; it is related to pipe errors. You may be opening a lot of files on your system, so check that. In a bash script, you can use variables without initialising them.

The script scans all the files in a specific directory and checks whether they are unzippable. All the monitoring information, such as how many files were scanned/corrupted, is logged to a log file. I didn't get any problem when I tested it in my lab.

---------- Post updated at 02:59 PM ---------- Previous update was at 02:54 PM ----------

This is my code:

# NB: n and the other counters are used without initialisation here;
# $bfile and $increment are assumed to be set earlier in the script
for i in /store/archive/TP_* ; do
    # Checks for the errors in compressed bitfiles.
    for j in `find $i -name "L2*" -type d`; do
        n=$(( n + $(find $j -type f | wc -l) ))
        unzip -qT $j/\* 2>>/dev/null
        if [ $? -ne 0 ]; then
            for k in `find $j -type f`; do
                unzip -qT $k 2>>/dev/null
                if [ $? -ne 0 ]; then
                    echo `date -r $k` $k >> $bfile
                    corruptedBitfiles=`expr $corruptedBitfiles + 1`
                fi
                # Counting bitfiles in each subdirectory
                bitfileCounter=`expr $bitfileCounter + 1`
                # Keeping track of the progress of every 5k files in each sub-directory
                let batchCount=$bitfileCounter%$increment
                if [ $bitfileCounter -ge $increment -a $batchCount -eq 0 ]; then
                    :  # progress-logging statement omitted from the post; an
                       # empty "then" branch is a syntax error, so a no-op is needed
                fi
            done
            bitfileCounter=0
            corruptedTPDirCount=`expr $corruptedTPDirCount + $corruptedBitfiles`
            corruptedBitfiles=0
        fi
        # Summing up the count of all sub-directories under the main TP directory
        tpDirCount=`expr $tpDirCount + $n`
        n=0
    done
    tpDirCount=0
    corruptedTPDirCount=0
done    # this closing "done" for the outer loop was missing from the post
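One thing worth noting about the script above: every backquoted expr call forks a new process, and every $(find ... | wc -l) creates a pipe, once per file or directory in the loops. On a system that is already failing fork() and pipe(), replacing expr with bash's builtin arithmetic removes most of those forks. A sketch using the same counter names as above:

```shell
#!/bin/bash
corruptedBitfiles=0
bitfileCounter=0

# Builtin arithmetic: no fork and no pipe, unlike `expr ... + 1`
corruptedBitfiles=$(( corruptedBitfiles + 1 ))
bitfileCounter=$(( bitfileCounter + 1 ))

echo "$corruptedBitfiles $bitfileCounter"    # prints: 1 1
```

The behaviour is identical for these integer counters, but with thousands of files the difference in process creation is substantial.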
 
 

The maximum number of open files per userid has been exceeded. This is a kernel configuration parameter.
Check whether the same login is being used concurrently.
Check that files are being closed when each process is finished with them.
Re-link the kernel with a higher value if necessary.
Note that plain ulimit reports the maximum size of a single file (ulimit -f); the open-file limit is ulimit -n.