While-loop through a file containing a list of file names until the files are found?

Hi, I have a control file that contains a list of file names (about 300 files). I need to loop through all the file names in the control file and check for the existence of each file in another directory (on the same server). The loop should keep running for up to 2 hours, until all the files are found. Once a file is found, there is no need to look for that file again. If a file is not found, I need to keep checking for it for the full 2 hours; if it is still not found after that, the script should exit with a non-zero error code.

Control file (source files.txt):

PAS.txt
ESA.txt
HRA.txt

......

while read line
do
   test -e "/data/project/US/$line"    # checks once, but does not record or retry
done < /home/user/ctlfile.txt

What operating system and shell (including version) are you using? It would be easy with some shells to create an array of unfound files.

With other shells, you'll need to use a file to keep track of unfound files.
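For example, with a plain POSIX shell you could keep the unfound names in a temporary file; an untested sketch, using the paths from your post:

#!/bin/sh
# Start with every name from the control file marked as unfound.
cp /home/user/ctlfile.txt /tmp/unfound.$$

while [ -s /tmp/unfound.$$ ]          # loop while any names remain
do
  : > /tmp/still.$$                   # names still missing on this pass
  while read f
  do
    [ -e "/data/project/US/$f" ] || printf '%s\n' "$f" >> /tmp/still.$$
  done < /tmp/unfound.$$
  mv /tmp/still.$$ /tmp/unfound.$$    # carry only the missing names forward
  sleep 1
done
rm -f /tmp/unfound.$$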

Thank you.

I'm using Red Hat Enterprise Linux 6.5 and the Korn shell.

Hi, it can be done with arrays, as Don Cragun suggested. One way would be to use associative arrays (it could also be done with regular arrays). For example:

#!/bin/ksh
typeset -A files                 # associative array, keyed by file name

# Read the control file and record every required name as a key.
while read f
do
  files["$f"]=
done < /home/user/ctlfile.txt

# Keep polling until every name has been removed from the array.
until [ ${#files[@]} -eq 0 ]
do
  for f in "${!files[@]}"
  do
    if [ -e "/data/project/US/$f" ]; then
      unset files["$f"]          # found: no need to check this one again
    fi
  done
  sleep 1
done

printf "%s\n" "All files are present."

This is just a rudimentary example you could try; checks would need to be added, plus some extra code to break it off after two hours...
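For instance, the two-hour limit could be added with ksh's built-in SECONDS variable; a rough, untested sketch of how the polling loop above might be extended:

# ksh maintains SECONDS itself; reset it when the waiting starts.
SECONDS=0
until [ ${#files[@]} -eq 0 ]
do
  if (( SECONDS >= 7200 )); then      # 2 hours = 7200 seconds
    printf "%s\n" "Not all files arrived within 2 hours:" "${!files[@]}" >&2
    exit 1                            # non-zero exit code, as required
  fi
  for f in "${!files[@]}"
  do
    [ -e "/data/project/US/$f" ] && unset files["$f"]
  done
  sleep 1
done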

--
Note that the method you are proposing has a potential flaw: if it can happen that a file is placed in the directory and subsequently removed, it will still be wrongfully reported as present...
So it may be advisable to have an extra final check that all the files that need to be present actually are, as in the sketch below.
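A rough sketch of such a final check (reusing the control file path from the question):

# Final verification pass: re-check every name in the control file.
rc=0
while read f
do
  if [ ! -e "/data/project/US/$f" ]; then
    printf "Missing: %s\n" "$f" >&2
    rc=1
  fi
done < /home/user/ctlfile.txt
exit $rc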

Could I throw in a totally different suggestion? If you have a WANTED file with the required names and all the files should be in a single directory, could you follow this logic?:-

  • List all files in the directory
  • Use your WANTED file with grep to generate a list of all files that do exist.
  • Use grep again to check those still missing.

Something like this perhaps:-

WANTED=/tmp/my_file_list
cd $Directory                                # the directory where the files arrive
missing=1
until [ $missing -eq 0 ]
do
   ls -1 | grep -f $WANTED > /tmp/current    # names that have arrived so far
   missing=`grep -cvf /tmp/current $WANTED`  # wanted names not yet seen
   sleep 7200
done

printf "All files have arrived.\n"

It might mean fewer reads of the directory, although it does generate an intermediate file, so maybe, maybe not. There also might be confusion if your files are too similarly named, e.g. FileA arrives but FileA1 is still missing, and this might not be spotted. You might need to mark the beginning and end of each line in the WANTED file with ^ & $ respectively, as sketched below.
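One way to add those anchors might be (a sketch; the output file name is just an example):

sed 's/.*/^&$/' /tmp/my_file_list > /tmp/my_file_list.anchored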

Just an alternate option to theorise on perhaps.

Robin

Robin, while fgrep -x eliminates the problems with grep, the sleep must be 1 in order for the loop to finish early. And even then there is considerable overhead compared to the array solution (which is further optimized by removing elements once they are matched).
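For reference, a sketch of Robin's loop with those two fixes applied (fgrep -x matches each name as a literal whole line, so FileA no longer matches FileA1; variables as in Robin's example):

WANTED=/tmp/my_file_list
cd "$Directory"
missing=1
until [ $missing -eq 0 ]
do
   ls -1 | fgrep -xf "$WANTED" > /tmp/current   # exact whole-line matches only
   missing=`fgrep -cvxf /tmp/current "$WANTED"` # wanted names not yet present
   [ $missing -eq 0 ] || sleep 1                # poll every second, stop at once
done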
