Close open files before removing them

Hi
I have a script that removes log files, in descending order of date, when the filesystem reaches 70% usage. But sometimes it removes files that are still open by running processes, so they get unlinked from the directory and just seem to disappear.
How can I solve this problem? The simple scenario I thought of is to first close the file and then remove it. Is there any Unix utility to do this?

For example, I looked for a file-close option in the "rm" command, but I could not find one.

Any advice?

I think something is wrong with your script. It doesn't seem like an rm problem.

Here is my code:

#!/usr/bin/sh
dir=/cbmdata/00/gdd

function ()
{
  while [ 1 ]
  do
    df -k | grep /cbmdata/00/gdd | tr -d '%' | \
    read a b c d e other
    line=`find /cbmdata/00/gdd -name "LOGS*" | sort -nr | tail -1`

    if ![ -s $line ]
    then
      break
    elif (( "$e" >= "$ref" ))
    then
      rm -f $line
    else
      exit 0
    fi
  done
}

for file in $dir/*
do
  [ -s /cbmdata/00/gdd/* ] && function $file
done

P.S. If I restart the related process, the hidden file becomes visible again; that is why I described the problem the way I did.

First of all, I don't like an infinite loop with a remove inside a function. The same thing happened to me before: rm is a bit slow, so the next line of code can run before the remove has finished.
Are you sure this code is running at all? Because:

df -k | grep /cbmdata/00/gdd | tr -d '%' | \
read a b c d e other     # does nothing here - it must be:

e=`df -k |grep $dir |awk '{print $4}' |tr -d '%'`

What is ref? There is no input for it in the code. I think you mean 70%, or something else.

Where is the ascending order of dates? You will have to change the whole code.

For example:

dir=/cbmdata/00/gdd

function ()
{
  if [ `df -k | grep $dir | awk '{print $4}' | tr -d '%'` -gt $ref ]
  then
    rm -f $file
  fi
}

ref=$1
for file in $dir/*
do
  [ -s $dir/* ] && function $file
done

Do you mean that LOG01 has an older date than LOG02?
Then I will write that code tomorrow; I must leave now.
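
For now, something along these lines is what I have in mind (an untested sketch; it assumes the df capacity figure is in the fifth column of the last output line, that ls -t orders the LOGS* files by modification time, and it takes the usage threshold as an argument, defaulting to 70):

#!/usr/bin/ksh
# rough sketch: keep removing the oldest LOGS* file while usage is above the threshold
dir=/cbmdata/00/gdd
ref=${1:-70}                                   # usage threshold in percent (assumed 70)

usage=`df -k $dir | tail -1 | awk '{print $5}' | tr -d '%'`
while [ "$usage" -ge "$ref" ]
do
  # ls -t lists newest first, so tail -1 gives the oldest file by date
  oldest=`ls -t $dir/LOGS* 2>/dev/null | tail -1`
  [ -z "$oldest" ] && break                    # nothing left to remove
  rm -f "$oldest"
  usage=`df -k $dir | tail -1 | awk '{print $5}' | tr -d '%'`
done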

When you do find a file that matches your criteria, first run fuser -fu on the file to check whether any process is using it. If one is, you can either leave it alone or, if the contents of the file are not required, just truncate it (using >).
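
If it helps, here is a rough sketch of that idea (untested; it assumes a Solaris-style fuser that prints the process IDs on standard output and everything else on standard error):

# sketch: truncate LOGS files that are still open, remove the ones that are not
for line in `find /cbmdata/00/gdd -name "LOGS*"`
do
  if fuser -fu "$line" 2>/dev/null | grep '[0-9]' >/dev/null
  then
    # some process is using it: keep the file but drop its contents
    > "$line"
  else
    rm -f "$line"
  fi
done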

I sent the wrong code, sorry to trouble you.

The correct one (the one that actually works) is this:

#!/usr/bin/ksh
ref=90

while [ 1 ] 
do
  df -k | grep /cbmdata/00/gdd | tr -d '%' | \
  read a b c d e other
  if (( "$e" >= "$ref" ))
  then
    line=`find /cbmdata/00/gdd -name "LOGS*" |sort -nr |tail -1`
    # echo $line
    rm -f $line
  else
    exit 0
  fi
done

blowtorch

Yes, the file contents are not so important. Does truncate mean closing the file?
How can I do it? Is it an option of fuser?
Could you please give me an example?

Sorry for the delayed reply. Truncating does not mean closing the file; the file will still be open and the process will still be writing to it. But the size of the file will be reduced to zero bytes (it will grow again the next time the writing process does a write).

Could you please give me an example with truncate to make it clear? I am not sure how to use it.

Besides, is there any way to close a file that is already open?

Thanks.

I wrote this on a Sun Solaris 5.8 Netra server:

To check if a file is open:

lsof <file name>

To close the file if it is open:

fuser -k <file name>

-k means a kill, of course: it kills the processes that are using the file. I am still looking for a better solution...
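
For what it is worth, here is a rough way to combine the lsof check with the truncate suggested earlier, instead of killing the writer (untested; it relies on lsof exiting with status 0 when some process has the file open, and it assumes the contents can be thrown away):

#!/usr/bin/ksh
# sketch: check whether the file is open, then truncate or remove it
file=$1

if lsof "$file" > /dev/null 2>&1
then
  # still open: truncating frees the space but keeps the file linked,
  # so the writing process is not disturbed (unlike fuser -k)
  > "$file"
else
  # nobody has it open, safe to remove
  rm -f "$file"
fi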
:cool: