Hi all,
Does anyone know if there is any limitation on the rm command? By "limitation" I mean size.
For example, when my script runs rm on files totalling nearly 20-22 GB, the CPU load gets high.
Does anyone know the relation between CPU load and any limitation of the rm command?
Are you talking about rm in combination with the -r option?
I am using rm -f $FILE, where $FILE holds files totalling nearly 20-22 GB, and the number of files is 52603.
So the variable $FILE contains a wildcard (*, ? and such)? What is the content of that variable?
OK, here is the full script part:
for file in $FILELIST
do
    rm -f $file && echo "Successfully deleted $file"
    test $? -ne 0 && echo "Error while deleting files" && exit 6
done
===============================
and
FILELIST=`find $D -type f -print`
(where D=/a/b/c/*)
The count from
find $D -type f -print | wc -l
is 52603, and du -sh on D reports about 20 GB.
When my script reaches this deleting part, the CPU load gets high and I have to kill the script at the deleting step.
That is the complete scenario.
rm -f
returns exit code 0 even when a file does not exist (see man rm), so the error check after it will rarely fire.
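A quick check illustrates this (assuming no-such-file does not exist in the current directory):

rm -f ./no-such-file
echo $?    # prints 0: -f suppresses both the error message and the failure status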
for file in $FILELIST
will not work for files with spaces in their names; the script would try to remove the parts before and after each space...
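A small sketch of that word-splitting problem (the file name "a b.txt" is hypothetical):

FILELIST="a b.txt"
for file in $FILELIST
do
    echo "would remove: $file"    # prints "a" and "b.txt" as two separate names
done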
It would be better to use:
find /a/b/c -type f |
while read file
do
    rm -f "$file" && echo "Successfully deleted $file"
done
You could also let find execute the rm part.
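For example (a sketch; -exec ... + passes many file names to each rm invocation):

find /a/b/c -type f -exec rm -f {} +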
Thanks for the information!
In my case the file names contain no spaces.
What I want to ask is: does a for loop over this many files (52603 files, about 20 GB) affect the CPU load? If yes, how?
If these files are highly fragmented, the OS may need to do a whole lot more work to delete them. Otherwise, deleting large files shouldn't be an incredible burden.
There are always some memory/CPU-cycle limits with shell scripts; try Perl.
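Before switching languages, note that the loop above forks one rm process per file (52603 forks here), which by itself can drive up CPU load; batching the deletions avoids most of that overhead. A sketch (-print0 and xargs -0 are GNU/BSD extensions that keep unusual file names safe):

find /a/b/c -type f -print0 | xargs -0 rm -f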