I have a text file called "file_list.txt". Its contents are as follows:
111.tmp
112.tmp
113.tmp
114.tmp
These files are present in the "workdir" directory, which contains many other files, but only the files listed in file_list.txt should be deleted from it. We have the following code:
while IFS= read -r line    # -r and an empty IFS keep each name verbatim
do
    rm -f "$workdir/$line"
done < file_list.txt
But we are facing a performance issue with this while loop. In production, the text file will list more than 100,000 files. Can anyone suggest code to improve the performance?
I'm afraid deleting 100,000 files from a directory will take its time, especially on a production system where other things are going on as well. cd'ing to $workdir would not improve performance significantly; you might want to give
while read A; read B; read C; do echo rm -f "$A" "$B" "$C"; done < file
a try (remove the echo once the output looks right), as it saves two thirds of the process creations. Or even avoid the per-file rm processes entirely and batch the names, as sketched below.
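A minimal sketch of such a batched approach, assuming the file names contain no whitespace (xargs splits its input on it):

sed "s|^|$workdir/|" file_list.txt | xargs rm -f

Here sed prefixes every line with the directory, and xargs packs as many paths as the system allows into each rm invocation, so 100,000 files need only a handful of processes rather than 100,000.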
There is a small change in the requirement: the file names in "file_list.txt" will not include the .tmp extension, so I need to append ".tmp" to each name while deleting.
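For the plain loop that is a one-line change, rm -f "$workdir/${line}.tmp", and my attempt at the batched version would be (please correct me if this is wrong):

sed "s|^|$workdir/|; s|\$|.tmp|" file_list.txt | xargs rm -f

where the second sed expression appends .tmp to every name.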
RudiC,
It doesn't appear your first suggestion will process the last one or two records/file names from the input file if the total number of records is not evenly divisible by 3: the final read C fails at end-of-file, so the loop body never runs for the names already read into A and B.
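One way around that (a sketch, untested): test only the first read in the loop condition and let the trailing reads fail harmlessly:

while IFS= read -r A        # loop condition tests only the first read
do
    IFS= read -r B || B=    # these may fail at end-of-file; that's fine
    IFS= read -r C || C=
    rm -f "$workdir/$A" ${B:+"$workdir/$B"} ${C:+"$workdir/$C"}
done < file_list.txt

The ${B:+...} and ${C:+...} expansions vanish when the extra reads hit end-of-file, so a list whose length is not a multiple of 3 still has every name deleted.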
Thank you RudiC and mjf. The performance statistics are as follows.
Using the while loop:
**** Script started - 2015-02-07 02:18:52 ****
**** Script ended at 2015-02-07 02:19:45 ****
Duration: 53 seconds to delete 5400 files
Using the syntax given by mjf/RudiC:
**** Script started - 2015-02-07 02:05:24 ****
**** Script ended at 2015-02-07 02:05:33 ****
Duration: 9 seconds to delete 5400 files