All,
I have an application that is not working properly, and the company is 'in the process' of fixing it. In the meantime, I want to write a bash-script workaround. However, what I thought would be simple is seemingly not.
Need:
- Move files from one directory to another in chunks.
If I move all the files at once, system resources are heavily consumed and the application suffers. The source directory ($dir1) will contain any number of files (hundreds or thousands) at any given time. So, I'd like to do something like:
if [ "$(ls -A $dir1) ]; then
find $dir1 -maxdepth 1 -type f > output.txt
<loop through output.txt moving x amount of files at a time until directory empty>
mv `head -100 output.txt` $dir2
else
echo "no files to move.."
fi
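In other words, something roughly like this untested sketch (assuming bash 4+ for mapfile, and that file names contain no newlines; the batch size and sleep are placeholders):

batch=100                                # placeholder; tune for the system
while :; do
    # grab the next batch of regular files from $dir1
    mapfile -t files < <(find "$dir1" -maxdepth 1 -type f | head -n "$batch")
    [ "${#files[@]}" -eq 0 ] && break    # nothing left; stop
    mv -- "${files[@]}" "$dir2"
    sleep 1                              # optional pause to ease the load
done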
I know I can accomplish what I need with a brute-force creation of many small list files to read from, but there must be a more sophisticated way to do it.
Is there a way to drop the list of files into an array and do something like:
mv $files[1-100] $dir2
mv $files[101-200] $dir2
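For illustration, here is an untested sketch of what I'm picturing, using bash's "${array[@]:offset:count}" slicing (again assuming bash 4+ for mapfile and no newlines in file names):

# load every file name into an array in one shot
mapfile -t files < <(find "$dir1" -maxdepth 1 -type f)

batch=100
for (( i = 0; i < ${#files[@]}; i += batch )); do
    # "${files[@]:i:batch}" expands to at most batch elements starting at index i
    mv -- "${files[@]:i:batch}" "$dir2"
done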
I need to find a usable number of files to move at a time (could be 100, 5,000, or 10,000) without knowing how many files will be in $dir1 at any given time.
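I also wondered whether xargs could do the chunking for me, since it caps the number of arguments per command no matter how many files exist (this sketch relies on GNU find/xargs/mv for -print0, -0, and -t):

# runs one mv per (at most) 100 file names, however many files there are
find "$dir1" -maxdepth 1 -type f -print0 | xargs -0 -r -n 100 mv -t "$dir2"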
Any guidance is greatly appreciated.
TIA,
Herb