Unless the disk is otherwise idle, the results of completely filling the drive can be very unpredictable. Since this is a production environment, I will assume that other processes read from and write to the drive, and what any program may do when it suddenly cannot get space is simply unknown.
That said, there are some simple processes to fill a disk. However, I would like to know more about why first.
Nothing sinister... Like I said, this is for some automated resilience testing. I need to prove that a service can cope when it has insufficient disk space available.
The service is supposed to load data files into a data store, and it should cope by telling the source server to add the files to the retry queue when there isn't enough space to accept a copy of the file.
Perhaps something in a simple loop will do the job:-
#!/bin/ksh
# Fill the current filesystem with lots of tiny files until a write fails.
RC=0
typeset -Z20 counter=0
if [ ! -d hog ]
then
    mkdir hog
fi
until [ $RC -ne 0 ]
do
    (( counter = counter + 1 ))
    # The counter must be part of the filename; otherwise the same file
    # is overwritten on every pass and the loop never terminates.
    echo "Hello" > hog/hogfile_$counter
    RC=$?
done
This will write vast numbers of small files until either the free space or the i-nodes are exhausted and no new files can be created. You can then simply delete the entire hog directory to clean up.
If you want to ensure that you fill the space itself (rather than just the i-nodes), you could create one large hogfile and then copy it repeatedly, to hogtemp1, hogtemp2 and so on, until a copy fails, as in the sketch below:-
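A minimal sketch of that approach, assuming ksh and the same working directory as the script above; the 10 MB size of hogfile is an arbitrary choice:

#!/bin/ksh
# Create one reasonably large file, then copy it until the filesystem fills.
dd if=/dev/zero of=hogfile bs=1024k count=10 2>/dev/null
RC=0
i=0
until [ $RC -ne 0 ]
do
    (( i = i + 1 ))
    cp hogfile hogtemp$i
    RC=$?
done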
You can then delete hogfile and the hogtemp* files to free the space.
Both approaches will take quite a while, as the I/O involved is expensive.
To stop at a limit rather than filling the disk completely, perhaps you could change the loop so that instead of checking the return code it runs df, reads the current free space, and tests whether it has passed a predetermined limit.
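A rough sketch of that idea, assuming ksh, a target filesystem mounted at /data, and that df -k reports free kilobytes in the third column on this system (field positions differ between platforms, so check your own df output and adjust the awk field):

#!/bin/ksh
LIMIT=2048          # stop once less than 2 MB (value in KB) remains free
i=0
mkdir -p /data/hog
FREE=$(df -k /data | tail -1 | awk '{print $3}')
while [ "$FREE" -gt "$LIMIT" ]
do
    (( i = i + 1 ))
    # Add 1 MB of padding at a time, then re-check the free space.
    dd if=/dev/zero of=/data/hog/pad_$i bs=1024k count=1 2>/dev/null
    FREE=$(df -k /data | tail -1 | awk '{print $3}')
done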
Thanks but I am hoping to perhaps calculate the size in bytes and then create a file in a directory about that size... Perhaps using something like dd?
I wouldn't want the automation suite hanging up for a long time while it generated the files.
This might work, though, if I create a file that is a few megabytes in size and then copy it until I run out of space.
The idea though, is to leave so little space that the files being loaded would fail to copy across from the source server. So some sort of calculation of size will be required.
Calculating it in K might be accurate enough though. If I just leave a couple of K free then the files will certainly be too large.
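A sketch of that calculation, assuming ksh, a target filesystem mounted at /data, and that df -k reports free kilobytes in the third column on this system (check your df output first; the filename hogfile is just for illustration):

#!/bin/ksh
# Work out how much is free, then create one file that leaves about 2 KB spare.
FREE_KB=$(df -k /data | tail -1 | awk '{print $3}')
FILL_KB=$(( FREE_KB - 2 ))
if [ "$FILL_KB" -gt 0 ]
then
    # A larger block size (e.g. bs=1024k) would be faster for big files,
    # but bs=1024 keeps the count in KB and matches the figure from df.
    dd if=/dev/zero of=/data/hogfile bs=1024 count=$FILL_KB 2>/dev/null
fi

Deleting /data/hogfile afterwards frees the space again.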
Brad,
With the level of detail you're providing, we're just not comfortable suggesting ways to turn your system into an expensive door stop.
Running as root and filling all of the available space on your root filesystem, or on the filesystem that holds the system's logs or the users' home directories, could easily leave you in a position where you can't even remove the files you created.