How to consume all available space on partition?

Hi

I'm doing some resilience testing and need to write a script to consume all of the available disk space on a partition and then to free it up again.

This would need to be -

Safe
Dynamic, in that it calculates the free space prior to consuming it.
I might want to go on to consume a configurable percentage of it instead...

This would be on a CentOS Linux box.

Anyone have any experience in this area?

Brad

Unless the disk is otherwise idle, the results of completely filling it can be very unpredictable. Since this is a production environment, I will assume that other processes read and write to the drive, and what any program may do when confronted with an inability to get space is simply unknown.

That said, there are some simple processes to fill a disk. However, I would like to know more about why first.

:-)

Nothing sinister... Like I said, this is for some automated resilience testing. I need to prove that a service can cope when it has insufficient disk space available.

The service is supposed to load data files into a data store and should cope by telling the source server to add the files to the retry queue because it doesn't have the space to accept the copy of the file.

For how long? So it will retry again and again? Don't you think the system will also complain and start generating loads of lines in the system log?

Well that is exactly what I am trying to achieve. Stress the system and look for problems.

But to your specific question, no the logging should not get out of hand. It should just log that it has added it to the retry queue.

Perhaps something in a simple loop will do the job:-

#!/bin/ksh

RC=0
typeset -Z20 counter=0

if [ ! -d hog ]
then
   mkdir hog
fi

until [ $RC -ne 0 ]
do
   ((counter=$counter+1))
   echo "Hello" > hog/hogfile_$counter       # note the $ - each iteration creates a new file
   RC=$?
done

This will write vast numbers of files until either the free space or the i-nodes are exhausted and no new files can be created. You can simply delete the entire hog directory to clean up.

If you want to ensure that you fill space, you could:-

#!/bin/ksh

RC=0
typeset -Z20 counter=0

until [ $RC -ne 0 ]
do
   ((counter=$counter+1))
   echo "$counter" >> hogfile
   # chain with && so RC reflects the first command to fail
   cp hogfile hogtempA &&
   cp hogfile hogtempB &&
   cp hogfile hogtempC &&
   cp hogfile hogtempD &&
   cp hogfile hogtempE &&
   cat hogtemp? > hogfile
   RC=$?
done

You can then delete hogfile and hogtemp? to free the space.

Both will take quite a while, as the I/O is expensive.

To set a limit, perhaps you could change the loop to do a df each time round and compare the current usage against a predetermined limit, rather than checking the return code.
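That df check might look something like this sketch. FILL_DIR, LIMIT and MAX_MB are illustrative names of my own; MAX_MB is a safety cap so you can try the script without actually filling the partition - raise or remove it for the real test.

```shell
#!/bin/ksh
# Percentage-based fill: stop once the filesystem reaches LIMIT% used.
# FILL_DIR, LIMIT and MAX_MB are assumptions - tune them for your box.
FILL_DIR=${FILL_DIR:-./hog}
LIMIT=${LIMIT:-90}          # stop once the filesystem is this % used
MAX_MB=${MAX_MB:-5}         # safety cap for a trial run; remove for real use

used_pct() {
    # df -P (POSIX format) keeps each filesystem on one line; field 5 is Use%
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

mkdir -p "$FILL_DIR"
i=0
while [ "$(used_pct "$FILL_DIR")" -lt "$LIMIT" ] && [ "$i" -lt "$MAX_MB" ]
do
    i=$((i+1))
    # write in 1 MB chunks so the loop reacts quickly near the limit
    dd if=/dev/zero of="$FILL_DIR/hogfile_$i" bs=1024k count=1 2>/dev/null || break
done
echo "stopped at $(used_pct "$FILL_DIR")% used"
```

Deleting the hog directory frees everything again, exactly as with the file-count version.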

I hope that this helps,
Robin

Hi Robin

Thanks, but I was hoping to calculate the free space in bytes and then create a file of about that size in a directory... perhaps using something like dd?

I wouldn't want the automation suite hanging up for a long time while it generated the files.

This might work, though, if I create a file that is a few MB in size and then copy it until I run out of space.

The idea though, is to leave so little space that the files being loaded would fail to copy across from the source server. So some sort of calculation of size will be required.

Calculating it in KB might be accurate enough, though. If I just leave a couple of KB free then the files will certainly be too large to fit.
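Something like this sketch is what I have in mind - read the free space from df -Pk (field 4, in KB) and write one file of that size minus a small margin. TARGET and MARGIN_KB are placeholder names, and the 1 MB cap is only there so a trial run is harmless; remove it to fill the partition for real.

```shell
#!/bin/ksh
# Single-big-file approach: df -Pk field 4 is the free space in KB.
# TARGET and MARGIN_KB are assumptions - tune them for your filesystem.
TARGET=${TARGET:-.}
MARGIN_KB=${MARGIN_KB:-2}        # leave a couple of KB free, as discussed

free_kb=$(df -Pk "$TARGET" | awk 'NR==2 { print $4 }')
fill_kb=$((free_kb - MARGIN_KB))

# Demonstration cap so a trial run writes at most 1 MB - remove this
# line to fill the partition for real.
[ "$fill_kb" -gt 1024 ] && fill_kb=1024

echo "free: ${free_kb} KB, writing: ${fill_kb} KB"
# bs=1k count=N writes exactly N KB in a single pass - no loop needed
dd if=/dev/zero of="$TARGET/hogfile" bs=1k count="$fill_kb" 2>/dev/null
```

Removing hogfile frees the space again. Where a recent util-linux is installed, `fallocate -l <bytes> hogfile` reserves the space almost instantly without the write I/O, which would also stop the automation suite from hanging while the file is generated.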

Cheers

Brad

Brad,
With the level of detail you're providing, we're just not comfortable suggesting ways to turn your system into an expensive door stop.

Running as root and filling up all of the available space on your root filesystem, or on the filesystem that holds your system's logs or the users' home directories, could easily leave you in a position where you can't even remove the files you created.