Loop questions

hi all,
I'm trying to create a script to put a load test on hard drives.
First I create a 50GB file:

dd if=/dev/zero of=50gb.img bs=5gb count=10

Then I'd like to run 100 loops continuously until the drive fills up.
Once it's done, delete the file.

This is where I'm stuck with the for loop.
Can anyone please point me to an easy way to do this?
Sorry, I'm not good at scripting.

thanks

I don't think you want to use a 5 gigabyte block size! That may require dd to allocate 5 gigabytes of memory :eek:
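For the same amount of data you can keep the block size small and raise the count instead, so dd's buffer stays tiny. A sketch, assuming GNU dd's M size suffix:

dd if=/dev/zero of=50gb.img bs=1M count=51200   # 51200 x 1MB blocks = 50GB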

I'd also worry about the system not being able to delete the file once the disk is full. Filling up a disk, especially /, can have bad side-effects.

Third, disk caching will mess up your plan by storing most of your writes in memory until it decides it really needs to get rid of them. If you want to act like a big application that may be realistic, but if you want to force the disk to be busy all the time it may not be ideal.

#!/bin/sh

# We use a trick to guarantee the file is deleted when this program
# ends.  We open the file, then delete it while still open.
# It will take up space but not show in ls, and be automatically
# removed when closed -- i.e. when this script quits, or
# even if killed by ctrl-C.

exec 5>/path/to/hugefile # Open the file
rm /path/to/hugefile # Delete the file -- but keep it open

# Loop until dd returns error, i.e. can't write
while dd if=/dev/zero bs=1M count=1024 >&5 # Write another gig to hugefile
do
        true
done
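If the aim is to keep the physical disk busy rather than just fill the page cache (the caching issue above), you can also ask dd itself to flush or bypass the cache. This is only a sketch, and it assumes GNU dd, which supports conv=fsync and oflag=direct (shown here as a standalone write, without the open-then-delete trick):

dd if=/dev/zero of=/path/to/hugefile bs=1M count=1024 conv=fsync    # flush to disk before dd exits
dd if=/dev/zero of=/path/to/hugefile bs=1M count=1024 oflag=direct  # bypass the page cache entirely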

This should work in most shells (not csh):

i=1
while [ $i -lt 100 ]
do
    dd if=/dev/zero of=50gb_$i.img bs=5gb count=10
    let i=i+1
done

Edit: Nice solution Corona688, and I agree filling the root filesystem is not advised.
Perhaps sync could be placed in the loop instead of true to get the disk cache flushed.
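That would look something like this (the same loop, with sync as the body):

while dd if=/dev/zero bs=1M count=1024 >&5
do
        sync    # push cached writes out to the disk on each pass
done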


thank you both.
But regarding Corona688's response:
Previously I used to copy the contents from my server, e.g. I had a data folder copied over to the test machine, making roughly a 50GB file named vol.
Then I looped over that vol file as vol1, vol2, vol3 ... vol100, growing the size to about 5TB, just to fill the hard drive.
Then it erased everything after finishing the 100 loops.
At the same time it showed the date and the size of the folder, just for comparison.
In that case, can I use dd to create the file instead of copying it over the network, which causes network lag if I run a few machines?

/dev/random might provide better testing data than /dev/zero

Thanks Chubler,
Doesn't your code keep creating files with dd?
Actually, I'd like to create just one 5GB file at first.
Then loop, creating files named file1, file2 ... file100, each 5GB.
After that they can either be deleted or left as is.
At the same time I'd like to have the date and the size of the files in the output while looping.
Sorry if I caused any confusion.
thanks

Try this,

i=1
DT=`date +%Y%m%d`
while [ $i -lt 100 ]
do
    dd if=/dev/random of=5gb_${DT}_$i.img bs=128M count=40
    let i=i+1
done

It's creating only about 700 bytes, sometimes 300 bytes, for each *.img file.
The total copied in that folder is only 37MB.
It has already created up to 4 *.img files.

Strange; perhaps EOF chars in the random input are truncating the output file.
Try conv=binary on the dd:

i=1
DT=`date +%Y%m%d`
while [ $i -lt 100 ]
do
    dd if=/dev/random conv=binary of=5gb_${DT}_$i.img bs=128M count=40
    let i=i+1
done

dd: invalid conversion 'binary'

This is the error message I get.

Does it do the same sort of thing using /dev/zero?

Yes, it does the same thing with /dev/zero.
thanks


What if I just copy files from the network into a folder named /test, as a file called file, instead of using dd?

Then create a for loop to run 100 loops, creating files named file1, file2 ... file100,

displaying output with the total files copied, the date and the size.

thanks

That should work, but I still find it odd that dd isn't behaving for you the way it does for me here.
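For what it's worth, one known cause of short dd output (an assumption on my part, and it would not explain the /dev/zero case) is that /dev/random can return short reads when the entropy pool runs low, and dd's count= counts reads rather than full blocks. GNU dd's iflag=fullblock works around that, and /dev/urandom never blocks:

dd if=/dev/random iflag=fullblock of=5gb_${DT}_$i.img bs=128M count=40   # keep reading until each block is full
dd if=/dev/urandom of=5gb_${DT}_$i.img bs=128M count=40                  # non-blocking random source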

chubler_xl wrote:
That should work, but I still find it odd that dd isn't behaving for you the way it does for me here.

Is there any other alternative you could think of?

Thanks


#!/bin/bash
#create 2GB test file first.
dd if=/dev/zero of=/data1/test bs=128M count=200

#start to create test1 test2...test100
i=1
while [ $i -lt 100 ]
do
#copy
   cp -r /data1/test_$date_$i
   let i=i+1
done

Somewhere in the copy section I'm getting an error.
Actually, I'd like to create files test1, test2, test3 ... test100, each of 2GB, copying continuously.
Can anyone please give me a suggestion for the
#copy line?

thanks

You have a few issues with your posted script.

  • $date is not set - put date=`date +%Y%m%d` near the top (and note that $date_$i is parsed as ${date_}$i, so the braces below matter)
  • cp requires at least two path/file args (source and dest), which is why your cp command is throwing errors; try cp /data1/test /data1/test_${date}_$i
#!/bin/bash

dd if=/dev/urandom of=/data/test bs=128M count=40

for ((i=1;i<=200;i++));
do

   cp /data/test /data/$i
   echo "loop $i"
   date 
done

rm -rf /data/$i /data/test 

thanks, this is what I did and it seems to be running fine so far.
But my question here is that it creates only a 5GB file; what do you think if I'd like to make it a 20GB or 30GB file instead?
If I do, will dd be a memory hog creating such a big file?

Also, I'd like to create files called file1, file2 ... file200 instead of 1, 2, 3 ... 200.

Would be glad to hear.

thanks

If you increase the count instead of bs, memory usage will be fine (note 40 x 128MB = 5GB, 160 x 128MB = 20GB).

For your filename request, change to cp /data/test /data/file$i and adjust the rm command at the end (if you want to delete all the created files, use rm /data/file* /data/test).
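For example (just a sketch, keeping the same 128M block size):

dd if=/dev/urandom of=/data/test bs=128M count=160   # 160 x 128MB = 20GB
dd if=/dev/urandom of=/data/test bs=128M count=240   # 240 x 128MB = 30GB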

Got it, thanks very much for your help.
One last thing...
Let's say I've got 50TB of data storage; the above will do 4TB only, and sometimes I have more than 50TB of storage.
Is there something I can add, a line or two, like:

if it fills up the drive, then rm -rf /data1/file*
if space is available, say after 5TB, then continue until it fills up

Which loop would be best to use?

Thanks a lot.

You could keep running the cp command until it fails (something like Corona688 suggested back in post #2):

#!/bin/bash

dd if=/dev/urandom of=/data/test bs=128M count=40

i=1
while cp /data/test /data/file_$i
do
   echo "`date`: Loop $i"
   let i=i+1
done

printf "Removing files now..."
rm -f /data/file_* /data/test
printf "\n"
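If you'd rather stop while some space is still free instead of waiting for cp to fail, a rough sketch of the same loop with an explicit free-space check (assuming a POSIX df; the 5TB threshold is the figure you mentioned, expressed in 1K blocks) could look like this:

#!/bin/bash

dd if=/dev/urandom of=/data/test bs=128M count=40

# stop once less than ~5TB remains on /data (df -P reports 1K blocks)
threshold=$((5 * 1024 * 1024 * 1024))

i=1
while [ "$(df -P /data | awk 'NR==2 {print $4}')" -gt "$threshold" ]
do
   cp /data/test /data/file_$i
   echo "`date`: Loop $i"
   let i=i+1
done

rm -f /data/file_* /data/test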