I don't think you want to use a 5-gigabyte block size! That may require dd to allocate a 5-gigabyte buffer in memory.
I'd also worry about the system not being able to delete the file once the disk is full. Filling up a disk, especially /, can have bad side-effects.
Third, disk caching will upset your plan by keeping most of your writes in memory until the kernel decides it really needs to flush them. If you just want to behave like a big application this may be realistic, but if you want to force the disk to be in use the whole time it may not be ideal.
#!/bin/sh
# We use a trick to guarantee the file is deleted when this program
# ends. We open the file, then delete it while still open.
# It will take up space but not show in ls, and be automatically
# removed when closed -- i.e. when this script quits, or
# even if killed by ctrl-C.
exec 5>/path/to/hugefile # Open the file
rm /path/to/hugefile # Delete the file -- but keep it open
# Loop until dd returns error, i.e. can't write
while dd if=/dev/zero bs=1M count=1024 >&5 # Write another gig to hugefile
do
    true
done
i=1
while [ $i -lt 100 ]
do
    dd if=/dev/zero of=50gb_$i.img bs=5GB count=10  # 10 x 5GB = 50GB per file
    let i=i+1
done
Edit: Nice solution Corona688, and I agree filling the root filesystem is not advised.
Perhaps sync could be placed in the loop instead of true to get the disk cache flushed.
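A minimal sketch of that suggestion, with sync in place of true. For a safe demo this version also stops after 4 passes of 1MB each; in real use drop the pass counter and let dd run until the disk is full and it returns an error.

```shell
#!/bin/sh
# Unlink-while-open trick as above, with sync flushing the cache
# each pass.  The pass counter is only here to keep the demo small.
target=/tmp/hugefile.$$             # stand-in for /path/to/hugefile
exec 5>"$target"                    # open the file on fd 5
rm "$target"                        # unlink it -- space frees on close
passes=0
while [ $passes -lt 4 ] && dd if=/dev/zero bs=1M count=1 >&5 2>/dev/null
do
    sync                            # flush the disk cache each pass
    passes=$((passes+1))
done
exec 5>&-                           # close fd 5; space is reclaimed
echo "wrote $passes passes"
```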
Thank you both.
Regarding Corona688's response: previously I copied everything over from my server. I had a data folder copied to the test machine and built it into a 50GB file named vol, then looped that vol file as vol1, vol2, vol3 ... vol100, making up to 5TB just to fill the hard drive. After it finished the 100th loop it erased everything. At the same time it printed the date and the size of the folder for comparison.
In that case, can I use dd to create the file locally instead of copying it over the network, which causes network lag if I run a few machines?
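Answering my own framing with a sketch: dd can build the big "vol" file locally from /dev/zero, so no network traffic is involved. The size here is an assumption scaled down to 8MB for illustration; for a real 50GB vol use something like bs=1M count=51200.

```shell
#!/bin/bash
# Generate the seed file locally with dd instead of copying it
# over the network.  Scaled down to 8MB for a safe demo.
outdir=/tmp/volumes.$$              # hypothetical target directory
mkdir -p "$outdir"
dd if=/dev/zero of="$outdir/vol" bs=1M count=8 2>/dev/null
size=$(wc -c < "$outdir/vol")
echo "vol is $size bytes"
```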
Thanks Chubler,
Doesn't your code keep creating files with dd?
Actually, I'd like to create just one 5GB file first, then loop over it, creating file1, file2 ... file100, each 5GB.
After that the files can either be deleted or left as they are.
At the same time I'd like the date and the size of each file in the output while looping.
Sorry if I caused any confusion.
Thanks
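A hedged sketch of the workflow just described: one seed file made with dd, copied out as file1, file2, and so on, printing the date and size on every pass. Sizes and counts are scaled down here (1MB seed, 5 copies) so it is safe to run; for the real 5GB x 100 case use bs=1M count=5120 and 100 iterations.

```shell
#!/bin/bash
# Build one seed file, then copy it out N times, printing the
# date and each copy's size as we go.
dir=/tmp/filltest.$$
mkdir -p "$dir"
dd if=/dev/zero of="$dir/seed" bs=1M count=1 2>/dev/null
for i in $(seq 1 5)
do
    cp "$dir/seed" "$dir/file$i"
    echo "$(date) $(du -k "$dir/file$i" | cut -f1)KB file$i"
done
# the copies can now be deleted or left as-is, per the post above
```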
That should work, but I still find it odd that dd isn't working for me.
Any other alternative you could think of?
Thanks
---------- Post updated at 03:08 PM ---------- Previous update was at 01:55 AM ----------
#!/bin/bash
#create the 2GB test file first (128M x 16 = 2GB)
dd if=/dev/zero of=/data1/test bs=128M count=16
#start to create test1 test2...test100
i=1
while [ $i -lt 100 ]
do
#copy
cp -r /data1/test_$date_$i
let i=i+1
done
Somewhere in the copy section I'm getting an error.
Actually, I'd like to create test1, test2, test3 ... test100, each 2GB, by copying continuously.
Can anyone please suggest a fix for the #copy line?
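A hedged suggestion for the #copy line: cp was given only one argument (no destination), and $date_$i expands an unset variable named date_ rather than $date followed by _. Using $(date +%F) and quoting gives names like test_2025-01-01_3. Shown against /tmp here so it can be run safely; swap in /data1 for real use.

```shell
#!/bin/bash
# Corrected copy line: explicit source, explicit destination,
# and the date substituted via $(date +%F) so it can't merge
# with the surrounding underscores.
touch /tmp/test.$$                  # stand-in for the 2GB /data1/test
i=3
cp /tmp/test.$$ "/tmp/test_$(date +%F)_$i"
ls -l "/tmp/test_$(date +%F)_$i"
```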
#!/bin/bash
# iflag=fullblock stops short reads from /dev/urandom producing
# an undersized file (128M x 40 = 5GB)
dd if=/dev/urandom of=/data/test bs=128M count=40 iflag=fullblock
for ((i=1;i<=200;i++));
do
    cp /data/test /data/$i
    echo "loop $i"
    date
done
# $i is 201 after the loop, so list the copies explicitly
rm -f /data/{1..200} /data/test
Thanks, this is what I did and it seems to be running fine so far.
But my question here: it creates only a 5GB file. What do you think if I'd like to make it a 20GB or 30GB file?
If I do that with dd, will it be a memory hog to create such a big file?
Also I'd like to create files called file1, file2 ... file200 instead of 1, 2, 3 ... 200.
If you increase the count instead of bs, memory usage will be fine (note 40 x 128MB = 5GB, 160 x 128MB = 20GB).
For your filename request, change to cp /data/test /data/file$i and adjust the rm command at the end (if you want to delete all created files, use rm /data/file* /data/test).
Got it, thanks very much for your help.
One last thing...
Let's say I have 50TB of data storage; the above will do 4TB only, and sometimes I have more than 50TB of storage.
Is there a line or two I can add, like:
if it fills up the drive, then rm -rf /data1/file*;
if space is available, say after 5TB, then continue until it fills up?
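A hedged sketch of one way to do that: check the filesystem's used percentage with df -P before each copy, and clear the generated files when it crosses a threshold so the loop can keep going. The 95% threshold, the paths, and the tiny sizes are all assumptions for a safe demo; raise the loop count and sizes for a real 50TB run.

```shell
#!/bin/bash
# Keep copying until the filesystem is nearly full, then clear
# the generated files and continue.
dir=/tmp/filltest.$$                # stand-in for /data1
mkdir -p "$dir"
dd if=/dev/zero of="$dir/test" bs=1M count=1 2>/dev/null
for i in $(seq 1 3)                 # raise for a real run
do
    used=$(df -P "$dir" | awk 'NR==2 {sub(/%/,""); print $5}')
    if [ "$used" -ge 95 ]
    then
        echo "$(date) ${used}% used - clearing generated files"
        rm -f "$dir"/file*          # reclaim space, then continue
    fi
    cp "$dir/test" "$dir/file$i"
done
```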