Batch download

.............

Try this (but make sure the domains are actually registered and available):

# mylink=$(curl -s "http://www.domain.com/play.php?key=uu67j567jj6y"|sed '/<div class/s/.*>\(.*\)<.*/\1/')
# wget -O /tmp/mytest.flv "$mylink"
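
If the sed does not match anything, mylink stays empty and the wget fails, so it is worth echoing the variable first to see what got extracted:

# echo "$mylink"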

.............

Did you test this on a single URL?
What does the page source look like?
Is "<div class" unique in the page source?
I have no idea what those URLs contain, because your URLs seem to be invalid.
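
A quick way to check, assuming the page is reachable (reusing the placeholder URL from above): count the matching lines. If the count is anything other than 1, the sed above may grab the wrong line.

# curl -s "http://www.domain.com/play.php?key=uu67j567jj6y" | grep -c '<div class'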


If you haven't tested it yet, try this ;)

#!/bin/bash
DESTFOLD=flvvideos
mkdir "$DESTFOLD"    # change this to your destination folder
cd "$DESTFOLD" || exit 1
while read -r URL ; do
    getlnk=$(curl -s "$URL" | grep '\.flv' | sed '/<div class/s/.*>\(.*\)<.*/\1/')
    wget "$getlnk"
done < URLfile
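
Here URLfile is expected to contain one URL per line, something like this (the keys are made up):

http://www.domain.com/play.php?key=uu67j567jj6y
http://www.domain.com/play.php?key=ab12cd34ef56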

...........

Or just try it like this:

#!/bin/bash
DESTFOLD=flvvideos
mkdir "$DESTFOLD"    # change this to your destination folder
cd "$DESTFOLD" || exit 1
while read -r URL ; do
    getlnk=$(curl -s "$URL" | sed -n '/\.flv/s/<[^>]*>//gp')
    wget "$getlnk"
done < url.txt

.........

What is your system?
Are you using bash or sh?
Can you check for possible typos?

And you can give the full path to your URL file:

...........
#!/bin/bash
while read -r URL ; do
    getlnk=$(curl -s "$URL" | sed -n '/\.flv/s/<[^>]*>//gp')
    wget "$getlnk"
done < /full/path/url.txt

regards
ygemici

...........

#!/bin/bash
while read -r i
do
    wget "${i}"
done < <(grep -oP '(?<=>)\w.*(?=</)' urlfile)
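
Note that this last variant assumes urlfile already holds the saved HTML lines rather than bare URLs: the pattern (?<=>)\w.*(?=</) just extracts whatever text sits between a > and a </, and wget then fetches that text as a URL.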