2 questions regarding the WGET command

I'm using the "wget" command to get the date from Yahoo.com. So this is what I use on Solaris:

/usr/sfw/bin/wget --timeout=3 -S http://www.yahoo.com/

This works well when my computer is connected to the Net. But when it's not, the command just hangs. I thought setting --timeout=3 would make it stop after 3 seconds, but it doesn't. Does anyone know the exact command to make it stop after a few seconds if it can't make a connection to the Net?

Also, when this command runs successfully, it creates a file called "index.html". What do I need to do so that it doesn't automatically create "index.html" in my local directory? Thanks.

I guess I have to ask the question:
"Why are you getting the date from Yahoo this way? Why do you need to get the date from Yahoo, period?"

Your answer to that may lead us to get you a better(tm) way to do what you are intending.

As far as 'wget' goes, it has a --timeout=XX option, which sets the network timeouts (DNS lookup, connect, and read). For example:

wget --timeout=30 mysite.com

would give up on an unresponsive connection after 30 seconds. Note, though, that wget retries a failed download by default (20 tries), so the total wait can be much longer than the timeout; add --tries=1 if you want it to stop after a single attempt.

The --timeout switch should usually work, especially combined with --tries=1 so wget doesn't retry. If you're really sure that it doesn't, you can always send the command to the background and kill it after some time, eg.

wget --timeout=3 --tries=1 -S http://www.yahoo.com/ &
pid=$!
sleep 5
kill -TERM "$pid" 2>/dev/null
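On systems with GNU coreutils, the timeout(1) utility wraps that background-and-kill pattern in a single command. A minimal sketch, using `sleep 60` as a stand-in for a hanging wget so it runs without a network (and assuming GNU timeout is available, which stock Solaris may not ship):

```shell
# timeout(1) runs a command and sends it SIGTERM if it exceeds the limit.
# `sleep 60` stands in here for a wget that hangs on a dead connection.
timeout 2 sleep 60
status=$?
echo "status=$status"   # 124 means timeout killed the command
```

Exit status 124 is how GNU timeout reports that it had to kill the child, so a script can distinguish "wget hung" from "wget failed for another reason".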

Still, the question remains: why use Yahoo (or any other website) to get the date/time when there's such a nice thing as NTP?
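For what it's worth, both of the original questions can be handled in one wget call: -O /dev/null discards the body (so no index.html is written), and -S prints the response headers, which include the Date: line. A sketch of just the parsing step, using a canned header block in place of live wget output so it runs offline (the header text is an assumption, not a real response):

```shell
# `wget -S` writes the HTTP response headers to stderr; the Date: header
# carries the server's clock. The canned text below stands in for that
# output (assumption: the server sends a standard Date header).
headers='HTTP/1.0 200 OK
Date: Thu, 01 Jan 2004 00:00:00 GMT
Content-Type: text/html'
# Strip the "Date: " prefix, allowing leading whitespace since wget
# indents the headers it echoes.
server_date=$(printf '%s\n' "$headers" | sed -n 's/^[[:space:]]*Date: //p')
echo "$server_date"
```

Against a live server the pipeline would look something like `wget --timeout=3 --tries=1 -S -O /dev/null http://www.yahoo.com/ 2>&1 | sed -n 's/^[[:space:]]*Date: //p'` -- though, as said, NTP is the right tool if the goal is setting a clock.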