I am trying to capture screenshots from a huge list of URLs. I am able to manually capture images of individual pages; that is, I simply run the following command to get a screenshot of Foo.com
$ python /path/to/screencapture.sh http://www.foo.com
I want to modify the script so that, instead of entering the URL manually, I can create a file with one unique URL per line and have the script loop through the file, capturing a screenshot of each URL.
For example, the file would look something like this:
foofile
http://www.google.com
http://www.yahoo.com
http://www.espn.com
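One way to sketch this is a plain shell loop that reads the file line by line and hands each URL to the existing command. The `echo` below is a stand-in for the real capture command (the path `/path/to/screencapture.sh` comes from the question; `foofile` is the URL list described above):

```shell
#!/bin/sh
# Create a sample URL file matching the format described in the question.
cat > foofile <<'EOF'
http://www.google.com
http://www.yahoo.com
http://www.espn.com
EOF

# Read foofile one line at a time; each line is a single URL.
# Replace the `echo` with the real capture command, e.g.:
#   python /path/to/screencapture.sh "$url"
while IFS= read -r url; do
  echo "capturing $url"
done < foofile
```

Quoting `"$url"` keeps URLs with special characters intact, and `IFS= read -r` avoids the shell mangling backslashes or surrounding whitespace.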