Need to parse XML from a bash script

I am completely new to bash scripting and now need to write a bash script that parses an XML file and extracts the values from specific tags.

I tried the xsltproc and xml_grep commands, but the issue is that the XML I am trying to parse is not UTF-8, so those commands are unable to parse my XMLs, and I have been unable to find a workaround for that in xsltproc or xml_grep.

Can someone help me write a direct script (something other than utilities like xsltproc or xml_grep) that pulls out the value of the <url></url> tag, regardless of whether the XML is well formed or not?

Here is a sample of the XML:

<site>
 <form>
        <url>http://www.bankoamerica.com/state.cgi?section=signin</url>
        <method>GET</method>
    </form>
</site>

Note: Use CODE-tags when displaying code, data or logs for better readability and to keep formatting like indentation etc., ty.

Something like this:

sed -n 's|.*<url>\(.*\)</url>.*|\1|p' file_name.xml
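A quick way to try this out, sketched below against the sample XML from the question (the file name sample.xml is just a placeholder). Anchoring the pattern with `.*` on both sides matters: without it, sed only replaces the matched portion, so the indentation before `<url>` would remain in the output.

```shell
# Recreate the sample file from the question (hypothetical name sample.xml)
cat > sample.xml <<'EOF'
<site>
 <form>
        <url>http://www.bankoamerica.com/state.cgi?section=signin</url>
        <method>GET</method>
    </form>
</site>
EOF

# -n suppresses default printing; p prints only lines where the
# substitution matched, with everything outside <url>...</url> stripped
sed -n 's|.*<url>\(.*\)</url>.*|\1|p' sample.xml
# → http://www.bankoamerica.com/state.cgi?section=signin
```

Note this is plain text matching, not real XML parsing, so it keeps working even when the file is not well formed or not UTF-8; the trade-off is that it only handles a `<url>...</url>` pair that opens and closes on the same line.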

Hi panyam
Thanks. This is working fine. The XML I gave is a small chunk of a very big one. Can this sed command be changed so that it gives me only the URLs that start with http, https or www? This url tag occurs in several places with different values.

<site>
<form>
<url>http://www.bankofamerica.com</url>
<url>https://www.bankofamerica.com</url>
<url>www.bankofamerica.com</url>
<url>sitekey.bankofamerica.com</url
</form>
</site>

In the above example I just want the first three as a result and not the last one. Can you please help?

sed -n 's|.*<url>\(.*\)</url>.*|\1|p' your_file.xml | egrep "^http|^www"
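Applied to the second sample (saved here under the hypothetical name urls.xml), only the http, https and www entries survive the filter. A sketch, using `grep -E`, the POSIX spelling of egrep:

```shell
# Recreate the second sample, including its unclosed </url tag
cat > urls.xml <<'EOF'
<site>
<form>
<url>http://www.bankofamerica.com</url>
<url>https://www.bankofamerica.com</url>
<url>www.bankofamerica.com</url>
<url>sitekey.bankofamerica.com</url
</form>
</site>
EOF

# The .* anchors keep the sed output flush-left, so ^ in the grep
# pattern matches the start of each extracted URL
sed -n 's|.*<url>\(.*\)</url>.*|\1|p' urls.xml | grep -E '^https?|^www'
```

Two things conspire here: the sitekey line is missing the closing `>` on its end tag, so sed never matches it at all, and even on a well-formed copy of that line the grep filter would drop it, since it starts with neither http nor www.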

Thanks Panyam.