Remove external URLs from .html file

Hi everyone. I have an HTML file with lines like these:
<link href="localFolder/...">
<link href="http://...">
<img src="localFolder/...">
<img src="http://...">

I want to remove the link tags with http in the href and the img tags with http in their src. I'm having trouble removing them because the tags can contain multiple attributes. It's also possible to have multiple link and img tags on the same line of the HTML file, so I can't just remove the entire line.
1) Bash
2) Linux Ubuntu
Thanks!

Make your question easier to answer by showing part of the input data and the desired output.

Sorry, it's a little hard because the site doesn't want me to include html tags.

Input (I mistyped link, img, and a as linkk, imgg, and aa so that the forum would allow me to post the tags):
<linkk rel="stylesheet" type="text/css" href="localFolder/my.css"> <linkk rel="stylesheet" type="text/css" href="http://www.noob.com">
<imgg src="localFolder/sad.jpg"> <imgg src="http://www.noob.com/sad.jpg">
<aa href="http://www.google.com">

Output:
<linkk rel="stylesheet" type="text/css" href="localFolder/my.css">
<imgg src="localFolder/sad.jpg">
<aa href="http://www.google.com">

----------
Essentially, I want to remove everything between < > if there is an http inside the < >, except for <a href>

See if this works for you:

sed '/aa href/!s/ <.*http.*>//' input_file
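
On the sample above, assuming it is saved as input_file, that gives:

$ sed '/aa href/!s/ <.*http.*>//' input_file
<linkk rel="stylesheet" type="text/css" href="localFolder/my.css">
<imgg src="localFolder/sad.jpg">
<aa href="http://www.google.com">

Two caveats worth knowing: the pattern requires a space before the offending tag, so an http tag at the very start of a line is left alone; and because .* is greedy, a local tag sitting between two http tags on the same line gets removed along with them.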

Hmmm. Two ways to deal with this that I see:

  1. Using a full-fledged HTML parser. A good starting point would be perl's HTML::Parser module. You could load in the HTML file, hunt through the tree of tags for the things you want changed, alter them, and write the result back out. This is the proper way; a rough command-line sketch of the same idea follows this list.
  2. Fold, spindle, and mutilate the HTML into something that can be processed line by line.
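
For what it's worth, here is a rough sketch of option 1 using command-line parsers instead of HTML::Parser. It assumes libxml2's xmllint and xmlstarlet are installed (those tools and the exact XPath expressions are my own illustration, not something tested here):

# Let a real parser turn the (possibly sloppy) HTML into well-formed XML,
# then delete link/img elements whose href/src contains "http".
# a tags are deliberately left untouched, per the requirement.
xmllint --html --xmlout --recover input.html 2>/dev/null |
xmlstarlet ed -d '//link[contains(@href,"http")]' \
              -d '//img[contains(@src,"http")]'

Note that xmllint normalizes the markup (lowercased tags, missing html/body elements filled in), so you get cleaned-up markup back rather than the original bytes.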

The script below is very quick and dirty, highly inefficient, and most decidedly not a full-fledged HTML parser. While it works for my test cases, it does have limitations: URLs containing ' or " will confuse it, some fancy meta-tags may confuse it, and if any step in the process produces lines longer than sed or your shell can handle, it may explode in a giant fiery ball.

#!/bin/bash

# Add %@% to the end of every line, turn newline into space,
# and add newlines at the very beginning and end of each HTML tag.
# Then we'll get each tag on one line while we READ line by line.
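# Example: the line '<p>hello <img src="x">' comes out as the lines
# '<p>', 'hello ', and '<img src="x">', with the %@% marker ending up
# on a line of its own where the original newline was.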
sed 's/$/%@%/' | tr '\n' ' ' | sed 's/</\n</g;s/>/>\n/g' |
while IFS="" read -r LINE   # -r keeps read from mangling backslashes
do
        case "${LINE:0:2}" in
        "</")
#               echo "<!-- Close tag -->"
                echo "${LINE}"
                ;;
        "<"*)
                # The first word after "<" is the tag name; G soaks up the rest
                read -r TAGTYPE G <<< "${LINE:1}"

                # Feed different things into sed depending on what tag we got
                case "${TAGTYPE}" in
                [iI][mM][gG])
                        REPLACE="[sS][rR][cC]"
                        WITH="src"
                        ;;
                [aA])
                        REPLACE="[hH][rR][eE][fF]"
                        WITH="href"
                        ;;
                *)
                        REPLACE=""
                        ;;
                esac

                if [ -z "${REPLACE}" ]
                then
                        #echo "<!-- No substitution -->"
                        echo "${LINE}"
                else
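                        # Blank out the first src=/href= value in the tag,
                        # e.g. src="http://x/y.jpg" becomes src=''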
                        echo "${LINE}" | sed "s#${REPLACE}=['\"][^'\"]*['\"]#${WITH}=''#"
                fi

                ;;
        *)
                #echo "<!-- Raw text -->"
                echo "${LINE}"
                ;;
        esac
# Delete all newlines, then change %@% back into newlines
done | tr -d '\n' | sed 's/%@%/\n/g'

It reads on stdin and writes to stdout.
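
For example, if it is saved as strip_urls.sh (the name is just an example):

chmod +x strip_urls.sh
./strip_urls.sh < input.html > output.html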

Neither method really ends up being very easy. I suspect there's a whole new language waiting to be made to deal with this.

----------

Quote: "That will strip out all url's"

No, it won't. But it will also strip them out from incorrect places inside those tags, should they have a title containing a URL or something.
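
If that remark refers to the earlier sed one-liner, this made-up input shows the problem; the img's src is local, but the whole tag is removed anyway because http appears in its title:

$ echo '<p> <imgg src="localFolder/ok.jpg" title="see http://example.com">' | sed '/aa href/!s/ <.*http.*>//'
<p>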