AWK script to detect webpage addresses in a file

Hi guys, I'm very new to Unix and I have to create an awk script that detects webpage addresses in a file or webpage and outputs how many times each address was detected. For example, if my file was:

(Note: the quotation marks were added to stop the addresses being turned into links.)

"www.google.com"
"www.facebook.com"
"www.google.com"

the output should be:

"www.google.com" x2
"www.facebook.com" x1

However, I cannot get this to work at the moment. Can somebody please tell me what is wrong with my code?

BEGIN {print "Running script on the file",ARGV[1]} 

/www*/{for(i=1;i<=NF;i++)
    {if($i ~ /^www/)
        link[NR] = $i
        occurs[NR] = 1
    }

}
{for(i = 1; i < NR; i++) 
    {if(link[NR] == link)
        occurs++
        link[NR] = "repeat"    
    }
}
{print link[NR],"occurs",occurs[NR],"times"}
END {print "Ending"}

Thank you!
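For comparison, a common way to do this kind of counting in awk is to key an associative array on the address itself rather than on the record number (NR), and to print the totals once in the END block after the whole file has been read. The sketch below is one possible version, not the original poster's code; the optional leading quotation mark in the pattern and the exact output format are assumptions based on the sample input and output above:

BEGIN {print "Running script on the file",ARGV[1]}

{
    # examine every whitespace-separated field on the current line
    for(i = 1; i <= NF; i++)
        # a field starting with an optional quote followed by "www."
        # is treated as a webpage address
        if($i ~ /^"?www\./)
            count[$i]++    # key the array on the address string itself
}

END {
    # print each distinct address once, with its total
    for(addr in count)
        print addr, "x" count[addr]
    print "Ending"
}

Run as: awk -f script.awk filename. Because the array is indexed by the address string, duplicate addresses collapse into a single entry automatically, so no second pass or "repeat" marker is needed.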
