AWK script to detect webpage addresses in a file

Hi guys, I'm very new to Unix and I have to create an awk script that detects webpage addresses in a file and outputs how many times each address was detected. For example, if my file was:

www.google.com
www.facebook.com
www.google.com

the output should be:

www.google.com x2
www.facebook.com x1

However, I cannot get this to work at the moment. Can somebody please tell me what is wrong with my code?

BEGIN {print "Running script on the file",ARGV[1]} 

/www*/{for(i=1;i<=NF;i++)
    {if($i ~ /^www/)
        link[NR] = $i
        occurs[NR] = 1
    }

}
{for(i = 1; i < NR; i++) 
    {if(link[NR] == link)
        occurs++
        link[NR] = "repeat"    
    }
}
{print link[NR],"occurs",occurs[NR],"times"}
END {print "Ending"}

Thank you!
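For reference, a minimal corrected sketch of the script above (a rework, not the original poster's own fix): the main problems are that the arrays are keyed on `NR` instead of on the link text, and that the second block compares an array to a scalar. Keying the count array on the link string itself and deferring the printing to `END` makes the duplicate handling unnecessary. File names here (`links.awk`, `infile`) are made up for the example.

```shell
# Corrected sketch: count each www.* field, keyed by the link text itself.
cat > links.awk <<'EOF'
BEGIN { print "Running script on the file", ARGV[1] }
{
    for (i = 1; i <= NF; i++)      # check every field on the line
        if ($i ~ /^www/)
            occurs[$i]++           # link string as the array key
}
END {
    for (link in occurs)
        print link, "occurs", occurs[link], "times"
    print "Ending"
}
EOF
printf 'www.google.com\nwww.facebook.com\nwww.google.com\n' > infile
awk -f links.awk infile
```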

awk '{count[$0]++} END {for (c in count) print c " x" count[c]}' infile
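Run against a sample file like the one above (the file name `infile` is just for the example), the one-liner prints each unique address with its count; note that awk's `for (c in count)` iteration order is unspecified, so the lines may come out in any order:

```shell
# One address per line; count[$0]++ tallies identical lines.
printf 'www.google.com\nwww.facebook.com\nwww.google.com\n' > infile
awk '{count[$0]++} END {for (c in count) print c " x" count[c]}' infile
# prints each unique address once, e.g. "www.google.com x2"
```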

Or...

sort infile | uniq -c
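Worth noting: `uniq -c` puts the count before the line rather than in the requested `name xN` format, so if that format matters, a small awk stage can reorder it (a sketch, again using a hypothetical `infile`):

```shell
printf 'www.google.com\nwww.facebook.com\nwww.google.com\n' > infile
# uniq -c emits "  2 www.google.com"; awk swaps the fields around.
sort infile | uniq -c | awk '{print $2 " x" $1}'
```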

I managed to fix it before your post, but thanks anyway; yours is a much shorter solution.