how many unique lines in a file

I have a file, test.txt with approx 12,000 lines. Each line is a single word that looks like a hex address. There are many repeats. Over half of the lines are the same. I want to count how many UNIQUE lines there are.

#>more test.txt

0x123456
0x56AF23
0x99ABC1
0x123456
0x123456
0x99ABC1
0xADDE77
0x123456
0x123456
0x99ABC1

In this case there are 4 UNIQUE lines... How do I script this?
In other words, I need a script that only prints the unique lines; then I can just run wc -l on the result.
Thank you very much!

It also might help to know that each line is a fixed number of hex characters. The actual file that I need to count has 10 characters per line, for example 0x123456FF (so the above example isn't exact), but each line begins with "0x" followed by 8 hex characters.

Something like this?

sort test.txt | uniq | wc -l

With my version of sort, you can use the -u option instead of piping the sort into uniq.

sort -u file | wc -l

With awk only:

awk '{a[$0]++} END {print length(a)}' file
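One caveat: calling length() on an array is a gawk extension, not guaranteed by POSIX awk. A portable sketch (using a sample file mirroring the question's example, so it runs standalone) counts each line the first time it is seen:

```shell
# Build a sample input like the one in the question (assumed data, for illustration)
printf '%s\n' 0x123456 0x56AF23 0x99ABC1 0x123456 0x123456 \
              0x99ABC1 0xADDE77 0x123456 0x123456 0x99ABC1 > test.txt

# seen[$0]++ is 0 only the first time a line appears, so n is
# incremented once per unique line; works in any POSIX awk.
awk '!seen[$0]++ { n++ } END { print n }' test.txt
```

This prints 4 for the sample above, and it avoids both the extra sort and the gawk-only length(array) call.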