How to extract just a word from a File in Shell?

Hello Friends,

I have a txt file which has data like this:

 
TNS Ping Utility for Solaris: Version 10.2.0.3.0 - Production on 23-MAR-2010 15:38:42
Copyright (c) 1997, 2006, Oracle.  All rights reserved.
Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (HOST = ab1078uk.server.com) (PORT = 1521))) (CONNECT_DATA = (SID = ab1078uk)))
OK (0 msec)

From this file I want to grep for just "ab1078uk.server.com".

The text above is repeated throughout the file several times; each time the name "ab1078uk" is different, while ".server.com" stays the same. I want to extract all of these names. How can I do it? Hope I am clear with my requirement.

Is this
(HOST = ab1078uk.server.com)
what you are searching for?
And is it always in this format/segment: HOST = xxx.server.com?

Thanks for replying.
Yes, the format
(HOST = ab1078uk.server.com)
will always be the same; only the word "ab1078uk" will keep changing.

I will be searching for ".server.com" and it should return "ab1078uk.server.com",

something like:

cat file_name|grep *.server.com

I know this won't work; I'm just trying to make things clear.
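
For what it's worth, the regular-expression version of what you are reaching for would be something along these lines, assuming your grep supports the -o option (GNU grep has it; the stock Solaris grep may not):

# -o prints only the matching part of each line, i.e. just the host names
grep -o '[A-Za-z0-9]*\.server\.com' file_name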

Assuming the sample you showed is stored as nj.txt, you could try:

>grep HOST <nj.txt | sed 's/(HOST/(~HOST/' | tr "()" "\n" | grep ~HOST
~HOST = ab1078uk.server.com
~HOST = ab1079us.server.com

If you get that far, pulling out just the host names is simple.
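
For example, tacking an awk on the end of that pipeline should leave just the host names (a sketch based on the output above):

>grep HOST <nj.txt | sed 's/(HOST/(~HOST/' | tr "()" "\n" | grep ~HOST | awk '{print $NF}'
ab1078uk.server.com
ab1079us.server.com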

Also assuming nj.txt is your filename, and assuming the text within is as shown:

awk -F\( '$6 ~ "server.com" { print $6 }' nj.txt  | awk -F= '{ print $2 }' | tr -d \)
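
If you prefer, the same idea fits in a single awk. A sketch, splitting on the parentheses and stripping the "HOST = " prefix from the matching field:

awk -F'[()]' '{for(i=1;i<=NF;i++) if ($i ~ /^HOST/) {sub(/^HOST *= */, "", $i); print $i}}' nj.txt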

Thanks, that worked!


awk -F"[=)]" '{for(i=1;i<=NF;i++){if ($i~/[A-Za-z0-9]+\.server\.com/) print $i}}' FILE 
sed -n '/HOST/{s/.*HOST[ ]*= //;s/).*//p}' file
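
If you want a sorted list with each host name appearing only once, you could pipe either of those through sort -u, e.g.:

sed -n '/HOST/{s/.*HOST[ ]*= //;s/).*//p}' file | sort -u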