Hello everyone
Sorry, I have to add another sed question. I am searching a log file and need only the first 2 occurrences of the text that comes after (note the space) "string " and before a ",". I have tried
sed -n 's/.*string \([^ ]*\),.*/\1/p' file
with some, but limited, success. This prints every occurrence of only one particular string, when in the file the first 2 strings will always be different. Using this command, sed gives me every 3rd instance, I think.
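For what it's worth, the behaviour above is down to the greedy `.*` at the start of the sed pattern: it always skips ahead to the LAST "string " on the line, so only the final value ever comes out. A sketch of the difference, using a made-up sample line (the field names are hypothetical); splitting on commas with awk sidesteps the greediness entirely:

```shell
# Hypothetical sample line in the shape described in the thread
line='aaabbb ccc-ddd string ip1,eee string ip2,fff string ip3,ggg'

# The sed from the thread: greedy .* jumps to the last "string ", so only ip3 prints
printf '%s\n' "$line" | sed -n 's/.*string \([^ ]*\),.*/\1/p'
# -> ip3

# awk alternative: split on commas, then strip each matching field down to its value
printf '%s\n' "$line" |
awk -F, '{ for (i = 1; i <= NF; i++)
             if ($i ~ /string /) { sub(/.*string /, "", $i); print $i } }'
# -> ip1
#    ip2
#    ip3
```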
I would then like to append the extracted strings to a file, each on a new line, prefixed by another "string ".
I am scanning for IP addresses in a log file. The log file being scanned contains the sets of IPs and the correct delimiters on the same line, but that line is never in a fixed place.
line x
line y
aaabbb ccc-ddd string ipaddress,eee string ipaddress2,fff string ipaddress3,ggg
line z
This line is then repeated further on. It's not really needed, but is scanning the lines in reverse order possible?
The results then need to be output in the form "string2 ipaddress", taking a new line each time, with 2 or 3 in total. A script will be necessary there, won't it?
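A single awk pass can do all of the above without a wrapper script. A sketch under the thread's assumptions (the sample file, its contents, and the "string2" prefix are taken from the description; the file path is made up):

```shell
# Build a small sample log in the layout described above (hypothetical path)
printf '%s\n' 'line x' \
  'aaabbb ccc-ddd string ip1,eee string ip2,fff string ip3,ggg' \
  'line z' > /tmp/sample.log

# Print each value after "string " as "string2 <value>"; exit after the first
# 2 results so the rest of the log is never read.
awk -F, '{ for (i = 1; i <= NF; i++)
             if ($i ~ /string /) {
                 sub(/.*string /, "", $i)
                 print "string2 " $i
                 if (++n == 2) exit      # stop after 2 results
             } }' /tmp/sample.log
# -> string2 ip1
#    string2 ip2
```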
Currently I am getting all the instances of ipaddress3 in the log file.
Thanks again
Verdepollo's awk command nearly works. I have altered it slightly. I was not specific enough: there are also commas throughout the file where they are not needed. This works, but I don't think it is optimal. The log file may sometimes be quite large, and it is to be run on an embedded device. Can CPU time be reduced by finding a way around the grep command? Thanks again
---------- Post updated at 09:51 PM ---------- Previous update was at 07:49 PM ----------
It seems as though I am having problems. I need to implement this as soft-coded configuration, echoed into a script. When I do this I lose the quotation marks around "\n" and "string2". This was the main reason I initially preferred a sed command, as it might avoid problems like this. Can anyone give advice on getting round this quoting problem? To give an example
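The usual trick when a command that itself contains single quotes has to survive an echo is to close the single-quoted string, emit an escaped quote, and reopen it: `'\''`. Double quotes and `\n` need no escaping at all inside single quotes. A minimal sketch (the file name and the awk body are placeholders, not the thread's real command):

```shell
# Inside a single-quoted string, embed a literal ' as: '\''
# (close the string, escaped quote, reopen the string).
echo 'awk -F, '\''/DNS/ { print "string2", $2 }'\'' logfile' > /tmp/gen.sh
cat /tmp/gen.sh
# -> awk -F, '/DNS/ { print "string2", $2 }' logfile
```

The double quotes around "string2" arrive in /tmp/gen.sh intact, because the shell never interprets anything between the outer single quotes.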
Thanks, Ahamed. Can you be a little more precise about adding extra parts to the echo/awk command? The current commands are very similar as they are. I am just being introduced to awk and sed, so please forgive my lack of knowledge.
Here is some sample data:
Wed Dec 14 04:41:36 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
Wed Dec 14 04:41:36 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 197.73.129.164,redirect-gateway def1,dhcp-option DNS 208.67.220.220,dhcp-option DNS 8.8.8.8,dhcp-option DNS 208.67.222.222,route-gateway 197.73.129.164,ping 10,ping-restart 30,ifconfig 197.73.129.170 255.255.255.240'
Wed Dec 14 04:41:36 2011 OPTIONS IMPORT: timers and/or timeouts modified
Wed Dec 14 04:41:36 2011 OPTIONS IMPORT: --ifconfig/up options modified
Wed Dec 14 04:41:36 2011 OPTIONS IMPORT: route options modified
Wed Dec 14 04:41:36 2011 OPTIONS IMPORT: route-related options modified
Wed Dec 14 04:41:36 2011 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
It is the DNS server IPs that need to be obtained.
It is part of a generic DD-WRT/openwrt script I am creating to allow users to more easily connect to an openvpn server.
The script is stored in the nvram of a DD-WRT router, and when extracted it is executed line by line. There is only one script area available in the HTTP input area, but multiple files are needed, so I need to use echo to create those files, each in that form.
The script is executed every time the openvpn client successfully connects or reconnects.
However, I then realised the command after echo is still not working. Note that the ' marks have been taken away; to get around this, \' must be used.
echo awk -F, \''/DNS/{for(i=NF;i>1;i--){if($i~"DNS"){gsub(".* ","nameserver ",$i);print $i}}}'\' log
Note that I changed the for loop to search backwards from the end.
Finally, although not as great an issue: can awk stop after it gets a specific number of results? There are many lines like this containing DNS values, but only the 3 are needed. The file may also get rather long, so cutting CPU time by stopping once the information is obtained is preferable.
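It can: awk's `exit` ends the whole run immediately, so counting matches and bailing out means a long log is never scanned past the first useful line. A sketch against the PUSH_REPLY sample above (the log path is made up, and `sub(/.*DNS /, ...)` is a variant of the thread's `gsub(".* ", ...)` that gives the same "nameserver <ip>" output here):

```shell
# Hypothetical log file holding the sample lines from the thread
cat > /tmp/openvpn.log <<'EOF'
Wed Dec 14 04:41:36 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
Wed Dec 14 04:41:36 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 197.73.129.164,redirect-gateway def1,dhcp-option DNS 208.67.220.220,dhcp-option DNS 8.8.8.8,dhcp-option DNS 208.67.222.222,route-gateway 197.73.129.164,ping 10,ping-restart 30,ifconfig 197.73.129.170 255.255.255.240'
EOF

# exit after 3 DNS values: the rest of the file is never read
awk -F, '/dhcp-option DNS/ {
            for (i = 1; i <= NF; i++)
                if ($i ~ /dhcp-option DNS /) {
                    sub(/.*DNS /, "nameserver ", $i)
                    print $i
                    if (++c == 3) exit   # 3 results are enough
                }
         }' /tmp/openvpn.log
# -> nameserver 208.67.220.220
#    nameserver 8.8.8.8
#    nameserver 208.67.222.222
```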