Hi,
I have two files A and B and would like to use A as a filter. Like this:
File A.txt: Contains a list of IP addresses, each one occurring only once and in order:
10.0.0.1
10.0.0.2
10.0.0.4
10.0.0.8
File B.txt: Contains the same IP addresses, each with a corresponding ping time (in fact, A.txt is generated from B.txt with a simple sort -u). B.txt can run to thousands of rows, but I have already sorted the addresses and ping times in ascending order with another filter, like this:
10.0.0.1 2
10.0.0.1 18
10.0.0.2 6
10.0.0.4 8
10.0.0.4 14
10.0.0.8 18
For each IP address in file A, I want to calculate the min/average/max ping time and the packet delay variation (the difference between the max and min times) and store that in a separate file. So the result should look like this:
IP Min Ave Max PDV
10.0.0.1 2 10 18 16
10.0.0.2 6 6 6 0
10.0.0.4 8 11 14 6
10.0.0.8 18 18 18 0
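For reference, the whole table above can be produced in a single awk pass over B.txt. This is a sketch that assumes B.txt holds exactly two whitespace-separated columns (address, ping time); the array names are arbitrary:

```shell
awk '
  {
    ip = $1; t = $2 + 0                 # force numeric comparison of times
    if (!(ip in count)) {               # first time we see this address
      order[++n] = ip                   # remember first-seen order
      min[ip] = t; max[ip] = t
    }
    if (t < min[ip]) min[ip] = t
    if (t > max[ip]) max[ip] = t
    sum[ip] += t; count[ip]++
  }
  END {
    print "IP", "Min", "Ave", "Max", "PDV"
    for (i = 1; i <= n; i++) {
      ip = order[i]
      print ip, min[ip], sum[ip] / count[ip], max[ip], max[ip] - min[ip]
    }
  }
' B.txt
```

Because awk tracks min/sum/max per address itself, this works whether or not B.txt is sorted; the output simply comes out in the order addresses first appear, which for a sorted B.txt is ascending order.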
I have tried quite a few different ways to solve this. One obvious way is to loop through file A and, for each IP address, grep file B and redirect the matches into a new file that is analyzed separately. With the quoting fixed and $LINE used consistently (my first attempts mixed $line and $LINE and broke on that), the loop looks like this:
while read -r LINE ; do grep "^$LINE " B.txt > "file$LINE.txt" ; done < A.txt
The leading ^ and the trailing space keep 10.0.0.1 from also matching 10.0.0.11. If I can get one file per IP address this way, I can figure out how to analyze the data.
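If per-address files really are wanted, a single awk pass can split B.txt without any shell loop, reusing the file<IP>.txt naming from the post (a sketch):

```shell
# Append each line of B.txt to a file named after its first field.
awk '{ print > ("file" $1 ".txt") }' B.txt
```

One caveat: awk keeps each output file open, so with a very large number of distinct addresses you can hit the open-file limit; calling close() after each print avoids that at some speed cost.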
From following the forums, I gather that a better way to handle big text files like this is awk. I have found many examples in this forum, but none that take the entries of one file as input and use them to define the scope for another file.
I have tried to combine 'awk' and 'read', but rookie that I am, I only managed to grow some grey hair. Even this simple test (reading A.txt and matching each address against the left column of B.txt) fails:
while read -r LINE ; do awk '{if ($LINE[1]==$1) print $1}' B.txt ; done < A.txt
It fails because awk never sees shell variables inside single quotes, and $LINE[1] is not valid awk in any case. Passing the address in with -v does work:
while read -r LINE ; do awk -v ip="$LINE" '$1 == ip { print $1 }' B.txt ; done < A.txt
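A loop like that still rescans B.txt once per address. awk's standard two-file idiom uses A.txt as the filter directly, in one pass: NR==FNR is true only while the first file is being read, so A.txt's addresses are loaded into an array, and then only matching B.txt lines are printed. A sketch:

```shell
# First file (A.txt): record each address as an array key, skip to next line.
# Second file (B.txt): print lines whose first field is a recorded address.
awk 'NR==FNR { want[$1]; next } $1 in want' A.txt B.txt
```

The bare expression want[$1] creates the key without assigning a value, and a pattern with no action prints the whole matching line.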
I just wonder if someone could give some advice here? I have really scanned for solutions in other threads on many forums but just can't pull it together.
As a side note: it would be very interesting to know whether all of this could be done in a single command in some way (including my first steps, where A.txt is created and B.txt is sorted). Generating new files all the time means writing to the flash over and over, and it also means extra code to clean up those files as new ones arrive. I would like to avoid that if possible.
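A single pipeline along these lines is plausible, with no intermediate files at all. Here "raw.txt" is only a stand-in for the unsorted ping log that B.txt is currently generated from, and sort -V (GNU version sort, not available in every busybox) is used purely to get the report out in address order; the awk stage derives the address set itself, so A.txt never needs to exist:

```shell
# raw.txt is a hypothetical name for the unsorted "IP time" log.
sort -V raw.txt | awk '
  {
    ip = $1; t = $2 + 0
    if (!(ip in count)) { order[++n] = ip; min[ip] = t; max[ip] = t }
    if (t < min[ip]) min[ip] = t
    if (t > max[ip]) max[ip] = t
    sum[ip] += t; count[ip]++
  }
  END {
    print "IP", "Min", "Ave", "Max", "PDV"
    for (i = 1; i <= n; i++) {
      ip = order[i]
      print ip, min[ip], sum[ip] / count[ip], max[ip], max[ip] - min[ip]
    }
  }
'
```

If address order in the report does not matter, even the sort can be dropped, which removes the last bit of extra I/O.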
Thanks!
Z