Remove lines that are subsets of other lines in File

Hello everyone,

Although it seems easy, I've been stuck on this problem for a while now and I can't figure out a way to get it done.

My problem is the following:

I have a file where each line is a sequence of IP addresses, for example:

10.0.0.1 10.0.0.2 
10.0.0.5 10.0.0.1 10.0.0.2
...

What I'd like to do is remove lines that are completely contained in other lines. In the previous example, line 1 would be deleted as it is contained in line 2.

So far, I've worked with python and set() objects to get the job done, but I've got more than 100K lines and the set lookups become more and more time consuming as the program runs :confused:
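For context, this is roughly the kind of thing I've been doing (a simplified sketch, not my exact script, and the file name is made up):

# Naive approach: compare every line's IP set against every other line's set.
# This is O(n^2) over the number of lines, which is why it crawls past 100K lines.
with open("input.txt") as f:
    lines = [line.split() for line in f if line.strip()]

ip_sets = [set(fields) for fields in lines]
kept = []
for i, s in enumerate(ip_sets):
    # drop the line if some other line's set strictly contains it,
    # or if an identical line already appeared earlier
    covered = any(s < t or (s == t and j < i)
                  for j, t in enumerate(ip_sets) if j != i)
    if not covered:
        kept.append(" ".join(lines[i]))

print("\n".join(kept))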

Thanks for your help

Testing / set lookups WILL become increasingly lengthy as the data set grows. If you already have a working solution, it may be very challenging to squeeze even a few percent more performance out of it.

Hello RudiC,

Well, I managed to trim down the dataset with "sort -u input > output", but that only removes exact duplicates. Even so, running my script on this filtered dataset will take ages :confused:

I presume 'sed' could help me, but I can't figure out what regex I should feed it...

Assuming that the 1st line from:

10.0.0.2 10.0.0.1
10.0.0.5 10.0.0.1 10.0.0.2

should also be removed as a subset of the 2nd line and that the 1st line from:

10.0.0.2
10.0.0.21

should not be removed as a subset of the 2nd line; I can't see how sed would be a good tool for attacking this problem. The periods in IP addresses also need to be handled specially in sed since they have a special meaning in regular expressions. At first glance, awk would seem to me to be much more suited for this problem than sed. I haven't done enough with python to compare awk and python for a problem like this.

And, while sort -u will get rid of duplicate lines, it won't reduce the problem you need to solve with input like:

10.0.0.5 10.0.0.1 10.0.0.2
10.0.0.5 10.0.0.2 10.0.0.1
10.0.0.1 10.0.0.5 10.0.0.2
10.0.0.1 10.0.0.2 10.0.0.5
10.0.0.2 10.0.0.1 10.0.0.5
10.0.0.2 10.0.0.5 10.0.0.1
10.0.0.1 10.0.0.2
10.0.0.1 10.0.0.5
10.0.0.2 10.0.0.1
10.0.0.2 10.0.0.5
10.0.0.5 10.0.0.1
10.0.0.5 10.0.0.2
10.0.0.1
10.0.0.2
10.0.0.5

which I assume should be reduced to a single output line that matches one of the 1st six lines above.

But, if you have a lot of duplicate lines in your input, using sort -u as a preprocessing step may reduce your overall run time. Without a much deeper understanding of how many duplicates there are and how you process lines while looking for duplicates, we would just be making wild guesses as to whether using sort -u would increase or decrease run times.

Preprocessing your input so that all of the IP addresses in each line are in sorted order might also be an advantage. Whether or not it would help again depends on what your input looks like and how you are looking for duplicates. It might also speed up processing if your input was sorted with the longest sequences coming first.
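If it helps, here is a rough sketch of that kind of pre-pass in python (since that is what you are already using; the file names are just placeholders):

# Sketch of the suggested preprocessing (file names are placeholders):
# sort the IPs within each line, drop duplicate lines, and write the
# longest sequences first. The sort is lexical; any consistent order works.
with open("input.txt") as f:
    normalized = {tuple(sorted(set(line.split()))) for line in f if line.strip()}

with open("preprocessed.txt", "w") as out:
    for seq in sorted(normalized, key=len, reverse=True):
        out.write(" ".join(seq) + "\n")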

Is this a one-time problem; or is this something you're going to be doing on a regular schedule? Getting a correct result is crucial. Getting a correct result as quickly as possible might not be important if you only need to do this once.

If you show us your code, we have a MUCH better chance of helping you correct or optimize what you're doing.

Giving us more details about your input could also make a huge difference in the way some of us would approach a problem like this. What are the minimum and maximum number of IP addresses in your sequences? Do all of the IP addresses in a single sequence have the same 1st component (i.e., 10.0.0.1, 10.0.0.2, and 10.0.0.5)? Everything you know about your data could provide hints that could optimize code used to solve this problem.

You have more than 100K input sequences. How much "more than"? What OS are you using? What hardware are you using? How much memory do you have?

Given this input file:

10.0.0.1 10.0.0.2 10.0.0.3
10.0.0.3 10.0.0.4
10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6
10.0.0.1 10.0.0.2
10.0.0.4 10.0.0.5 10.0.0.6
10.0.0.5 10.0.0.6

Is solution 1 below acceptable, or must the output be further reduced to solution 2?

Solution 1
10.0.0.1 10.0.0.2 10.0.0.3
10.0.0.3 10.0.0.4
10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6

Solution 2
10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6
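To make the difference concrete, here is a minimal python sketch of both readings, run on the sample file above: a plain top-to-bottom pass that drops any line contained in an already-kept line gives solution 1, while sorting the lines longest-first before the same pass gives solution 2.

def reduce_lines(lines, longest_first=False):
    # Drop any line whose IP set is contained in a line already kept.
    if longest_first:
        lines = sorted(lines, key=lambda l: -len(l.split()))
    kept = []          # list of (original line, set of its IPs)
    for line in lines:
        s = set(line.split())
        if not any(s <= k for _, k in kept):
            kept.append((line, s))
    return [line for line, _ in kept]

sample = [
    "10.0.0.1 10.0.0.2 10.0.0.3",
    "10.0.0.3 10.0.0.4",
    "10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6",
    "10.0.0.1 10.0.0.2",
    "10.0.0.4 10.0.0.5 10.0.0.6",
    "10.0.0.5 10.0.0.6",
]
print(*reduce_lines(sample), sep="\n")                      # solution 1
print(*reduce_lines(sample, longest_first=True), sep="\n")  # solution 2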

The awk solution below processes the lines with the most IPs first; although not optimal, it should give noticeably more concise results than just processing the file top to bottom.

I tried it on a data file with 170K lines and it took approx. 3 seconds to process.

awk '
# have(ln) returns 1 if every IP in line ln has already been seen;
# otherwise it records any new IPs in havelist and returns 0.
function have(ln,   ip,num,ret) {
  ret=1
  num=split(ln,ips)
  for(ip=1;ip<=num;ip++)
     if(!(ips[ip] in havelist)) {
         havelist[ips[ip]]
         unique++
         ret=0
      }
  return ret
}
# Save every line and group line numbers by field count (number of IPs).
{ L[NR]=$0; D[NF]=D[NF] " " NR; max=NF>max?NF:max }
END {
   # Work from the longest lines down; print a line only if it contributes
   # at least one IP not already seen on a previously printed line.
   for(c=max;c;c--)
      if(D[c]) {
         lns=split(D[c],v)
         for(i=1;i<=lns;i++)
           if(!have(L[v[i]])) print L[v[i]]
      }
   printf "Data file contains %'\''d unique IPs\n", unique > "/dev/stderr"
}' infile

For anyone who wants to test possible solutions, here is the script I used to make my test file:

for ((i=0;i<170000;i++))
do
    # each line gets 1 to 7 random addresses in the 10.0.0.0/16 range
    printf "10.0.%d.%d" $((RANDOM%256)) $((RANDOM%256))
    ips=$((RANDOM%8))
    while ((--ips > 0))
    do
       printf " 10.0.%d.%d" $((RANDOM%256)) $((RANDOM%256))
    done
    printf "\n"
done