Fast algorithm to compare an IP address against a list of IP sections?

I have two files:

file1:
41.138.128.0    41.138.159.255  location
41.138.160.0    41.138.191.255  location
41.138.192.0    41.138.207.255  location
41.138.208.0    41.138.223.255  location
41.138.224.0    41.138.239.255  location
41.138.240.0    41.138.255.255  location
41.138.32.0     41.138.63.255   location
41.138.64.0     41.138.71.255   location
41.138.72.0     41.138.79.255   location
41.138.80.0     41.138.87.255   location
.....



file2:
41.138.208.3    information
41.138.211.23    information
.....

file1, containing IP section information, has about 10,000 rows; file2, containing an IP and other information, has 30,000 rows and is growing.
Now I want to fetch the "location" field from file1 based on the IP field from file2, and combine the "location" field with the "information" field from the two files. I know I can convert all these IPs to unsigned integers and compare the IP field from file2 against the IP sections in file1, but a linear comparison like that is rather inefficient. I need a FAST algorithm to do this; AWK would be favored.
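For reference, the integer conversion mentioned above just treats the four octets as the bytes of a 32-bit big-endian number; a minimal awk sketch:

```shell
awk 'BEGIN {
    split("41.138.208.3", q, ".")
    # ((41*256 + 138)*256 + 208)*256 + 3
    print ((q[1]*256 + q[2])*256 + q[3])*256 + q[4]   # prints 696963075
}'
```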

Does anyone have an idea?
Thank you in advance.

What is field2 from file1 used for?

A straightforward approach using awk would be (if the second field from file1 is not being used) to build an associative array mapping field1 to field3, then parse the second file and check whether each entry is present in the associative array; if it is, print out the value from the array.
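As a sketch, that exact-match lookup would look like the following; note it only fires when an IP in file2 is literally equal to a section's first field, so it does not handle addresses that merely fall inside a section (sample data made up for illustration):

```shell
cat > file1 <<'EOF'
41.138.208.0    41.138.223.255  location
EOF
cat > file2 <<'EOF'
41.138.208.0    information
EOF

awk 'NR==FNR { loc[$1] = $3; next }    # file1: map field1 -> field3
     $1 in loc { print $0, loc[$1] }   # file2: O(1) hash lookup
' file1 file2
```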

Are the IP addresses in file2 always legal IPs?

That is, there shouldn't be an address like 41.138.208.356?

Sorry, I didn't elaborate on my problem. field1 and field2 from file1 are legal IP addresses forming a section (e.g. from 111.111.111.0 to 111.111.111.255). I want to get the 'location' field from file1 given an IP address (the first field from file2) falling within the section.
The IP sections in file1 are sorted.

Not very efficient: it still has to go through file1 for every line of file2, and only saves time via break when a match is found.

$ cat file1
41.138.128.0    41.138.159.255  location1
41.138.160.0    41.138.191.255  location2
41.138.192.0    41.138.207.255  location3
41.138.208.0    41.138.223.255  location4
41.138.224.0    41.138.239.255  location5
41.138.240.0    41.138.255.255  location6
41.138.32.0     41.138.63.255   location7
41.138.64.0     41.138.71.255   location8
41.138.72.0     41.138.79.255   location9
41.138.80.0     41.138.87.255   location10

$ cat file2
41.138.208.3    information
41.138.80.23    information
41.138.11.23    information
11.138.11.23    information

awk '
NR==FNR {                               # file1: store each section
    split($1,s,"."); split($2,e,".")
    a[NR]=$3                            # location
    b[NR]=s[1] FS s[2]                  # first two octets (assumed identical for start and end)
    c[NR]=s[3]; d[NR]=e[3]              # third-octet bounds of the section
    i=NR; next
}
{   split($1,ip,".")                    # file2: linear scan over all sections
    for (j=1;j<=i;j++)
        if (ip[1] FS ip[2]==b[j] && ip[3]>=c[j] && ip[3]<=d[j]) { print $0 FS a[j]; break }
}' file1 file2

41.138.208.3    information location4
41.138.80.23    information location10

This is not efficient.
Thank you for your time anyway.
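Since the sections in file1 are sorted and non-overlapping, a binary search over them cuts each lookup from O(n) to O(log n), i.e. roughly 14 probes instead of up to 10,000 per address. A sketch in awk, assuming file1 is sorted numerically by start address (the sample above is sorted lexically, so a numeric sort pass may be needed first):

```shell
# sample data, sorted numerically by section start
cat > file1 <<'EOF'
41.138.32.0     41.138.63.255   location7
41.138.64.0     41.138.71.255   location8
41.138.72.0     41.138.79.255   location9
41.138.80.0     41.138.87.255   location10
41.138.128.0    41.138.159.255  location1
41.138.160.0    41.138.191.255  location2
41.138.192.0    41.138.207.255  location3
41.138.208.0    41.138.223.255  location4
41.138.224.0    41.138.239.255  location5
41.138.240.0    41.138.255.255  location6
EOF
cat > file2 <<'EOF'
41.138.208.3    information
41.138.80.23    information
11.138.11.23    information
EOF

prog='
function ip2int(ip,   q) {      # dotted quad -> unsigned 32-bit integer
    split(ip, q, ".")
    return ((q[1]*256 + q[2])*256 + q[3])*256 + q[4]
}
NR==FNR {                       # file1: load the sorted sections
    lo[FNR] = ip2int($1); hi[FNR] = ip2int($2); loc[FNR] = $3
    n = FNR; next
}
{                               # file2: binary-search each IP
    ip = ip2int($1); l = 1; r = n
    while (l <= r) {
        m = int((l + r) / 2)
        if (ip < lo[m])      r = m - 1
        else if (ip > hi[m]) l = m + 1
        else { print $0, loc[m]; break }
    }
}'
awk "$prog" file1 file2
# 41.138.208.3    information location4
# 41.138.80.23    information location10
```

Unmatched addresses (like 11.138.11.23 above) simply produce no output, same as the linear version.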

Can't you keep file1 exclusively as integers?

I don't know if it's efficient enough:

perl -MSocket -lane'
    # While @ARGV is non-empty we are still reading file1:
    # store [start, end, location], IPs unpacked to unsigned 32-bit ints.
    push @r, [
        unpack("N", inet_aton $F[0]),
        unpack("N", inet_aton $F[1]),
        $F[2],
    ] and next if @ARGV;

    # Lines from file2: convert the IP and scan the stored sections.
    $n = unpack "N", inet_aton $F[0];
    $c = $_;

    # An explicit loop is used because "last" cannot exit a do-block.
    for (@r) {
        if ($_->[0] <= $n && $n <= $_->[1]) {
            print "$c\t$_->[2]";
            last;
        }
    }
    ' file1 file2