Remove duplicate lines from file based on fields

Dear community,
I have to remove duplicate lines from a file containing a very large number of rows (millions?), based on the 1st and 3rd columns (the name and the trailing number).

The data are like this:

Region  23/11/2014 09:11:36 41752
Medio   23/11/2014 03:11:38 4132
Info    23/11/2014 05:11:09 4323
Test    23/11/2014 05:11:14 4323
Info    23/11/2014 07:11:09 4323
Test2   23/11/2014 08:11:14 4323

In that case I need to remove one of the lines that contains "Info" and "4323". The output would be:

Region  23/11/2014 09:11:36 41752
Medio   23/11/2014 03:11:38 4132
Info    23/11/2014 05:11:09 4323
Test    23/11/2014 05:11:14 4323
Test2   23/11/2014 08:11:14 4323

Thanks
Lucas

$ awk '!_[$1 $4]++' infile
Region  23/11/2014 09:11:36 41752
Medio   23/11/2014 03:11:38 4132
Info    23/11/2014 05:11:09 4323
Test    23/11/2014 05:11:14 4323
Test2   23/11/2014 08:11:14 4323
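For anyone wondering how the one-liner works: `_[$1 $4]++` counts how often the pair of field 1 and field 4 has been seen, and the leading `!` makes the pattern true only on the first occurrence, so awk prints each line once. One caveat worth noting: plain concatenation of `$1` and `$4` can in principle collide (e.g. key "ab"+"c" equals "a"+"bc"); joining the fields with a comma (awk's SUBSEP) avoids that. A minimal sketch using sample data from this thread:

```shell
# Print each line only the first time its ($1, $4) pair is seen.
# The comma builds the array key with SUBSEP, avoiding collisions
# that plain concatenation ($1 $4) could cause.
printf '%s\n' \
  'Info 23/11/2014 05:11:09 4323' \
  'Test 23/11/2014 05:11:14 4323' \
  'Info 23/11/2014 07:11:09 4323' |
awk '!seen[$1,$4]++'
```

This keeps the first "Info ... 4323" line and drops the second, while the "Test" line passes through untouched.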

Thanks Zaxxon...
It works perfectly and runs very fast on a file with 6M lines!!! :b: