Delete duplicate row based on criteria

Hi,

I have an input file as shown below:

20140102;13:30;FR-AUD-LIBOR-1W;2.495
20140103;13:30;FR-AUD-LIBOR-1W;2.475
20140106;13:30;FR-AUD-LIBOR-1W;2.495
20140107;13:30;FR-AUD-LIBOR-1W;2.475
20140108;13:30;FR-AUD-LIBOR-1W;2.475
20140109;13:30;FR-AUD-LIBOR-1W;2.475
20140110;13:30;FR-AUD-LIBOR-2W;2.475
20140113;13:30;FR-AUD-LIBOR-2W;2.605
20140114;13:30;FR-AUD-LIBOR-2W;2.605

I need to remove duplicates based on the last column's value; however, the criteria is tricky because I need to retain the history whenever the value changes between dates (so a value that changes and later changes back must be kept each time it changes). The expected output is as follows:

20140102;13:30;FR-AUD-LIBOR-1W;2.495
20140103;13:30;FR-AUD-LIBOR-1W;2.475
20140106;13:30;FR-AUD-LIBOR-1W;2.495
20140107;13:30;FR-AUD-LIBOR-1W;2.475
20140110;13:30;FR-AUD-LIBOR-2W;2.475
20140113;13:30;FR-AUD-LIBOR-2W;2.605

Thanks
Shash

awk -F';' '{ if ($3 "|" $4 != prev) print; prev = $3 "|" $4 }' filename
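For anyone wondering how this works: the script builds a key by concatenating field 3 and field 4 with a "|" separator, prints the line only when that key differs from the previous line's key, and then remembers the key in `prev`. Since `prev` starts out empty, the first line always prints; consecutive duplicates are dropped, but a value that changes back to an earlier one is still kept. A self-contained demonstration using the sample data from this thread (the filename `input.txt` is just for illustration):

```shell
# Recreate the sample input from the thread
cat > input.txt <<'EOF'
20140102;13:30;FR-AUD-LIBOR-1W;2.495
20140103;13:30;FR-AUD-LIBOR-1W;2.475
20140106;13:30;FR-AUD-LIBOR-1W;2.495
20140107;13:30;FR-AUD-LIBOR-1W;2.475
20140108;13:30;FR-AUD-LIBOR-1W;2.475
20140109;13:30;FR-AUD-LIBOR-1W;2.475
20140110;13:30;FR-AUD-LIBOR-2W;2.475
20140113;13:30;FR-AUD-LIBOR-2W;2.605
20140114;13:30;FR-AUD-LIBOR-2W;2.605
EOF

# Print a line only when the instrument|value key differs from the
# previous line's key. prev is initially empty, so line 1 always prints.
awk -F';' '{ key = $3 "|" $4; if (key != prev) print; prev = key }' input.txt
```

Note that in awk, string concatenation binds more tightly than `!=`, which is why the original one-liner's `$3"|"$4 != prev` also parses as `($3 "|" $4) != prev`; the `key` variable above just makes that explicit.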

Thanks Pravin27, it worked.