Deleting duplicated chunks in a file using awk/sed

Hi all,

I always appreciate the help I get from this site.

I would like to delete duplicated chunks of strings within the same row (the same group of four lines).

One chunk is comprised of four lines such as:
path name
starting point
ending point
voltage number

I would like to delete a chunk from a row if its "ending point" duplicates that of an earlier chunk in the same row.
For example, the ending points of the first and second chunks in the first row are the same, so I would like to keep only the first chunk; the second chunk is removed from the first row.

In the second row, the ending points of the first and third chunks are the same, so the first chunk is kept and the third is removed.

input.txt:

path_sparc_ffu_dp_out_1885  path_sparc_ffu_dp_out_2759  path_sparc_ffu_dp_out_3115
R_1545/Q    R_1541/Q    R_1545/Q
dp_ctl_synd_out_low[6]  dp_ctl_synd_out_low[6]  dp_ctl_synd_out_low[2]
0.926208    0.910592    0.905082
path_sparc_ffu_dp_out_699   path_sparc_ffu_dp_out_712   path_sparc_ffu_dp_out_819
R_1053/Q    R_1053/Q    R_1053/Q
dp_ctl_synd_out_low[2]  dp_ctl_synd_out_low[6]  dp_ctl_synd_out_low[2]
0.945436    0.945436    0.9435
path_sparc_ffu_dp_in_686
frf_dp_data[42]
dp_ctl_synd_out_high[6]
0.812538

Expected_output.txt:

path_sparc_ffu_dp_out_1885  path_sparc_ffu_dp_out_3115
R_1545/Q        R_1545/Q
dp_ctl_synd_out_low[6]      dp_ctl_synd_out_low[2]
0.926208        0.905082
path_sparc_ffu_dp_out_699   path_sparc_ffu_dp_out_712   
R_1053/Q    R_1053/Q    
dp_ctl_synd_out_low[2]  dp_ctl_synd_out_low[6]  
0.945436    0.945436 
path_sparc_ffu_dp_in_686
frf_dp_data[42]
dp_ctl_synd_out_high[6]
0.812538

The number of columns can be up to 20 in a file.

Actually, I posted the same question on another website, and somebody replied, but their solution did not work correctly. Any help is appreciated.

Best,

Jaeyoung

Instead of trying to get multiple websites to act as your unpaid programming staff, why don't you show us how you have tried to solve this problem on your own? If you can show us what you have tried, maybe we can help you fix it.

We have helped you with 8 other awk scripts in the last six months. Can't you use the examples provided by those scripts to get a good start on what you need here?

First of all, I am sorry about my bad attitude.

I tried 'uniq' first, but it only compares whole lines, so I don't know how to get unique chunks out of it. Then I tried sed:

sed -r 's/(dp_ctl_synd_out_low\[[0-9]\])(.+)(\1)/\1 \2 -/g' input.txt

So I could find the duplicated one and replace it with "-", but I failed to remove the whole chunk.
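To illustrate (on just the ending-point line of the first chunk), the substitution only marks the duplicate on that one line; the matching columns of the other three lines are untouched:

```shell
# Reproduce the sed attempt on the third line of the first chunk.
# The duplicated ending point is replaced by "-", but the corresponding
# columns on the path/start/voltage lines are left in place.
printf 'dp_ctl_synd_out_low[6]  dp_ctl_synd_out_low[6]  dp_ctl_synd_out_low[2]\n' |
    sed -r 's/(dp_ctl_synd_out_low\[[0-9]\])(.+)(\1)/\1 \2 -/g'
```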

If my post is not appropriate, I will remove it soon.

Best,

Jaeyoung

Save as chunks.pl
Run as perl chunks.pl chunks.data

#!/usr/bin/perl
#
use strict;
use warnings;

my @chunks;     # four parallel arrays: one per line of the current chunk row
my $lines = 0;  # number of lines of the current chunk row read so far

while (<>) {
    my @parts = split;
    push @{$chunks[$lines++]}, @parts;
    if ($lines == 4) {
        # The third line (index 2) holds the ending points; remember the
        # column index of the first occurrence of each distinct value.
        my %seen;
        my $count = 0;
        my @keep;
        for my $i (@{$chunks[2]}) {
            !$seen{$i}++ and push @keep, $count;
            ++$count;
        }

        # Print each of the four lines, restricted to the kept columns.
        for my $i (@chunks) {
            my @returns;
            for my $j (@keep) {
                push @returns, $i->[$j];
            }
            print "@returns\n";
        }

        clean();
    }
}

# Reset state for the next row of chunks.
sub clean {
    @chunks = ();
    $lines = 0;
}

Output:

path_sparc_ffu_dp_out_1885 path_sparc_ffu_dp_out_3115
R_1545/Q R_1545/Q
dp_ctl_synd_out_low[6] dp_ctl_synd_out_low[2]
0.926208 0.905082
path_sparc_ffu_dp_out_699 path_sparc_ffu_dp_out_712
R_1053/Q R_1053/Q
dp_ctl_synd_out_low[2] dp_ctl_synd_out_low[6]
0.945436 0.945436
path_sparc_ffu_dp_in_686
frf_dp_data[42]
dp_ctl_synd_out_high[6]
0.812538

Maybe something more like:

awk '
{	for(i = 1; i <= NF; i++) {
		f[NR % 4, i] = $i
	}
}
!(NR % 4) {
	ocnt = 0
	for(i = 1; i <= NF; i++)
		if(!(f[3, i] in of)) {
			of[f[3, i]]
			spot[++ocnt] = i
		}
	for(i = 1; i <= ocnt; i++)
		for(j = 1; j <= 4; j++) {
			ol[j] = ol[j] f[j % 4, spot[i]] ((i == ocnt) ? "" : "\t")
		}
	for(i = 1; i <= 4; i++) {
		print ol[i]
		delete ol[i]
	}
	for(i in of)
		delete of[i]
}' input.txt

would work better for you. This uses a single tab character as the output field separator instead of a seemingly random number of spaces (but you can easily change it to a fixed number of spaces if you want to).
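For example, here is a minimal sketch of the same "append separator except after the last field" idiom, with two spaces as the separator instead of a tab; only the separator string changes:

```shell
# Sketch of the join idiom used in the script above:
# concatenate fields, appending the separator after every field
# except the last one.
printf 'a b c\n' | awk '{
	sep = "  "	# use "\t" here instead for tab-separated output
	out = ""
	for (i = 1; i <= NF; i++)
		out = out $i ((i == NF) ? "" : sep)
	print out	# prints "a  b  c"
}'
```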

From the sed command you're using, I assume that you're not running this on a Solaris system, but if someone else wants to try the above code on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.


Thank you, Don.

Your code works perfectly for me. I will be more careful next time before I post.

Jaeyoung