Concatenate and sort to remove duplicates

Following is the input. The 1st and 3rd blocks are the same (a block starts with '*' and ends before the next blank line), and the 2nd and 4th blocks are also the same:

cat <file>
* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

Expected Output:

* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

I have shown only four blocks of the file; there are many more entries with duplicates.

I thought of joining each block's lines and running them through the uniq command, but while some blocks have two lines, others have three or more.

cat <file> | paste -d' ' - - | uniq

(This does not work when a block has more than 2 lines.)
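For the record, if every block really had exactly two lines plus the blank separator, the paste idea could be made to work; a sketch (the file name and the three-line grouping are assumptions):

```shell
# Merge every 3 input lines (2 block lines + blank separator) into one
# tab-delimited line, deduplicate, then split back on the tabs.
# Only works when every block has exactly two lines.
paste -d '\t\t' - - - < file | sort -u | tr '\t' '\n'
```

But, as stated, this breaks as soon as one block has three or more lines.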

Could someone tell how to achieve the desired output?

Good idea, but, as you said, cat and paste aren't flexible enough here. How about this sed/sort approach that collects lines up to an empty one, replaces each <newline> with a token (here: <CR> = \r), sorts the output, and undoes the replacement? Be aware that uniq, too, needs sorted input to work correctly.

sed -n '/^ *$/ !{H; $!b;}; {x; s/^\n//; s/\n/\r/g; s/$/\r/p;}; ' file3  | sort -u | sed 's/\r/\n/g'
* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

The original order is lost, though, which may not be a problem because the duplicate entries seem randomly distributed. To get something like an order by date, you could try (given your sort version provides the -M month-name option)

sort -uM -k5,5 -k3,4

instead. Note that -M then applies to every key; for a stricter date order, keep sort -u for the deduplication and pipe through a second sort with per-key modifiers, e.g. sort -k5,5n -k3,3M -k4,4n (year numerically, then month name, then day of month).
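As a quick self-contained check of the basic pipeline (using a hypothetical three-block input; not the date-ordered variant):

```shell
# Three blocks, the first and third identical; the pipeline joins each
# block into one line ('\r' marks the old newlines), dedupes with
# sort -u, and splits the lines back into blocks.
printf '* b\n- 2\n\n* a\n- 1\n\n* b\n- 2\n' |
sed -n '/^ *$/ !{H; $!b;}; {x; s/^\n//; s/\n/\r/g; s/$/\r/p;}' |
sort -u |
sed 's/\r/\n/g'
```

which prints the two unique blocks (sorted, so the "* a" block first).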

How about using sed instead of paste to pre-process the file? We would first turn each block into a single line, then transform the lines back into blocks after processing them through uniq. Here is a naive attempt which might need refinement:

Transform the blocks to lines:
(edited - see RudiC's post, which is the same idea: replace all newline characters inside a block with a temporary replacement character to get one line. RudiC used "\r", but any other string that does not occur in the data works as well.)

or, even simpler, using fmt ("1000" must be higher than the number of characters the longest resulting line could grow to; replace it with a higher number if it does not suffice). Notice, though, that transforming this back into blocks takes a bit more effort because no replacement character marks where the newlines were:

fmt -1000 /path/to/file > newfile

Transform the lines back to blocks (enter the ^M literally by typing <CTRL-V> followed by <ENTER>):

sed 's/<replacement-for-newline>/^M/g' /path/to/newfile > file
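For the fmt variant, where no replacement character survives, one fragile way to split the joined lines back up is to exploit the fact that every continuation line starts with "- "; a sketch (it breaks if " - " ever occurs inside a line, and the file path is a placeholder):

```shell
# Join each block into one long line, deduplicate, then re-insert a
# newline before each "- " item and a blank line after each block.
# Assumes " - " never occurs inside the text of a line.
fmt -1000 /path/to/file |             # each block becomes one long line
grep -v '^ *$' |                      # drop the now-redundant blank separators
sort -u |                             # deduplicate the joined blocks
sed -e 's/ - /\n- /g' -e 's/$/\n/'    # split back, blank line after each block
```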

I hope this helps.

bakunin


I don't understand. Why not make it easy?

awk '!t[$0]++' file


I think I understood now: the records are multi-line blocks, so the lines inside a block have to be joined first

sed -rz 's/\n([^\n])/\1/g' file
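Putting the two steps together, a sketch of a full pipeline (keeping a '\r' marker so the join can be undone; assumes '\r' never occurs in the data, and uses GNU sed's -z option):

```shell
# 1) one line per block, '\r' marks the old in-block newlines
# 2) keep the first occurrence of each block
# 3) restore the newlines and put a blank line between blocks
sed -z 's/\n\([^\n]\)/\r\1/g; s/\n\r/\n/g' file |
awk '!t[$0]++' |
sed 's/\r/\n/g; s/$/\n/'
```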

I partially agree with nezabudka: awk is the way to go here, but we need to make each block a record, not each line. By setting RS to an empty string, we tell awk that records are separated by one or more blank lines. Given this, the following should work:

awk '
BEGIN { RS = "" }
!($0 in seen) {
	seen[$0]
	printf("%s%s\n", (NR == 1) ? "" : "\n", $0)
}' file

which prints the first occurrence of each record found in the file named file. Note that the above code prints no empty line before the 1st output record or after the last output record. The code could be simplified if you always want to print an empty line after each output record.
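That simplification could look like this (a sketch using awk's ORS variable; it prints a blank line after every record, including the last):

```shell
# Paragraph mode (RS = "") makes each blank-line-separated block one
# record; ORS appends a blank line after every printed record.
awk 'BEGIN { RS = ""; ORS = "\n\n" } !seen[$0]++' file
```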

If you want to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.