Need a command to compare 4 files

I have four files, and I need to compare them against each other.
I know the "sdiff" and "comm" commands, but they only compare two files at a time. If I use sdiff, I have to compare each file against every other file, which makes the script much longer.

Please suggest a command that can compare more than two files.

Hi.

Perhaps:

NAME
       diff3 - compare three files line by line

could be generalized to handle n files.

However, you'd need to look over the source.

You haven't said much about what you'd like to see. Do you want simply an indication that the files are the same or different? Do you need to know details on a line-by-line basis? Can the file content be reordered?

One could look at the lengths of the files: if they differ in length, they cannot be identical. If the lengths are the same, you could use a checksum utility, say md5sum, to calculate a signature for each file, and then compare the signatures.
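A minimal sketch of that checksum idea (skipping the separate length check, since matching checksums already imply matching content for practical purposes; it assumes GNU md5sum and uses placeholder names file1 .. file4):

#!/usr/bin/env bash
# Sketch: flag files whose contents are byte-for-byte identical.
# md5sum prints "<checksum>  <filename>"; identical files share a checksum.
md5sum file1 file2 file3 file4 |
sort |
awk 'prev == $1 { print "same content:", prevfile, "and", $2 }
     { prev = $1; prevfile = $2 }'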

If you want detailed attempts from responders here, then post some representative samples of 4 files, along with how you think the output should be displayed.

Best wishes ... cheers, drl

Actually, what I need is that no line should appear in more than one of the four files.
e.g.
cat file1
neha
pg

cat file2
aru
arin

cat file3
sample
apple

cat file4
apple
neha
juice

Now one line of file4 ("apple") is the same as a line in file3, and another line of file4 ("neha") is the same as a line in file1. I need the command to report that those lines of file4 also appear in file3 and file1, because I want every file to be unique.

Thanks

Now the request is clear.

Below is the "not-clever" way (-Fx makes grep match complete lines literally, which is what you want here). If you want something shorter, wrap the pairs in loops; a sketch follows the commands.

grep -Fxf fileA fileB; echo "----------"
grep -Fxf fileA fileC; echo "----------"
grep -Fxf fileA fileD; echo "----------"
grep -Fxf fileB fileC; echo "----------"
grep -Fxf fileB fileD; echo "----------"
grep -Fxf fileC fileD; echo "----------"
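
A hedged sketch of the looped version (fileA .. fileD are placeholder names; the two loops visit each pair of files exactly once):

#!/usr/bin/env bash
# Sketch: compare every pair of files once and print the lines they share.
files=(fileA fileB fileC fileD)

for (( i = 0; i < ${#files[@]}; i++ ))
do
  for (( j = i + 1; j < ${#files[@]}; j++ ))
  do
    printf "== %s vs %s ==\n" "${files[i]}" "${files[j]}"
    grep -Fxf "${files[i]}" "${files[j]}"
    echo "----------"
  done
done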

Hi.

Here is a solution from radoulov in the thread Get common lines from multiple files, applied to your data in files "file1" .. "file4":

#!/usr/bin/env bash

# @(#) s3	Demonstrate identification of common lines.
# Adapted from: radoulov
# http://www.unix.com/shell-programming-scripting/
# 140390-get-common-lines-multiple-files.html
# Post 8

pe() { for i;do printf "%s" "$i";done; printf "\n"; }   # print arguments, then a newline
pl() { pe;pe "-----" ;pe "$*"; }                        # print a labelled separator block

pl " Results of finding common lines with awk."
pe

awk 'END {
  for (R in rec) {
    # rec[R] is a "/"-separated list of the files that contain line R
    n = split(rec[R], t, "/")
    if (n > 1)
      dup[n] = dup[n] ? dup[n] RS sprintf("\t%-20s -->\t%s", rec[R], R) : \
        sprintf("\t%-20s -->\t%s", rec[R], R)
    }
  # report the duplicated lines, grouped by how many files share them
  for (D in dup) {
    printf "records found in %d files:\n\n", D
    printf "%s\n\n", dup[D]
    }
  }
{
  # remember which file(s) each input line has been seen in
  rec[$0] = rec[$0] ? rec[$0] "/" FILENAME : FILENAME
  }' file?

exit 0

producing:

% ./s3

-----
 Results of finding common lines with awk.

records found in 2 files:

	file3/file4          -->	apple
	file1/file4          -->	neha

cheers, drl

Thanks drl,

It's working fine when my files are named file1, file2, and so on.
Can you tell me how to change the code so that I can give the names of my files as arguments?

Thanks once again,
priyanka

Hi.

Near the end there is a string

file?

that gets expanded by the shell to every name in the current directory that is file followed by exactly one more character, like file1, file2, fileA, etc.

So you can replace file? with your list of files. You can try it with some test filenames first, of course.
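
An untested sketch of that change, with the reporting simplified, where "$@" stands for whatever file names you pass on the command line:

#!/usr/bin/env bash

# @(#) s4	Sketch: like s3, but the files to compare are given as arguments,
#       	e.g.  ./s4 fileA fileB fileC fileD

awk '{
  # remember which file(s) each input line has been seen in
  rec[$0] = rec[$0] ? rec[$0] "/" FILENAME : FILENAME
  }
END {
  for (R in rec) {
    n = split(rec[R], t, "/")
    if (n > 1)
      printf "\t%-20s -->\t%s\n", rec[R], R
    }
  }' "$@"

exit 0

Here "$@" expands to all of the script's command-line arguments, so the file list is no longer hard-coded.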

Best wishes ... cheers, drl