Compare multiple files, identify common records and combine unique values into one file

Good morning all,

I have a problem that is one step beyond a standard awk compare.

I would like to compare three files, each with several thousand records, against a fourth file. Every row in all of the files contains one value that is identical in format, and one value in each row that may be duplicated in the three files vis-à-vis the fourth.

What I want to see is:

1) The number of records that are unique to each of the three files (not present in any of the others);
2) The number of records that are not unique in each of the three;
3) The number of records in the fourth file that are NOT in any of the other three;
4) An output file containing the full row of each unique record across all the files.

These are all text files.
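For item 4, a two-pass awk approach can work: the first pass counts how many files each key appears in, and the second pass prints the full rows whose key occurs in only one file. This is a minimal sketch under stated assumptions — the key is assumed to be field 1, the files are called file1 through file4, and the sample data is made up; adjust the field and filenames to match your records. The per-file counts in items 1–3 can be derived from the same `seen` array.

```shell
# Hypothetical sample data; the shared key is assumed to be field 1
printf 'a 1\nb 2\nc 3\n' > file1
printf 'b 2\nd 4\n'      > file2
printf 'c 3\ne 5\n'      > file3
printf 'a 9\nf 6\n'      > file4

awk '
    # Pass 1: seen[key] = number of files that contain this key.
    # The mark array stops a key being counted twice within one file.
    pass == 1 {
        if (!(($1, FILENAME) in mark)) { mark[$1, FILENAME]; seen[$1]++ }
        next
    }
    # Pass 2: print the full row when the key appears in exactly one file
    seen[$1] == 1
' pass=1 file1 file2 file3 file4 \
  pass=2 file1 file2 file3 file4 > unique.txt

cat unique.txt
```

With the sample data above, `unique.txt` ends up holding the rows for keys d, e, and f, since a, b, and c each appear in two files. The `pass=1` / `pass=2` assignments between filenames on the command line are standard awk variable assignments, which is what lets one script make two passes over the same file list.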

What have you tried so far?
Can you provide some example input and show what you would like the output to look like?