Count of distinct values of all fields in a pipe-delimited file

I am a beginner at scripting, so please help me out with this.

How do I create a script that gives a count of the distinct values in every field of a pipe-delimited file? I have 20 different files, each with multiple columns. I need a generic script where I either pass the number of columns as a parameter, or the script works out the number of columns on its own from the delimiter. The script should produce output like the example below.

Sample data

Field1|Field2|Field3|Field4
AAA|BBB|CCC|DDD
111|222|333|777
AAA|EEE|ZZZ|EEE
111|555|333|444
AAA|EEE|CCC|DDD
111|222|555|444

For the above file, the result I am looking for would be:

Field1
AAA(3)
111(3)

Field2
BBB(1)
222(2)
EEE(2)
555(1)

Field3
CCC(2)
333(2)
ZZZ(1)
555(1)

Field4
DDD(2)
777(1)
EEE(1)
444(2)

Thank you in advance for your assistance.

Probably not too efficient, especially for large files, but it is straightforward:

awk -F '|' '
        NR == 1 {
                for( i = 1; i <= NF; i++ )
                        hdr[i] = $(i);          # remember the header names for the report
                next;                           # do not count the header row as data
        }
        {
                for( i = 1; i <= NF; i++ )
                {
                        count[i " " $(i)]++;    # count by field number and field value
                        uniq[$(i)] = 1;         # save a list of unique strings
                }
                if( NF > fields )
                        fields = NF;            # in case a variable number in file; capture max
        }
        END {
                for( i = 1; i <= fields; i++ )
                {
                        printf( "%s\n", (i in hdr) ? hdr[i] : ("field " i) );
                        for( x in uniq )
                                if( count[i " " x] )
                                        printf( "%s(%d)\n", x, count[i " " x] );  # print by field and value
                        printf( "\n" );
                }
        }
' <input-filename
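
Since the question mentions 20 files, a small wrapper can run the same awk over each of them. This is just a sketch, assuming the awk program above (the part between the single quotes) has been saved to a file named countdistinct.awk; the script and file names here are placeholders, not anything standard:

#!/bin/sh
# Run the distinct-value report for every file named on the command line
# and print a banner so the per-file outputs are easy to tell apart.
# Hypothetical usage:  ./countall.sh file1.dat file2.dat ... file20.dat
for f in "$@"
do
        echo "==== $f ===="
        awk -F '|' -f countdistinct.awk "$f"
done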
Another way, as a one-liner (with the output sorted by field number):

awk -F \| '{for (i=1;i<=NF;i++) a[i FS $i]++} END {for (i in a) print i, a[i] | "sort -n"}' infile

which for the sample data produces:

1|111 3
1|AAA 3
2|222 2
2|555 1
2|BBB 1
2|EEE 2
3|333 2
3|555 1
3|CCC 2
3|ZZZ 1
4|444 2
4|777 1
4|DDD 2
4|EEE 1
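
That output is keyed by field number rather than grouped under headings, so if the layout from the original post is wanted, the sorted counts can be reshaped with a second awk pass. A rough sketch: it simply labels each group Field<number>, since the one-liner does not carry the real header names through, and it assumes the data values themselves contain no spaces.

awk -F \| '{for (i=1;i<=NF;i++) a[i FS $i]++} END {for (i in a) print i, a[i] | "sort -n"}' infile |
awk -F '[| ]' '
        $1 != prev {                    # start of a new field number
                if( NR > 1 ) print ""
                print "Field" $1
                prev = $1
        }
        { print $2 "(" $3 ")" }         # value(count)
'

Within each group the values come out in the order produced by sort -n, not in the order they first appear in the file.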