Would it be fair to say that you want to split the original files into N new files, one file for each unique line (pattern) of the original? If so, you do realize that each file will contain one or more copies of the same line (pattern). Perhaps:
sort file | uniq -c
might be useful? However, if you have to split the files:
sort -u "${FILE}" | while read -r pattern; do
    # -F: treat the line as a fixed string, -x: match the whole line exactly
    grep -Fx "${pattern}" "${FILE}" > "${pattern}.csv"
done
(untested)
which seems clunky since the file is re-read once for each pattern. How about:
sort "${FILE}" | uniq -c | while read -r N pattern; do
    # uniq -c prefixes each unique line with its count,
    # so just write the line back out N times
    while [[ 0 -lt ${N} ]]; do
        echo "${pattern}"
        (( N = N - 1 ))
    done > "${pattern}.csv"
done
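If awk is available, the same thing can be done in a single pass over the file. A rough sketch (also untested, and assuming each line is safe to use as a file name, i.e. contains no "/"):

awk '{ print > ($0 ".csv") }' "${FILE}"

Note that some awk implementations limit the number of simultaneously open files, so an input with very many unique lines may need an explicit close().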
Kekanap, have you solved it?
What I didn't get is where you are searching for the patterns: in another file, or in several other files? The grep output will be different!
If you are searching for exactly the same lines in two different files (using the full lines of file1 as the patterns), it might be much easier to just merge the two files and look for duplicated lines. That's a one-liner.
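For example, assuming the two files are simply named file1 and file2 and neither contains duplicate lines of its own:

sort file1 file2 | uniq -d

prints every line that occurs more than once in the combined input, i.e. the lines common to both files.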