Hi All,
I have a CSV file with 30 comma-separated columns. I want to write a count of all unique rows to a flat file. The CSV file has around 5000 rows.
The first column is a time stamp, and I need to exclude it when counting unique rows.
Thanks,
Ravi
To give a definitive solution, please post a sample of the data.
It is important to see the exact format of your time stamp.
To get the unique rows:
sort -u File
Thanks for the reply
Here is the Sample data:
[10/17/11 20:48:45:213 EDT] , String 1 , String 2 , String 3, .....,String 29
[10/17/11 20:48:46:223 EDT] , String 1 , String 2 , String 3, .....,String 29
....
.
......
....
Row 5000
$
$
$ cat f35
[10/17/11 20:48:45:213 EDT] , String 1 , String 2 , String 3, .....,String 29
[10/17/11 20:48:46:223 EDT] , String 1 , String 2 , String 3, .....,String 29
[10/17/11 20:48:46:224 EDT] , String 1a, String 2a, String 3a,.....,String 29a
$
$ cut -d, -f2- f35 | sort -u | wc -l
2
$
$
tyler_durden
Thanks for the response, Tyler. That command gives only the number of unique rows. From the above example, the output should be:
2 String 1, String 2, .....
1 String 1a, String 2a, .....
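To get a count next to each distinct row (rather than a single total), one option is to replace `sort -u | wc -l` with `sort | uniq -c`, which prefixes every unique line with the number of times it occurred. A sketch using a small sample file named `f35` (the filename and the three-column sample are just illustrative stand-ins for the real 30-column data):

```shell
# Build a small sample file resembling the poster's data (illustrative only).
cat > f35 <<'EOF'
[10/17/11 20:48:45:213 EDT] , String 1 , String 2 , String 3
[10/17/11 20:48:46:223 EDT] , String 1 , String 2 , String 3
[10/17/11 20:48:46:224 EDT] , String 1a, String 2a, String 3a
EOF

# Drop the time-stamp column (field 1), then count occurrences of each
# remaining unique row. uniq -c needs sorted input to group duplicates.
cut -d, -f2- f35 | sort | uniq -c
```

With the sample above this prints a count of 2 for the repeated row and 1 for the other; redirect the pipeline to a file (`> counts.txt`) to get the flat-file output the original post asked for.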