Aggregation of Huge files

Hi Friends !!

I am facing a hash total issue while performing hash totalling over a set of files of huge volume:

Command used:

tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f'

The file is pipe delimited, and the 156th column is the one used for hash totalling.

File 1:

Record count is 254368

Absolute Sum in DB is 23840949436509.39

Absolute Sum using above script is 23840949436510.18750

File 2:

Record count is 2580100

Absolute Sum in DB is 7305817400402102.5619993295

Absolute Sum using above script is 7305817400403184.00000

Kindly help me resolve this issue, and please suggest a better way, if any, to do absolute hash totalling for huge volumes.

Thanks in advance,
Ravi

awk uses double-precision floating point numbers, which do not have infinite precision -- they carry at best 15 to 16 significant decimal digits. If you want arbitrary precision like a database will give you, try the bc utility.
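For what it's worth, you can see the effect directly. Taking one of the totals quoted above purely for illustration:

echo '7305817400402102.5619993295 + 0' | nawk '{ printf "%.10f\n", $1 + $3 }'
echo '7305817400402102.5619993295 + 0' | bc

The nawk line typically prints something like 7305817400402103.0000000000, because the value gets squeezed into a double and the fractional digits are lost, while bc prints 7305817400402102.5619993295 unchanged.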

Hi Corona..

Can you help me with the bc utility for this scenario? I am just new to these functions!

Regards,
Ravi

Assuming that you're using the tail in the command line:

tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f'

to discard the first two lines of your input file because they contain headers that you don't want included in your output, and assuming that the '%.2f' didn't really appear in the command line you executed (since that would be a syntax error for nawk), that you don't really want the output rounded to five digits after the decimal point (as would be done in your command line by OFMT='%.5f'), and that field #156 in the other lines of your input file contains a double-quoted string of digits with no more than one decimal point and an optional leading minus sign (which you want to be ignored), you could try something like:

nawk -F'|' -v dqANDms='["-]' '
BEGIN { f=156
        printf("0")
}       
NR > 2 {gsub(dqANDms, "", $f) 
        printf("+%s", $f)
}
END {   printf("\n")
}' <File_Name> | bc
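One thing worth noting about this approach: everything the nawk prints is handed to bc as a single expression on one line. If you're curious, you can peek at it by swapping the final | bc for a cut (purely for inspection; <File_Name> is still your own file):

nawk -F'|' -v dqANDms='["-]' '
BEGIN  { f = 156; printf("0") }
NR > 2 { gsub(dqANDms, "", $f); printf("+%s", $f) }
END    { printf("\n") }' <File_Name> | cut -c1-200

which shows the start of a line of the form 0+2380.26+307.03+... (values made up here), with one +term per data record, all on line 1.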

That depends on what your scenario is, which I don't know yet; all I have is a program which doesn't do what you want...

Hi Don !

Thanks for the workaround solution. It is working fine for small files, but when I execute it on large files I face the below error:

 
0705-001: bundling space exceeded on line 1 stdin

Kindly help me in this regard.

Regards,
Ravichander

Making the assumption that that error code is coming from bc, you could try:

awk -F'|' -v dqANDms='["-]' '
BEGIN { f=156
        printf("s=0\n")
}
NR > 2 {gsub(dqANDms, "", $f)
        printf("s+=%s\n",  $f)
}
END {   printf("s\n")
}' file | bc
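The only real change here is how the work is handed to bc: instead of one giant 0+a+b+c+... expression on a single input line, the awk now writes a small bc program with one short statement per record, so no input line grows with the size of the file. With made-up values, the stream piped into bc looks like:

s=0
s+=2380.26
s+=307.03
s+=668398.44
s

where each s+= line comes from one data record and the final s prints the accumulated total.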

Hi Don !

Thanks for your quick help! It works fine for some files but not for all. Here is the analysis I have done for 3 different files:

The hash totals computed by the script and given in the header are provided below:

File 1:

Script - 23840949434129.13
Header - 23840949436509.39

File 2:

Script - 7305817379402102.5619993295
Header - 7305817400402102.5619993295

File 3:

Script - 23558431740937.266
Header - 23558431741074.536

Surprised to see that it works fine for the higher precision, but not with the others!

Kindly share your thoughts !

Regards,
Ravichander

My thoughts mirror those stated by Corona688 before... You have shown us an awk script that doesn't work. You have not shown us any input nor even given us a verbal description of the format of that input. I guessed at what your input looks like based on your non-working awk script. Obviously I guessed wrong.

If your original program wasn't able to correctly identify the data to be processed, the code I provided won't either. If you can't show us some sample input so we can figure out how to get what you want out of field 156 in your input, and show us at least one line of input that the script processes incorrectly, there isn't much we can do to guess at what might be going wrong when processing your huge data files.

Hi Don !

 
TR|XXX|2010-11-30|254367|23840949436509.39
XXX|ACTUAL01|2013|365|XXX|2013-11-30|XXX|XXXR|XX|TR|BAXXXNLBB|XXX9012|23000000|| | |950|289278|999|110|7245| ||| |||||| |||400204828| | |CE SCBUK|66482|||||664| ||800|||||||| || |||||||||||||||||| |110|7245|||| || | |HKD| |HKD|0||0||||||||||||||||||||||||TA|||289278|7245|664|800| |110|950|EEEEEE|7245|664|800| |950|EEEEEE|7245|664|800| |950|HKD|||0|-2380.26|||||||-307.03|-307.03|-307.03|-307.03|-307.03|-307.03|-2380.26|-2380.26|-2380.26|-2380.26|-2380.26|-2380.26||-307.02|-307.02|-307.02|-307.02|-307.02|-307.02|||||||||||||||||||||||||||||||||||||||||||||||||PCP|PTH| | | |L1|||||NB| |400204828|400454430| | | ||| || | ||| | | || | ||||| | ||||I|||| |N|PT| | | |||||||||||||||||||||||||||

This is the header and first record, and the total number of records is 254368.

Any further details needed, Don?

In your original (non-working) code:

tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f'

why did you bother adding code to remove all of the " characters from your input when there aren't any double-quote characters in your input file? Please explain in English what the format is for this file and please explain what the format is for the numbers that will be processed by this code. Do some fields sometimes have double quoted strings containing pipe symbols ( | )?

Please explain what algorithm is supposed to be used to compute the result that is printed at the end of processing.

In my last message I asked you to show us at least one line of input that the script processes incorrectly.

Is this single data line processed incorrectly? (Or is the correct result from processing this line 2380.26?)

I assume that you're using a Solaris system. What is the length (in bytes) of the longest line in your 254368 line file?
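If it helps, one portable way to check that (a plain nawk one-liner; <File_Name> is your own file) is:

nawk '{ if (length($0) > max) max = length($0) } END { print max }' <File_Name>

which prints the length in characters of the longest line (not counting the terminating newline); in a single-byte locale that is also the byte count.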

Hi Don !

Below are the requirements on my side for the unix scripting:

  1. The number of records may vary from 200000 to 4500000.
  2. The 156th column needs to be summed; it has a decimal range of (38,10).
  3. The file will be pipe-delimited. For now, double quotes won't appear, though they may in future; so currently we can treat it as purely pipe-delimited.
  4. While performing aggregation, we need to take the absolute sum of the 156th column.
  5. A maximum precision of (38,10) is expected; normally, the 156th column comes in as (24,10).

If the code I have used/provided is erroneous or does not suit the requirement, kindly help me arrive at a command that performs the above-stated requirements.

I am finding it quite difficult to work out the reason that is causing this difference!

Regards,
Ravichander

You didn't answer my question about the length of the longest line in your file! If you have any lines longer than 2048 bytes (including the terminating newline character), nawk may fail.

  1. The number of records doesn't matter for this script.
  2. The code that you provided did NOT calculate the sum of the numbers in the 156th field; it calculated the sum of the absolute values of the numbers in the 156th field!
  3. The quote removal slows down the processing, but doesn't affect the results unless there is a pipe symbol ( | ) between quotes that is not to be treated as a field separator. If there is any possibility that a | between double quotes ( " ) should not be treated as a field separator, this awk script will not work! If there will never be a | between " characters and there will never be " characters in the 156th field, the script should ignore " characters completely.
  4. The absolute value of the sum is not the same as the sum of the absolute values!!! You need to clearly describe the calculation to be performed!
  5. Using bc to calculate the sum of a set of numbers can easily handle sums with a hundred digits before and after the radix character with no loss of precision.

The script assumes that the contents of field 156 will be a string of digits with an optional leading minus sign ( - ) and no more than one decimal point character ( . ). If there is a decimal point character and a minus sign, the minus sign must still be the 1st character in the string. If the contents of field 156 contains more than one minus sign, more than one decimal point, or contains any other non-numeric characters, the results are unspecified.

When extracting data from your database, are you absolutely sure that you are getting the records and the sum that you have in your header in a single transaction? If you are getting the data in one transaction and the sum in another transaction, changes to your database between those two transactions could easily cause the differences you are seeing.


Hi Don !

Thanks for your valuable time and analysis! I have simplified the requirement:

I have extracted the amount column alone into a separate file, and the data pattern of that file will be like the one shown below:

 
18781426.84
-2010820
-668398.44
-285369
-253957.7
-272.88
-2732931.94

The maximum amount value in the file is:

 
-90005467876809.567342220989

Now, I need to take the absolute value of each amount and then sum them all up. The total number of records will be around 7 million.

Kindly help me with a code to fulfill the above requirement.

Thanks
Ravichander

I have supplied code that meets all of your requirements. You claim that my code doesn't work but have given absolutely no evidence that it does not work.

Please create a small database (with a dozen records instead of 7 million records) and show us the actual values in ALL of the records, the results produced by my code, and the results produced by your database. Without data that we can verify, there is nothing more we can do for you.
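For completeness, a minimal sketch of the same per-line bc approach, adapted to the simplified single-column extract described a few posts above (assuming one amount per line, no header line, and using amounts.txt as a placeholder file name), would be:

nawk '
BEGIN { print "s=0" }
NF    { sub(/^-/, "", $1); printf("s+=%s\n", $1) }
END   { print "s" }' amounts.txt | bc

Stripping a leading minus sign before each value is appended makes this the sum of the absolute values, and because every record becomes its own short bc statement, the earlier bundling space error should not come back even with around 7 million lines.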