Is there a way I can extract my data faster? My data is a 1.2 GB text file with 8 million rows and 38 columns/fields, so it is quite large.
How can I optimize the data extraction using Perl? I'm writing a script to filter out only the information I need. Are there any modules available, or any other way to speed up the extraction? Thanks in advance.
The fastest way would be to use a C program that reads each line into a single reusable buffer, decides whether the line is wanted without any memory allocation/deallocation, and then prints the required fields, again without allocating or freeing memory.
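For illustration, here is a minimal sketch of that idea, under some assumptions that are not in the original post: the file is tab-delimited, a 64 KB line buffer is large enough, and the wanted columns are the hypothetical indices 0, 5 and 17 (adjust DELIM, LINE_MAX_LEN and KEEP for the real layout). Each line is read into one static buffer and scanned in place, so nothing is allocated or freed per line:

```c
#include <stdio.h>
#include <string.h>

#define LINE_MAX_LEN 65536      /* assumed maximum line length            */
#define DELIM        '\t'       /* assumed field separator                */

/* hypothetical columns to keep, in ascending order */
static const int KEEP[] = {0, 5, 17};
static const int NKEEP  = sizeof(KEEP) / sizeof(KEEP[0]);

int main(void)
{
    static char line[LINE_MAX_LEN];   /* one buffer, reused for every line */

    while (fgets(line, sizeof line, stdin) != NULL) {
        /* strip the trailing newline so the last field prints cleanly */
        line[strcspn(line, "\r\n")] = '\0';

        int col = 0, k = 0;
        char *p = line;

        /* walk the buffer in place: no copies, no malloc/free per line */
        while (k < NKEEP) {
            char *end = strchr(p, DELIM);
            size_t len = end ? (size_t)(end - p) : strlen(p);

            if (col == KEEP[k]) {
                if (k > 0)
                    putchar(DELIM);
                fwrite(p, 1, len, stdout);
                k++;
            }
            if (end == NULL)
                break;            /* no more fields on this line */
            p = end + 1;
            col++;
        }
        putchar('\n');
    }
    return 0;
}
```

Compile with something like cc -O2 filter.c -o filter and run it as ./filter < data.txt > subset.txt; with no per-line allocation the job should be close to I/O-bound on a 1.2 GB file.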
Isn't there any size restriction on the program buffer or the kernel buffer?
If it were feasible for a single buffer to hold the entire contents, whatever the size, then a single flush could do the whole job (this is purely speculative).