Read/Write a fairly large amount of data to a file as fast as possible

Hi,

I'm trying to figure out the best solution to the following problem, and I'm not
yet as experienced as you folks are. :slight_smile:

Basically I have to read a fairly large file, composed of "messages", in order
to display all of them through a user interface (made with Qt).

The messages that I write into the file come all at once from a socket, so
in order to write them quickly without losing any of them I plan to do the following:

  • Create a list of preallocated pages (3-4 by default, but the list grows if needed)
  • Write the data that comes from the socket into the preallocated buffer
  • Once a page is full, schedule a write with aio_write (AIO, asynchronous I/O)
  • On the completion callback, schedule another write if any page is full.
    And so on..

This is the best I could come up with for the writing part, but if any of you have
a better idea, please let me know.
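
To make it concrete, here is a rough, untested sketch of the write path I have in mind; the page size, the struct layout and the completion handling are placeholders, not a finished design (on Linux this would need -lrt):

```c
/* Rough sketch of the write path: fill preallocated pages from the socket,
 * queue a full page with aio_write(), reuse it once the write completes.
 * Page size and bookkeeping are placeholders, not a tested design.        */
#include <aio.h>
#include <errno.h>
#include <string.h>

#define PAGE_SZ (64 * 1024)

struct page {
    char         buf[PAGE_SZ];
    size_t       used;          /* bytes filled from the socket so far */
    struct aiocb cb;
    int          busy;          /* 1 while an aio_write() is in flight */
};

/* Queue one full page for asynchronous writing at the given file offset. */
static int flush_page(int fd, struct page *p, off_t offset)
{
    memset(&p->cb, 0, sizeof(p->cb));
    p->cb.aio_fildes = fd;
    p->cb.aio_buf    = p->buf;
    p->cb.aio_nbytes = p->used;
    p->cb.aio_offset = offset;
    p->busy = 1;
    return aio_write(&p->cb);   /* returns immediately; the I/O proceeds
                                   in the background                      */
}

/* Poll a page: once its write has completed, it can be refilled. */
static int page_reusable(struct page *p)
{
    if (p->busy && aio_error(&p->cb) != EINPROGRESS) {
        aio_return(&p->cb);     /* collect the result exactly once */
        p->busy = 0;
        p->used = 0;
    }
    return !p->busy;
}
```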

Now the problem comes when I have to read the file back at a later time and display
all the messages, in order to analyze them as fast as possible.

I first thought of mmap'ing the file in order to copy the data only once, from the file
to the kernel page cache (if I understood correctly how mmap works internally), and then
accessing it directly from the application. But I'm not sure this can be done, or whether
it is convenient, as the file might be pretty big (2-3 gigabytes, although I'm not sure
about the magnitude). Besides, the kernel could evict the pages and many page faults
could occur. So I discarded this idea.

I also thought about doing the opposite of what I do for writing, but I'm not sure it is a good idea.

The main problem is that I have to decode the messages before displaying them, as they
are of different types and of variable length. So reading the whole file at once and then
decoding the messages into another memory location seems time consuming to me, as the data
moves through four places (disk -> kernel -> user space -> user space after decoding).
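
Just to show what I mean by decoding, the loop would look roughly like the sketch below; the framing (a one-byte type plus a 4-byte length) is invented purely for the example, and the comment marks the extra copy I would like to avoid:

```c
/* Illustration only: walk variable-length messages inside a buffer.
 * The header layout (1-byte type + 4-byte length) is made up for
 * this example.                                                     */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

static void decode_all(const char *buf, size_t len)
{
    size_t pos = 0;

    while (len - pos >= 5) {
        uint8_t  type = (uint8_t)buf[pos];
        uint32_t msg_len;

        memcpy(&msg_len, buf + pos + 1, sizeof msg_len);
        if (msg_len > len - pos - 5)
            break;                      /* truncated message at the end */

        /* Here the payload (buf + pos + 5) gets copied out for the UI --
         * this is the extra copy I'd like to avoid.                      */
        (void)type;
        pos += 5 + msg_len;
    }
}
```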

Anyway, now it's your turn. :slight_smile:
Any help would be appreciated.

Thanks.

First off, have you already proven that conventional I/O (read/write or stdio) is simply not adequate for your files? Buffering is your friend.
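
For what it's worth, a plain buffered loop like the one below will usually keep a disk busy; the 1 MiB buffer size is just a number to tune, not a recommendation:

```c
/* Plain buffered sequential read -- the buffer size is only a guess. */
#include <stdio.h>
#include <stdlib.h>

#define BUF_SZ (1 << 20)            /* 1 MiB user-space buffer */

static long read_whole_file(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;

    char *buf = malloc(BUF_SZ);
    if (buf == NULL) {
        fclose(fp);
        return -1;
    }

    long   total = 0;
    size_t n;

    while ((n = fread(buf, 1, BUF_SZ, fp)) > 0)
        total += (long)n;           /* decode messages here instead of
                                       just counting bytes               */

    free(buf);
    fclose(fp);
    return total;
}
```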

You might want to read Stevens' 'Advanced Programming in the UNIX Environment' -
the chapter (Chap 8, I think) with the table on the effect of buffering on I/O....

Rochkind's 'Advanced Unix Programming' has some examples of high-performance read/write routines using conventional syscalls, including mmap().

You should consider that trading a programmatically simpler approach for a more complex one is not always a win. What you gain in speed may not be worth the extra programming time and maintenance time. Is, say, 100 extra hours of your time worth a 10% gain in performance? Your manager might say 'No'.

What kind of socket do you have that your hard drive cannot keep up with? Normal read/write calls are not slow. Seeking is slow; if you're going to be seeking randomly all over the place, then mmap'ing the file might be better. But keep in mind that, on 32-bit machines at least, you're limited in how big an area you can map: a gig is a big chunk of a process's 4-gig address space. The limit on 64-bit is much, much higher.
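
If you do go the mmap route, you don't have to map the whole file at once; the sketch below maps one window at a time (the 256 MiB window size is arbitrary, and the offset has to be rounded down to a page boundary):

```c
/* Map a window of a large file instead of the whole thing.
 * Window size is arbitrary; mmap offsets must be page-aligned. */
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define WINDOW (256UL * 1024 * 1024)    /* 256 MiB per mapping */

static void *map_window(int fd, off_t offset, off_t file_size, size_t *out_len)
{
    long   page    = sysconf(_SC_PAGESIZE);
    off_t  aligned = offset - (offset % page);  /* round down to a page */
    size_t want    = WINDOW;

    if ((off_t)want > file_size - aligned)
        want = (size_t)(file_size - aligned);

    *out_len = want;
    /* Caller checks for MAP_FAILED and munmap()s the window when done. */
    return mmap(NULL, want, PROT_READ, MAP_PRIVATE, fd, aligned);
}
```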

What you might also find useful is cache hinting: being able to tell the kernel 'OK, I am done with this area of the file for the foreseeable future' in order to let it purge data from the cache earlier than it otherwise would, or do read-ahead differently, etc. It gives you some of the advantages of raw I/O without the problems. See posix_fadvise() and madvise().
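
For instance, hints along these lines (the ranges are placeholders, and glibc wants the feature-test macros for the declarations):

```c
#define _XOPEN_SOURCE 600   /* posix_fadvise() */
#define _DEFAULT_SOURCE     /* madvise() on glibc */
#include <fcntl.h>
#include <sys/mman.h>

/* The whole file will be read front to back: ask for aggressive read-ahead. */
static void hint_sequential(int fd)
{
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
}

/* We're finished with this byte range; the kernel may drop it from the cache. */
static void hint_done_with_range(int fd, off_t off, off_t len)
{
    posix_fadvise(fd, off, len, POSIX_FADV_DONTNEED);
}

/* Same idea for an mmap'd region that has already been scanned. */
static void hint_done_with_mapping(void *addr, size_t length)
{
    madvise(addr, length, MADV_DONTNEED);
}
```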

Corona said it much better.... must be a terabit line....