Best way to dump metadata to file: when and by whom?

Hi,

my application (actually a library) indexes a file of many GB, producing tables (arrays of offsets and lengths of the indexed data) for later reuse. The tables produced are pretty big too, so big that I ran out of memory in my process (3 GB limit) when indexing more than about 8 GB of file. Although I could fork another process to work around the memory limit, that would not fix the problem, so I'd like to dump the tables to a file in order to free the memory, and to avoid re-indexing the same file more than once.

Bear in mind that currently the tables produced are kept in memory in a singly-linked list, shared with another thread that uses it to produce another list of filtered data, so I'd rather not change this scheme. The other thread only accesses the list once the whole file has been indexed.
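For concreteness, here is a minimal sketch of the layout described above; every name in it (index_entry, index_table, ...) is made up for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* One indexed item: where the data lives in the big file. */
struct index_entry {
    uint64_t offset;   /* byte offset into the indexed file */
    uint32_t length;   /* length of the data at that offset */
};

/* 256 entries x 16 bytes (after alignment) is roughly 4 KB per table. */
#define ENTRIES_PER_TABLE 256

/* A table is a fixed-size array of entries; tables form a singly-linked list. */
struct index_table {
    struct index_entry  entries[ENTRIES_PER_TABLE];
    size_t              count;   /* entries used so far */
    struct index_table *next;    /* next table in the list, NULL at the tail */
};
```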

Now, the questions I'm asking myself are:

  • When is the best time to dump the tables to a file, and how?

Dumping a table as soon as it gets full doesn't sound very efficient to me. Would I then keep nothing in memory, so the linked list is always empty? If I decide to keep N tables in memory and dump every N of them, how do I avoid checking how many tables I have in memory at every cycle? (See the sketch after this list.)

  • Who should dump the produced metadata to file? A different thread? The same thread that indexes the data? I also wouldn't like to produce metadata files when the processed file is less than a gigabyte (the small-file case), but at the same time I wouldn't want to complicate the code of the indexer, which right now is pretty simple: parse, find the data, create a table entry, add it; if the table is full, create another table and add it to the linked list.

  • Let's say I figure out (thanks to you) the best way, in my case, to dump the metadata. What policy should I use to load the data back, so that the other thread can filter the indexed data without radically changing the way it works now (i.e. through the linked list)?
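One way to avoid a per-entry check, sketched below under the same assumptions as above (N_TABLES_IN_RAM and the dump/free helpers are hypothetical), is to do the counting only in the branch that is already taken when a table fills up:

```c
#include <stdlib.h>
/* Reuses struct index_table and ENTRIES_PER_TABLE from the sketch above. */

#define N_TABLES_IN_RAM 64          /* assumed knob: 64 x ~4 KB kept in memory */

static size_t full_tables = 0;      /* touched only when a table fills up */

/* Hypothetical helpers: write the whole list out with fwrite() and free it. */
void dump_tables_to_file(struct index_table *head);
void free_tables(struct index_table *head);

void add_entry(struct index_table **head, uint64_t offset, uint32_t length)
{
    struct index_table *t = *head;

    if (t == NULL || t->count == ENTRIES_PER_TABLE) {
        /* Only here -- when a table fills -- do we count and possibly dump,
         * so the hot per-entry path below has no bookkeeping at all. */
        if (t != NULL && ++full_tables == N_TABLES_IN_RAM) {
            dump_tables_to_file(*head);
            free_tables(*head);
            *head = NULL;
            full_tables = 0;
        }
        t = calloc(1, sizeof(*t));
        if (t == NULL)
            abort();                /* placeholder; real code would handle OOM */
        t->next = *head;
        *head = t;
    }

    t->entries[t->count].offset = offset;
    t->entries[t->count].length = length;
    t->count++;
}
```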

One solution that comes to mind, and that would avoid a drastic change to my scheme, is to create a "list manager" that provides an interface to add and retrieve elements from the list. This entity (either a thread or a process) would take care of keeping some data in memory (the linked list) and the rest in the file.
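A rough interface for such a manager might look like the following; every name here (lm_add, lm_next, and so on) is hypothetical. The point is only that neither the indexer nor the filtering thread needs to know whether an entry currently lives in RAM or in the dump file:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical "list manager" API: the indexer only calls lm_add(), the
 * filtering thread only calls lm_next(); whether an entry currently lives
 * in the in-memory list or in the dump file is hidden behind the handle. */
struct list_manager;                              /* opaque handle */

struct list_manager *lm_create(const char *dump_path, size_t max_tables_in_ram);
void lm_add(struct list_manager *lm, uint64_t offset, uint32_t length);
void lm_finish_indexing(struct list_manager *lm); /* flush and signal the reader */

/* Sequential cursor for the consumer; returns 0 when the index is exhausted. */
int  lm_next(struct list_manager *lm, uint64_t *offset, uint32_t *length);

void lm_destroy(struct list_manager *lm);
```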

Please share your skill and experience with me! :slight_smile:

Thanks in advance.

Regards,
S.

Wow, what a question. Are you re-engineering a database system?

On slightly less-than-gigabyte boundaries. Actually, 256 kB blocks also work very well.

If it's in a different thread, what's the point? You can't just free the memory if the other thread still has a lock on it.

I don't think that's answerable unless one really knows your existing software architecture.

Nope. I'm just trying to write an application that is as efficient as possible and that needs to dump index tables, and I'd like to learn as much as possible from the experience.

Do you mean executing an fwrite of a 256 KB buffer? Currently I have a list where every element (table) is an array of N entries, for a total size of 4 KB per array, and I dump every table at once with a single fwrite.
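If the goal is just to get ~256 KB physical writes without restructuring the existing one-fwrite-per-table code, one low-effort option (a sketch, not necessarily what was meant above) is to enlarge the stdio buffer of the dump stream so the 4 KB writes get coalesced:

```c
#include <stdio.h>

/* Keep writing one ~4 KB table per fwrite(), but let stdio coalesce the
 * writes into ~256 KB chunks before they hit the kernel. */
FILE *open_dump_file(const char *path)
{
    FILE *fp = fopen(path, "wb");
    if (fp == NULL)
        return NULL;

    /* 256 KB fully-buffered stream; stdio flushes the buffer when it fills. */
    if (setvbuf(fp, NULL, _IOFBF, 256 * 1024) != 0) {
        fclose(fp);
        return NULL;
    }
    return fp;
}

/* ...later, once per table:
 *     fwrite(t->entries, sizeof(t->entries[0]), t->count, fp);
 */
```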

Basically one thread (A) indexes the file, while another thread (B) waits for it to finish in order to use the produced tables (which I used to keep in memory) to process the data in the file. The problem is that the indexed files are huge (~30 GB) and produce more than 4 GB of index data, which I can't keep in memory (3 GB limit per process), so at one point or another I have to dump the produced data to a file in order to free the memory.

The other thread (B), based on a flag, either reads the tables from the file or from the list in memory.
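For concreteness, a minimal sketch of thread B's side under these assumptions (pthreads, a done flag protected by a mutex/condvar, and hypothetical filter_from_file()/filter_from_list() helpers):

```c
#include <pthread.h>
#include <stdbool.h>

struct index_table;                       /* from the earlier sketch */

/* Hypothetical helpers for the two possible sources. */
void filter_from_file(const char *dump_path);
void filter_from_list(struct index_table *head);

/* Shared state between indexer (A) and filter (B); names are made up. */
struct shared_index {
    pthread_mutex_t     lock;
    pthread_cond_t      done_cv;
    bool                indexing_done;    /* set by A when the whole file is indexed */
    bool                spilled_to_file;  /* the "flag": tables were dumped to disk */
    struct index_table *in_memory;        /* valid when !spilled_to_file */
    const char         *dump_path;        /* valid when  spilled_to_file */
};

/* Thread B: wait until A is done, then pick the source based on the flag. */
void *filter_thread(void *arg)
{
    struct shared_index *sh = arg;

    pthread_mutex_lock(&sh->lock);
    while (!sh->indexing_done)
        pthread_cond_wait(&sh->done_cv, &sh->lock);
    pthread_mutex_unlock(&sh->lock);

    if (sh->spilled_to_file)
        filter_from_file(sh->dump_path);
    else
        filter_from_list(sh->in_memory);

    return NULL;
}
```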

Thanks for your help,
S.

I cannot help other than to quote an old software design maxim:

You mean I should use a database for holding the tables, like sqlite?

Which database to use primarily depends on how many indexable and unique columns you have, and on the ratio of readers to writers. sqlite? LOL. I was thinking more along the lines of MySQL or BerkeleyDB/Sleepycat DB.

That's why I wouldn't want to use a database: the work involved, and the dependency it introduces, isn't worth it in my case (IMHO).

I only have one writer, and one reader.

Data are written sequentially, and never modified. Write once, read many.

An ad-hoc solution, I thought, would be the best way to go.
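For what it's worth, with fixed-size records appended sequentially, the read side of such an ad-hoc dump can stay very small. A sketch, reusing struct index_entry and ENTRIES_PER_TABLE from the first sketch, and assuming reader and writer run on the same machine/ABI (no portability concerns):

```c
#include <stdio.h>

/* Read side of the write-once/read-many flat dump: entries were appended as
 * fixed-size records, so the reader just streams them back one table at a
 * time. */
size_t load_next_table(FILE *fp, struct index_entry *out /* ENTRIES_PER_TABLE slots */)
{
    /* Returns how many entries were read; 0 means end of the dump. */
    return fread(out, sizeof(*out), ENTRIES_PER_TABLE, fp);
}
```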

I appreciate your thought on this.

Thanks,
S.

In that case, it might make sense for you to use the OS as your database. Divide your tables/rows in some manner and create a hashed directory scheme to store them. For instance, a table with a 64-bit index that in hex looks something like "0a924b233f2917fa" would be stored in the directory:

0a/92/4b/23/3f/29/17/fa

with the fields as files. Or the entire table can be a file (the last path element).

Either way, table management becomes a bit easier, and the OS handles efficient data placement and caching for you. All you have to do is keep track of how many files can be in memory at once; but again, you can let the OS do that for you, since it buffers files very efficiently.
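If it helps, a small sketch of how such a path could be built from a 64-bit table id (the function name and the one-byte-per-level split are just illustrative choices):

```c
#include <stdint.h>
#include <stdio.h>

/* Turn a 64-bit table id such as 0x0a924b233f2917fa into the nested path
 * "<base>/0a/92/4b/23/3f/29/17/fa".  One byte per directory level is just
 * one possible split; two bytes per level gives a shallower tree. */
void table_path(char *buf, size_t bufsize, const char *base, uint64_t id)
{
    snprintf(buf, bufsize,
             "%s/%02x/%02x/%02x/%02x/%02x/%02x/%02x/%02x",
             base,
             (unsigned)(id >> 56) & 0xff, (unsigned)(id >> 48) & 0xff,
             (unsigned)(id >> 40) & 0xff, (unsigned)(id >> 32) & 0xff,
             (unsigned)(id >> 24) & 0xff, (unsigned)(id >> 16) & 0xff,
             (unsigned)(id >>  8) & 0xff, (unsigned) id        & 0xff);
}
```

With one byte per level, no directory ever holds more than 256 entries, and the table itself is just a regular file at the leaf (created after mkdir()-ing the intermediate directories), which the OS page cache is free to keep hot.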

To be honest, I don't understand what you mean or how I should apply your suggestion to my case (for dumping the tables).

If possible, can you point me to any interesting reading with actual examples of what you are talking about? Or at least the keywords to use with Google? :slight_smile:

Thanks again for your time and your help!

Regards,
S.

I spent a few minutes on Google. Maybe this will help:

On Hashed directories: Hashed directory creation for storing thousands of images on file system | drupal.org and Directory Hashing Algorithm

On Flat files and various implementations: Flat file database - Wikipedia, the free encyclopedia

Discussions on using flat-file databases for real-world applications:

Storing Data with PHP - Flat File or Database? - For Dummies

Viaweb FAQ

Do You Believe in a Flat-File Driven Content Management System?

https://salempress.com/Store/samples/encyclopedia_of_genetics_rev/encyclopedia_of_genetics_rev_bioinformatics.htm