Help optimizing sort of large files

I'm doing a hobby project that has me sorting huge files with rather monotonous keys. It's very slow -- the current file is about 300 GB and has been sorting for a day. I know that sort has --batch-size and --buffer-size parameters, but I'd like a jump start if possible, to limit the number of days I spend fooling around finding what works. The man page doesn't even tell me what the default values are.

I have a 4-core 64-bit AMD processor, 32 GB RAM and plenty of hard drive space.

I would normally do a preliminary radix sort on files this size, but it's a bit awkward with these keys. The keys represent positions in a game and consist of 64 bytes of just 3 values: x, o and -, always beginning with x. They tend to form clusters that are similar at the beginning, and vary from run to run. It's not clear how to form the radix. Here are records from various spots in a typical run....

xo-xoooox---x-------ooox------------xxo--x----------o-xx-------- -45 20 -1 1
xo-xo-oox---x-------ooox------------xxo--x----------o-xx-------- +45 19 05 1
xo-xo-oox---x-------ooox------------x-o--x----------o-xx-------- -45 18 37 2
xo-xo-o-x---x-------ooox------------x-o--x----------o-xx-------- +45 17 07 3
xo-xo-o-x---x-------ooox--------------o--x----------o-xx-------- -45 16 36 4

xxxooxx-oooxo-------ooo--xo-----------o--xxo-----------x----xoxx -31 26 23 8
xxxooxx-oooxo--------oo--xo-----------o--xxo-----------x----xoxx +31 25 20 9
xxxoo-x-oooxo--------oo--xo-----------o--xxo-----------x----xoxx +35 24 20 457
xxxoo-x-oooxo--------o---xo-----------o--xxo-----------x----xoxx -35 23 22 458
xx-oo-x-oooxo--------ox--xo-----------o--xxo-------o---x---xxoxx -35 25 02 0


xx-ooxx-oooxo--------o---x------------o--xxo-------------------x +34 18 55 5344
xx-ooxx-oooxo--------o---x---------------xxo-------------------x -34 17 38 5345
xx-ooxx-ooo-o--------o---x---------------xxo-------------------x +34 16 11 5346
xx-ooxx-oo--o--------o---x---------------xxo-------------------x -34 15 10 5347
xx-oo-x-oo--o--------o---x---------------xxo-------------------x +34 14 26 1237242

Can you post what you've tried and a sample of the input and output...

All I've tried is a plain old call to sort(1). I'm unhappy with the time it takes.

It doesn't make sense to post the inputs or outputs, as they are over 300 GB in size, and are much like the little snippets in the original posting.

The sort time is roughly one day. I have hundreds of these to do. Any improvement will be well worth the effort, but I'm hoping for some expert advice to get me started. Otherwise, I'll just try some stuff.

One obvious way to use all your CPUs is to split the file into N hunks, run a sort on each, and pipe their output to "sort -m <pipe1> ... <pipeN>". It helps if the file is a regular array, or can be made one, rather than line delimited, but it is not hard to write or find a tool that seeks to the first linefeed after byte N and stops at the first linefeed after byte M. The bash/ksh construct <(...) is great for managing the pipes. N should probably be 2-3x the count of CPU cores, assuming a lot of I/O block time.
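As a rough sketch of that idea, here is a version that uses GNU split to cut on line boundaries instead of a custom seek-to-offset tool (file names and the chunk count are illustrative, and this does copy the data once, which the seek-based approach avoids):

split -n l/4 -d big.posn chunk.       # 4 pieces, split only at line boundaries (GNU split)
sort -m <(sort chunk.00) <(sort chunk.01) \
        <(sort chunk.02) <(sort chunk.03) > big.sorted
rm -f chunk.0[0-3]

The four inner sorts run concurrently, and sort -m only has to do a cheap 4-way merge of their output.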

Reducing the data by encoding will also help. "xo-" is worth 6 bits, not 24. "1237242" is worth 4 bytes not 7.
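As a quick sanity check of that arithmetic in the shell (2 bits per board cell is enough to distinguish 3 values):

echo $(( 64 * 2 / 8 ))        # 16 -- bytes for the 64-cell position at 2 bits per cell, versus 64 as text
echo $(( 1237242 < 2**31 ))   # 1  -- the largest counter shown above fits easily in a 4-byte integer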

Putting the data into an RDBMS means you can maintain indices on the keys, allowing sorted random access or sorted storage.

Faster storage is also key to high performance in big data. Distributing the data over many spindles and controllers gives higher bandwidth.


I wasn't asking you to post the entire input and output...just a representative sample i.e. about 10 lines of input and 10 lines of output...

Being the impatient sort, I tried a few things.

It appears that, on my machine at least, the default --buffer-size is 4 GB, which yields 1.8GB temporary files. Don't ask me why the roughly half-sized results. I surmise that pass 1 uses qsort on a full buffer to produce the initial temporaries.

It appears that, on my machine at least, the default --batch-size is 16, which for some reason forms the next level of temporaries by merging 15 small ones into a bigger one. Again, don't ask me why the off-by-one disparity.
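For what it's worth, a hedged sketch of how to watch what sort actually writes, by listing its temporaries while a run is in progress (paths and file names are examples only):

export TMPDIR=/mnt/h2/tmp                 # put the temporaries somewhere you can watch
sort big.posn --output=big.sorted &       # illustrative file names
while kill -0 $! 2>/dev/null              # while the background sort is still running...
do
  ls -lh "$TMPDIR"                        # ...watch the temporary files appear and grow
  sleep 30
done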

As I fooled with things, the execution times I got were in discrete steps, approximate multiples of each other.

I surmise that the execution time is determined largely by the number of passes over the data. I am able to get that down to two passes on most of my datasets, and 3 at most. I did this by upping the limit on open files to the hard limit of 4096, and setting the batch size to 1000, plus setting the buffer size to 10 GB. I'm still experimenting to find a sweet spot.

I think staging multiple sorts might be counter-productive, since if I'm right, 2 is the minimum number of passes -- one to create temporaries, one to write the final result.

The idea of encoding the data is interesting. I'll explore it when I'm sure everything else works. I really like having readable data when debugging, which I am still doing.

BTW, you already have more than 10 lines of input. You can sort them to see the output. I sort on the whole record for this testing phase. Nothing to see here, folks.

My bash code now looks something like this:

ulimit -Sn hard
export TMPDIR=/mnt/h2/tmp
export LC_ALL=C
export SORT="--buffer-size=1000 --buffer-size=10g"

sort $SORT <infile> --output <outfile>

Did you perhaps intend to use:

export SORT="--batch-size=1000 --buffer-size=10g"

instead of setting the buffer size twice?


There is no way for me to know if this will help at all, but I had some fun playing. All of this was on a 16 core 16GB desktop running Cygwin. Linux often performs better on some things because Cygwin is really sitting on top of the Windows runtime library.

As an extrapolation from my testing below:
Try -

sort --parallel=4 --batch-size=256

Since you have four cores, this will improve throughput. It means running four independent sorting threads, one on each core, and merging up to 256 sorted chunks at a time (NMERGE in the documentation).

Your data presents problems.

  1. The key is the entire line. Using a tag sort (some kind of key compression on the first 64 bytes to turn the data into an integer or a set of three integers) speeds up comparison times by an order of magnitude. To make a numeric comparison on the first radix key (if you like the term), use

-k1n -k2n -k3n

This will have sort compare a few 64-bit words (a 2- or 3-bit code per cell times 64 cells) -- roughly three compare ops versus having to compare 64 characters.

  2. Locality. Line 1301010 may really need to come right after line 14 in the final output.
    With NMERGE at a reasonable value (if the 300 GB ends up as 256 chunks, each is about 1.17 GB),
    the first pass is one where locality is less of an issue, since 1.17 GB easily fits in memory.

  3. sort uses the disk intensively. Make sure TMPDIR is assigned to a filesystem on a separate disk with lots of contiguous free space, i.e., pretty much wiped empty; see the sketch after this list.
    An SSD will make a huge difference in performance.
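A minimal sketch of point 3 combined with the options suggested above (the mount point and file names are illustrative):

export TMPDIR=/mnt/scratch/sort-tmp       # a dedicated, mostly empty filesystem on its own disk or SSD
mkdir -p "$TMPDIR"
sort --parallel=4 --batch-size=256 -o big.sorted big.posn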

I dummied up a file with 27183337 lines == almost exactly 2 GB, and ran it on Cygwin.
The box has 16 cores and 16 GB of memory.

$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 2025
cpu time               (seconds, -t) unlimited
max user processes              (-u) 256
virtual memory          (kbytes, -v) unlimited

Run using the 7200 rpm drive, not the SSD

TMPDIR=/cygdrive/e/tmp # not the SSD drive
time sort --parallel=16 --batch-size=32 -o ./sorted bigfile
real    3m2.016s
user    4m24.744s
sys     3m1.015s

Notice that user + sys time adds up to more than the elapsed time. Why? Parallelism.

TMPDIR=/cygdrive/f/tmp #  this is SSD drive
time sort --parallel=16 --batch-size=32 -o ./sorted bigfile
real     1m18.223s
user    4m19.007s
sys     2m19.016s

This is about 40% of the wall time compared with the slow disk. sys is better due to shorter I/O request queues, and that appears to me to be the limiting factor. The number of comparisons was the same, as was a lot of the other overhead, so user is about the same.


Indeed I did. In fact, I thought I had edited that typo, but clearly I didn't.

I wrote a "locality of reference" sort once: mmap64() the file and start sorting by making two one-line lists and merging them into a sorted linked list of 2, doing that twice and merging the two 2-lists into a list of 4, then doing all that again and merging the two 4-lists into 8, and so on. A binary item counter tells you what is needed next. Since it works with lists of length 1 1 1 1 2 2 4 1 1 1 1 2 2 4 8 1 1 1 1 2 2 4 1 1 1 1 2 2 4 8 16 ..., it optimizes the levels of cache, RAM, and VM in use. Short lists sort in L1 cache, really long lists sort in VM at disk speed, and the ones in between, in between. The premise is that the speed problem is bandwidth, not CPU speed.


Very interesting.

I had completely missed the --parallel option in reading the man page. I use this machine for everything, so I'm not sure I'll always want to swamp the cores, but I'll surely use some. I think this will help just the first pass -- when the temp files are first created -- because I'm going to try really hard to tweak parameters to make the second pass the last one where there's a single output file.

I'm just starting, and my largest input file is 1.3 TB. Therefore, SSD drives are too expensive for me; the only way to use just the one I already own (the part not already cluttered with my stuff) would be to chop the sort into many segments -- and then the merge phase would have to be all on spinning drives anyway. Maybe in 10 years.

I may compress the records eventually, but I'm running tests now that show that user time is at most 25% of the total time. That limits the effectiveness. Plus, each run is a one-time thing, and I worry about the overhead of the compression since it has to be a separate process.

I've got a script that is testing combinations of batch-size, buffer-size, and compress-program (with various compression levels), and I'll add --parallel to it. I've been curious about sort for some years, and this seems to be the moment when I really want to investigate.

---------- Post updated at 02:07 PM ---------- Previous update was at 11:54 AM ----------

An excellent idea for a cache-aware in-memory sort. I have never before seen this idea extended to VM like this. However, it won't work for my TB-scale files without TB's of swap space. I'm also not quite ready to rewrite GNU sort to use the idea.

Thinking of alternatives, I read somewhere that phase 1 of the existing sort uses qsort on a full buffer, and spits it out as the final result if the input is small, or as temporaries otherwise. I know qsort is a favorite because it's unusually good in its average performance, and it sorts in place. I once wrote a sort that used a heapsort algorithm in phase 1 to produce longer temporaries -- averaging twice the size of the working storage, with guaranteed n-log-n performance. Too bad it wouldn't be a good general-purpose replacement -- it incurs memory fragmentation when using variable-length records. The qsort implementation doesn't, because it cleans out the workspace on every buffer, and the heapsort method can't do that and still generate the long outputs.

---------- Post updated at 02:13 PM ---------- Previous update was at 02:07 PM ----------

I'm now running some tests with a "short" 14 GB file. Some extra eyes on the script would be appreciated, as I've already been caught in one error in this thread.

#!/bin/bash
# find times for sorting as compared to copying time.

ulimit -Sn hard
export TMPDIR=/mnt/tmptmp
export DATA=s2740.posn
export DEST=/mnt/h2/tmp/junk
export TEMP=/mnt/tmptmp/junk
export LC_ALL=C

cd /mnt/h1/posn
echo prep
df -h .
df -h /mnt/tmptmp
df -h /mnt/h2/tmp

# do something to make the cache state consistent
rm -f $TEMP $DEST
bash -c 'cp $DATA $TEMP; cp $TEMP $DEST'
echo

echo -n first copy then second copy
rm -f $TEMP $DEST
time bash -c 'cp $DATA $TEMP'
time bash -c 'cp $TEMP $DEST'
echo

echo -n copy both
rm -f $TEMP $DEST
time bash -c 'cp $DATA $TEMP; cp $TEMP $DEST'
echo

echo -n sort with default params
rm -f $TEMP $DEST
time sort $DATA --output=$DEST
echo

echo -n sort with n=16, buffer=4g
rm -f $TEMP $DEST
time sort --buffer-size=4g --batch-size=16 $DATA --output=$DEST
echo


buf=1
while [ $buf -lt 16 ]
do
  bat=10
  while [ $bat -lt 101 ]
  do
    for par in 1 2 4
    do
      echo -n "sort with parallel=$par buffer=${buf}g batch=$bat and gzip from none to 9"
      for level in none 1 2 3 4 5 6 7 8 9
      do
        if [ "$level" = none ]
        then
          unset GZIP                  # baseline run with no gzip options set
        else
          export GZIP="-$level"       # gzip level to use for the temporaries
        fi
        rm -f $TEMP $DEST
        time sort --parallel=$par --buffer-size=${buf}g --batch-size=$bat $DATA --output=$DEST
      done
      echo
    done
    bat=$(( bat + 10 ))
  done
  buf=$(( buf + 10 ))
done

---------- Post updated at 03:02 PM ---------- Previous update was at 02:13 PM ----------

Oops. I can already see that I omitted the --compress-program parameter from the runs where I wanted gzip. Consider it fixed.
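For reference, a sketch of the corrected sort line: GNU sort's --compress-program expects a program that compresses stdin to stdout when run with no arguments and decompresses when given -d, which gzip satisfies, and gzip traditionally picks its level up from the GZIP environment variable (newer gzip releases may warn that the variable is obsolete).

# inside the level loop above, with GZIP already exported to the desired level
rm -f $TEMP $DEST
time sort --parallel=$par --buffer-size=${buf}g --batch-size=$bat \
     --compress-program=gzip $DATA --output=$DEST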

There is no swap space use with mmap() unless you choose the more exotic options. The basic mmap() swaps in file pages directly to RAM, giving you a way to work huge files with all of RAM as a cache. If you overmap a file, you can write pages. Empty pages can exist, supporting sparse matrices. This is a very powerful door into application use of VM and full exploitation of RAM.

Sorting small into larger in steps allows the problem to periodically expand to the next level of storage: from the caches to the RAM to the disk. The only flaw is that it might be good to merge more than 2 at a time, so that when you get to the disk-sized merges there is just one, or at least fewer of them. The merge could use some sort of comparison tournament tree, sized to fit in L1 cache, where each replacement item is fitted into the tree before the smallest item is taken from it. Note that each item will take a whole page of RAM, or two, and N to N+2 lines of L1 cache, depending on the cache and item sizes and the page/line offsets. Trying to fill L1 cache is fraught with difficulty, as many items may demand the same cache bucket, so it pays to shoot a bit low, like 1/4 to 1/2 of the L1 cache size.

You don't need to use an SSD for temp files - you can use it as the swap device.

Although you probably wouldn't like the results if your GUI gets swapped out, it should speed up the sort by a huge factor.
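If you go that route, a minimal sketch (the partition name is an example, and mkswap destroys whatever is on it):

sudo mkswap /dev/sdb1            # example SSD partition -- double-check the device name first
sudo swapon -p 10 /dev/sdb1      # give it higher priority than the existing swap
swapon -s                        # confirm it is in use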

Since you have 300 GB of data, you are better off loading it into a database. Oracle would work well, especially if you used Advanced Compression for OLTP with 11gR2 or possibly 12c. You can load it into a table with each row of the file as one row in the database and each character in the first half of the line as an individual CHAR(1) column. You could then create an index on the columns you want to sort by, and you would already have it sorted. You can also store the data as an index-organized table. If this is not for profit, you probably don't need an Oracle license. You just need to know how to install Oracle and create a database. You can also use MySQL to get nearly the same thing.

Well, you can swap out the world using mmap() aggressively, good for problem speed and bad for retaining control! File pages rolled into RAM are left there, even unmapped, until there is demand, and are recently touched, so . . . .

SSD-based swap is like another, faster tier, but for the control structures, not the data (which stays in the mmap()'d space, not in swap). Sorting like this is like building an index in the heap, compared to sort, which actually merges files over and over to create ascending/descending strings of items that grow into a sorted whole. It reminds me of those tape drives reading backward in tape sorts, where the data was distributed onto all but one drive in ascending amounts, and then they all read backward, writing to the remaining drive, until one hit BOT; that one became the written volume, and the old written volume started reading backward. No rewind time.

This seems to be what is classically called a merge sort. The Wikipedia article also gives its Landau symbols (asymptotic runtime estimates).

I hope this helps.

bakunin

If the OP knew how to write a faster sort program using mmap(), this thread wouldn't exist.

Besides, all mmap() does at the application level is turn reading/writing data from/to a file into a memory access - the underlying disk I/O still has to happen. If you know how to code I/O operations, it's not hard to beat mmap() performance with standard read()/write() system calls, because you know your data access pattern and can tune your I/O operations to the specifics of the underlying file system and hardware. mmap() I/O is generic and untuned.

And mmap() has some problems with writing data, especially if you try to extend your file and run out of space.

The quickest and easiest way to make sorting data faster is to use more memory. An SSD swap device will do that indirectly by providing a lot of extremely fast swap, along with being a lot bigger than any reasonable amount of actual RAM.

I think you're overstating the benefits of tuning; the operating system will match I/O to memory page sizes and do readahead for you anyway without being asked. I/O all ends up in the generic page cache unless you do direct I/O and control your caching closely - and there are POSIX-standard means to do so for both files and memory (posix_fadvise() and posix_madvise()).

Agreed, RAM is what it comes down to in the end, reducing the number of times you need to re-read the same data.

Generally it's possible to beat default page-cached IO by 10-20% - but only if you can accurately predict your access pattern. And if you screw up and the data isn't accessed in the pattern you thought it would be, you can really destroy performance, especially on RAID5/6 disk systems.

Basically, the default IO is pretty darn good almost all the time. Don't try to beat it unless you really have to - for example, a $600 million system where there's more data to process than you ever could, and the amount you process is limited by your IO speed.

Thanks to all for your comments. I was asking for ways to tune UNIX sort, because while I know how, I'm unwilling to rewrite it for this project -- I'm likely to be mired in bugs for too long.

I did some time testing, and in spite of bugs that are going to make me do it again, there are some rough results. These are on a 14GB test file with records of 64 bytes plus newline.

First, I quickly abandoned the idea of having sort compress its temporary files. Using gzip, even with --fast, is a loss of 10% to 20% in speed. Using --best is much worse, for a loss of around 1500%.

Second, adding 10 to the batch-size parameter costs about 10% in speed until you can eliminate a merge pass. At that point, it turns into a benefit of about 30%. That's the sweet spot, because it starts going up again if you add even more to the batch.

Third, adding to the --parallel parameter is a win if you have multiple cores. Not huge: about 10% each for doubling the parameter from 1 to 2 or from 2 to 4.

Finally, changing the buffer-size parameter from 1g to 11g was a big loss -- roughly doubling the execution time. I don't know if there's a sweet spot in there, and I'll have to do finer-grained testing, or testing on a larger input. I suspect that it only pays when it's the only way to reduce the number of merge passes.

So, to a first approximation, it's best to add to parallel and keep buffer-size and batch-size small so long as (buffer-size * 0.4) * batch-size is at least as big as the input file. This will give the minimum 2 passes through the data. The 0.4 represents the observed fact that the temporaries are a bit smaller than half the requested buffer-size, at least on my data.
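As a worked example of that rule of thumb, with round numbers rather than measurements:

input_gb=300
buffer_gb=10
batch=100
# first-pass coverage: 0.4 * buffer-size * batch-size, in GB
echo $(( buffer_gb * 4 * batch / 10 ))    # 400 >= 300, so two passes should suffice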

The additional testing may give me some information about how to balance buffer-size and batch-size subject to the above formula.