Improve the performance of my C++ code

Hello,
Attached is my very simple C++ code to remove any sequences (DNA) that are substrings of each other, i.e. any redundant sequence is removed so that only unique sequences remain. It is similar to the sort | uniq command, except that DNA sequences also have a reverse complement to account for. The program runs well with a small dataset, but when I increase the data size to ~1,000 entries (some maybe 100,000 bp long), it takes about 2 hours to finish.
My question is: how can I improve the performance of my code?
It seems memory can be excluded as an issue, since 256GB of RAM is available.
1) What room is there for better coding techniques based on my current algorithm, which is a simple "sorting---looping---comparing" with complexity n^2?
2) What better algorithms are there? Surely there are many.

Either of the two questions is too complicated for me on my own, but I am wondering if anybody can give me some help to increase the performance of the program. Thanks a lot!

You need to reformat that code - I'm seeing it all as one line.

You can save time by keeping it always sorted. That would mean that you'd be able to check for duplicates every time you try and add a line, not afterwards.

I don't mean that you should call sort() every loop; I mean you should find the spot in the container where the new element belongs and insert it there. This would be easier and faster with a list<> than with a vector<>.
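For illustration, here is a minimal sketch of that idea with a list<std::string> (the function name and the plain string comparison are assumptions for the sketch, not taken from your code). Because the container stays sorted, a duplicate is noticed at the moment you try to add it.

#include <list>
#include <string>

// Insert s at its sorted position; return false if it is already present.
bool insert_unique_sorted(std::list<std::string> &seqs, const std::string &s)
{
    std::list<std::string>::iterator it = seqs.begin();
    while (it != seqs.end() && *it < s)   // walk to the first element >= s
        ++it;
    if (it != seqs.end() && *it == s)     // already in the list: skip it
        return false;
    seqs.insert(it, s);                   // splice the new sequence in place
    return true;
}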

achenle, I do not know what happened, but the file looks fine in my vim/gedit, and it displays fine with cat/more/less/head etc. on my Linux console: Ubuntu/Mint 17.0.

corona688, can you confirm that the difference between list<> and vector<> can amount to hours? I am aware the data is kind of big (~6MB, for 300 entries with 166,000 bp in total), but it's nothing compared with a ~10GB file with ~100 million entries. I have not tried the ~10GB file yet, which would take forever!! I must have missed something big in my code.

It is UNIX text, not Windows text.

Vector is not "fast" and list is not "slow".

If you try to insert in arbitrary places anywhere inside a vector, it will be slow.

If you try to use a list for random access, it will be slow.

What I have suggested is better suited for lists than vectors.

You are comparing every element to every other element. If you have 300 elements, that's 90,000 comparisons. If you have 3000 elements, that's 9 million comparisons. Any sequence you remove early means 300 fewer loops later.

You are also searching for strings inside strings without using any sort of index, though building one would be complicated.


Your reply reminds me of two ideas that have been bugging me a lot, and that I have been trying to get a handle on for FASTA files: 1) use some sort of index (hashing? FM-index?); 2) use a suffix array, tree, or trie to do the job. I'm trying to work out example code by starting from what I have.

If you want to run faster,

  1. Insert data in a way that it's always sorted, as Corona688 has already noted.
  2. Don't use new/delete in a loop.
  3. Don't use C++ I/O routines - use C open/read/close or other low-level routines (a rough sketch follows this list).
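As a rough sketch of point 3 (the function name and the chunk size are only illustrative, and error handling is minimal), reading the whole file with the low-level calls could look like this:

#include <fcntl.h>
#include <unistd.h>
#include <string>

// Slurp the whole file into one buffer with open/read/close
// instead of going through iostream.
std::string read_whole_file(const char *path)
{
    std::string buf;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return buf;                        // empty on failure

    char chunk[1 << 16];
    ssize_t n;
    while ((n = read(fd, chunk, sizeof(chunk))) > 0)
        buf.append(chunk, n);              // append each chunk as it arrives
    close(fd);
    return buf;
}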


I don't always have access to a Unix box.

You say you have a lot of memory, so I think you would do best using a hash table. Store both each entry and its reverse complement.
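A minimal sketch of that idea, assuming the sequences are plain std::strings and using std::unordered_set (the helper names are made up, not from your code):

#include <algorithm>
#include <string>
#include <unordered_set>

// Complement of a single base; anything unexpected maps to 'N'.
char complement(char b)
{
    switch (b) {
    case 'A': return 'T';
    case 'T': return 'A';
    case 'C': return 'G';
    case 'G': return 'C';
    default:  return 'N';
    }
}

// Reverse the sequence, then complement every base.
std::string reverse_complement(const std::string &s)
{
    std::string rc(s.rbegin(), s.rend());
    std::transform(rc.begin(), rc.end(), rc.begin(), complement);
    return rc;
}

// Returns true if the sequence was new; false if it was already seen
// in either orientation.
bool add_if_unique(std::unordered_set<std::string> &seen, const std::string &seq)
{
    if (seen.count(seq))
        return false;
    seen.insert(seq);
    seen.insert(reverse_complement(seq));   // store both orientations
    return true;
}

Whether you store the reverse complement or only compute it on lookup is a space/time trade-off; with 256GB of RAM, storing both as suggested is the simpler option.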

My problem is the implementation. I want to try programming with some available "libraries". I would appreciate any code example on top of mine. This sounds lazy, but I'm self-learning by practice. Of course I googled for a while, but did not find any similar example. Thanks a lot!

I would agree with achenle's points 1 (Corona688's suggestion) and 3.

On top of that, I would just use a simple dynamic structure with a pointer to the next element, something like:

typedef struct _sList _sList;   /* forward declaration so the struct can point to itself */
typedef char SEQ;               /* assumed here: one base stored as a plain char */
struct _sList{
    _sList    *next;            /* next node in the sorted list */
    int       SEQ_size;         /* length of the sequence */
    SEQ       *element;         /* sequence data */
};

When reading elements from the file, once an element is complete, start comparing SEQ_size from the start of the list until you reach the point where the element size from the file is equal to or smaller than the element in the list. If it is smaller, add the element to the list at that point (it is unique). If it is equal, run memcmp(list_element, file_element, SEQ_size) through the equal-sized entries until the list element compares greater, compares equal, or its SEQ_size becomes greater than the file element's: if equal, discard the file element - it is a duplicate; if greater, add the file element to the list just before the greater element.
P.S. This is the case when sorting from small to big.

This way you will fast-forward to the elements of the same length, and then compare only until you find an equal element or the spot to store the new one.

In the end you will have a sorted list of unique elements.
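A rough sketch of that insertion, assuming the sequence bytes are plain chars and the list is kept sorted by length and then by content (the names are only illustrative):

#include <cstring>

struct sList {
    sList *next;
    int    SEQ_size;
    char  *element;
};

// Insert seq of length len into the sorted list unless an equal
// sequence is already there.  Returns the (possibly new) list head.
sList *insert_unique(sList *head, const char *seq, int len)
{
    sList **link = &head;

    // Fast-forward past shorter sequences.
    while (*link && (*link)->SEQ_size < len)
        link = &(*link)->next;

    // Among sequences of the same length, look for an exact match.
    while (*link && (*link)->SEQ_size == len) {
        int cmp = std::memcmp((*link)->element, seq, len);
        if (cmp == 0)
            return head;            // duplicate: drop the new sequence
        if (cmp > 0)
            break;                  // passed the spot where it belongs
        link = &(*link)->next;
    }

    // Not found: link a new node in at this position.
    sList *node = new sList;
    node->SEQ_size = len;
    node->element  = new char[len];
    std::memcpy(node->element, seq, len);
    node->next = *link;
    *link = node;
    return head;
}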

If you need it even faster, then build more elaborate structures from which you could construct a graph (data tree). In this area only your imagination and the content of the elements will stop you from optimizing even further. Usually more complex structures pay off when you have a bigger amount of data.


"hash table" isn't exactly a library, it's different enough from other data structures it's often hand-rolled. Generalizing it too much would run the risk of poor performance, you need to pick the right algorithms for your application. It has a lot of restrictions as well (hard to iterate, deletion can cause something like fragmentation, and it can't be sorted). I've seen a few attempts at building a library for it, but nothing I ever liked very much.

In the end it's not that complicated. It's a big array with strict rules about what data gets put in what element. I'd suggest "open chaining" for your table -- basically an array full of lists -- with an index that's not really hashed at all, just converted from ACGT into bits, two bits per base. Four letters would be 8 bits, for an array 256 long, for example. Then you could just look up the first four letters of your sequence, find that list, and speedily check every possible thing which might contain your sequence without having to brute-force it.
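A minimal sketch of that bucket index, assuming plain std::string sequences (the names are illustrative, and this shows only the table and a lookup in one bucket, not the full substring check):

#include <list>
#include <string>
#include <vector>

// Two bits per base: A=00, C=01, G=10, T (or anything else)=11.
unsigned base2bits(char b)
{
    switch (b) {
    case 'A': return 0;
    case 'C': return 1;
    case 'G': return 2;
    default:  return 3;
    }
}

// Pack the first four bases into an 8-bit bucket index (0..255).
unsigned bucket_index(const std::string &seq)
{
    unsigned idx = 0;
    for (unsigned i = 0; i < 4 && i < seq.size(); ++i)
        idx = (idx << 2) | base2bits(seq[i]);
    return idx;
}

int main()
{
    std::vector< std::list<std::string> > table(256);   // 256 buckets of lists

    std::string seq = "ACGTTTGA";
    table[bucket_index(seq)].push_back(seq);             // file the sequence

    // Lookup only has to scan one short list instead of everything.
    const std::list<std::string> &bucket = table[bucket_index(seq)];
    for (std::list<std::string>::const_iterator it = bucket.begin(); it != bucket.end(); ++it)
        if (*it == seq)
            return 0;                                     // found
    return 1;
}

With four letters at two bits each the table has 256 buckets; indexing on more letters would shrink each list further at the cost of a bigger array.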
