Graphics cards

More and more applications with computationally intensive tasks rely on the graphics card to speed up their calculations. The question is: how can one know whether the graphics card will help a particular application?

You do not know for sure unless you try it. In general, however, you will not see any improvement in performance unless you can partition your problem domain into many threads of execution that keep the GPU's processors busy while minimizing data transfers to and from the GPU.
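To make the transfer-cost point concrete, here is a back-of-envelope sketch: offloading only pays off when the compute time saved on the GPU exceeds the time spent shipping data across the bus plus fixed launch overhead. All numbers below (bus rate, overhead, speedup) are illustrative assumptions, not measurements of any real card.

```python
def worth_offloading(n_bytes, cpu_time_s, gpu_speedup,
                     bus_gb_per_s=12.0, overhead_s=50e-6):
    """Rough estimate: is GPU offload faster than staying on the CPU?

    n_bytes      -- data moved to the GPU and back
    cpu_time_s   -- time the computation takes on the CPU
    gpu_speedup  -- assumed raw speedup of the GPU kernel (illustrative)
    bus_gb_per_s -- assumed bus transfer rate (illustrative)
    overhead_s   -- assumed fixed launch/driver overhead (illustrative)
    """
    transfer_s = n_bytes / (bus_gb_per_s * 1e9)
    gpu_total = transfer_s + overhead_s + cpu_time_s / gpu_speedup
    return gpu_total < cpu_time_s

# A large, compute-heavy job: offload wins under these assumptions.
print(worth_offloading(n_bytes=100e6, cpu_time_s=2.0, gpu_speedup=20))
# A short job dominated by transfer and launch overhead: it does not.
print(worth_offloading(n_bytes=100e6, cpu_time_s=0.005, gpu_speedup=20))
```

The crossover point depends entirely on your hardware and workload, which is why measuring beats estimating; this sketch only shows why small, transfer-heavy jobs rarely benefit.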

It is also best to minimize branching. This holds for processors in general, but frequent branching hurts GPUs especially badly: they lose much of the advantage that their deep pipelines and wide SIMD lanes otherwise give them, because they cannot drop everything and change direction as cheaply as general-purpose CPUs are expected to.
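One common way around this is predication: compute both sides of a small branch and select the result arithmetically, so that all lanes stay in lockstep. Here is a scalar Python sketch of the idea (the function names and the example computation are mine, purely illustrative):

```python
def branchy(x):
    # Divergent form: on a GPU, lanes taking different sides of this
    # branch would have to execute serially.
    if x > 0:
        return x * 2
    else:
        return -x

def predicated(x):
    # Branch-free form: compute both sides, select with a 0/1 mask.
    # On real SIMD hardware the mask would be a per-lane predicate bit.
    mask = int(x > 0)
    return mask * (x * 2) + (1 - mask) * (-x)

print(branchy(3), predicated(3))    # both give 6
print(branchy(-4), predicated(-4))  # both give 4
```

This only helps when both sides are cheap; for large divergent regions, restructuring the data so that neighbouring work items take the same path tends to matter more.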


Also: the graphics cards in many consumer-level machines are quite weak, so you cannot assume a capable GPU is even present on your users' hardware.