Sorting is not the bottleneck in most modern applications, but we use it a lot, and sometimes you need to sort data whose comparison keys do not fit in RAM. It therefore makes sense to pick the best sorting algorithm available.
The typical choice is Quicksort, because it is perceived as the fastest out there. It is easy to implement in a way that performs very well on most CPUs and is extremely fast on almost-sorted data, but with any practical pivot-selection strategy it retains an O(n²) worst case.
Mergesort, on the other hand, has an O(n log n) worst case, and a good implementation achieves O(n) in the best case. The extra memory it uses is small by modern standards, and with the right swap strategy it needs between O(1) and O(n) swaps once the correct order has been found. Mergesort is a stable algorithm, and it parallelizes well if needed. The amount of memory required can be determined at the start of the algorithm; it would typically be 2·n·log₂(n) bits, where n is the number of items to sort.
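For concreteness, here is a minimal sketch of a textbook top-down, stable mergesort in C; the function names and the int-array element type are my own choices for illustration, not the implementation discussed here. It uses one auxiliary buffer of n ints (the "extra memory" mentioned above), and the pre-merge ordering check gives the O(n) best case on already-sorted input:

```c
#include <stdlib.h>
#include <string.h>

/* Merge a[lo..mid) and a[mid..hi) through tmp; ties are taken from
 * the left half, which is what makes the sort stable. */
static void merge_into(int *a, int *tmp, size_t lo, size_t mid, size_t hi) {
    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
}

static void msort_rec(int *a, int *tmp, size_t lo, size_t hi) {
    if (hi - lo < 2) return;
    size_t mid = lo + (hi - lo) / 2;
    msort_rec(a, tmp, lo, mid);
    msort_rec(a, tmp, mid, hi);
    if (a[mid - 1] <= a[mid]) return;   /* halves already ordered: skip merge */
    merge_into(a, tmp, lo, mid, hi);
}

/* Sort n ints in place; returns -1 if the buffer allocation fails. */
int mergesort_int(int *a, size_t n) {
    int *tmp = malloc(n * sizeof *a);
    if (!tmp) return -1;
    msort_rec(a, tmp, 0, n);
    free(tmp);
    return 0;
}
```

The single up-front allocation is the point: unlike Quicksort's recursion-depth surprises, the memory cost is known before sorting begins.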
As RAM amounts increase and we get multiple CPUs, I believe that Mergesort will start to replace Quicksort, especially in cases where comparisons are expensive.
I once created a mergesort for linked lists that performs really well. It does not use multiple threads; instead it keeps up to 32 sorted sublists, of lengths 2, 4, 8, and so on. This really boosted performance, because the sort spends more of its time on data that is already in the processor cache than the usual scheme of merging the whole input into lists of 2, then lists of 4, and so on.
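A sketch of that scheme, with a hypothetical Node type (this is the standard bottom-up "binary counter" linked-list mergesort, not the author's exact code): slot i holds either nothing or a sorted run of 2^i nodes, so most merges touch only short, cache-resident lists.

```c
#include <stddef.h>

/* Hypothetical node type for illustration. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Merge two sorted lists; ties come from 'a' first, keeping the sort stable. */
static Node *merge(Node *a, Node *b) {
    Node head, *tail = &head;
    while (a && b) {
        if (b->value < a->value) { tail->next = b; b = b->next; }
        else                     { tail->next = a; a = a->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return head.next;
}

/* Bottom-up mergesort keeping up to 32 pending sorted runs:
 * slot[i] is either NULL or a sorted run of 2^i nodes. Adding a
 * new one-node run works like incrementing a binary counter,
 * carry-merging equal-sized runs as it goes. */
Node *list_mergesort(Node *list) {
    Node *slot[32] = {0};
    while (list) {
        Node *run = list;              /* detach one node as a run of length 1 */
        list = list->next;
        run->next = NULL;
        int i = 0;
        while (i < 32 && slot[i]) {    /* carry: merge equal-sized runs */
            run = merge(slot[i], run);
            slot[i++] = NULL;
        }
        if (i == 32) i--;              /* counter overflow: park in last slot */
        slot[i] = run;
    }
    Node *result = NULL;
    for (int i = 0; i < 32; i++)       /* fold the leftover runs together */
        if (slot[i]) result = merge(slot[i], result);
    return result;
}
```

Since a run in slot[i] only has 2^i nodes, the frequent small merges mostly operate on recently touched nodes, which is where the cache benefit comes from.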
If you are interested, see our website for a small description and a download with source code.
NB: especially when comparisons are costly (as with natural-order string sorts), the constant hidden in the O() is quite a lot smaller than Quicksort's, and the difference becomes even more favorable.
Only when comparing qsort on an integer array against mergesort on an integer linked list did Quicksort win in our tests.