The Least Significant Digit (LSD) Radix sort algorithm is a classic near-linear-time sorting algorithm. It is a good example of how additional information about an algorithm's inputs allows us to improve our code's efficiency.
In the real world, for an overwhelming majority of applications, the native .NET Array.Sort() implementation is efficient and adequate. The native sort implemented in the .NET library is a smart combination of three different sort algorithms (insertion sort, heapsort, and quicksort), chosen based on the input parameters. This provides a worst-case runtime on the order of O(n log n), where n is the input size.
However, theory is very powerful, and for some applications, when you know more about the input (for example – the range and distribution of the population), you can achieve near-linear time sorting. Counting sort is a classic simple example of the concept, useful for sorting integers.
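To make the idea concrete, here is a minimal sketch of counting sort. It assumes the inputs are integers known to lie in the range [0, maxValue]; the method name and signature are my own for illustration, not from the implementation linked below.

```csharp
using System;

static class CountingSortExample
{
    // Counting sort sketch: sorts integers in a known range [0, maxValue].
    // Runs in O(n + maxValue) time, which is near-linear when maxValue
    // is small relative to n.
    public static int[] CountingSort(int[] input, int maxValue)
    {
        var counts = new int[maxValue + 1];
        foreach (var value in input)
            counts[value]++;                 // tally occurrences of each value

        var output = new int[input.Length];
        var index = 0;
        for (var value = 0; value <= maxValue; value++)
            for (var i = 0; i < counts[value]; i++)
                output[index++] = value;     // emit each value in sorted order

        return output;
    }
}
```

Note that the comparison-free trick here (indexing by value rather than comparing elements) is exactly what lets the algorithm beat the O(n log n) lower bound for comparison sorts.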
Following is a C# implementation of Least Significant Digit Radix Sort. This algorithm sorts strings (or anything that can be represented as a string) in O(n*k) time, where k is the length of the longest string key. For many languages (DNA, words in the English language, ISBNs, etc.), this means near-linear performance. The complete implementation is available here.
How the Algorithm Works
During each step we sort the strings according to one of their characters, starting from the rightmost character and working our way left to the first character:
Note that the number of steps depends on the length of the longest key, which is why the algorithm runs in O(k*n) time.
At each step, sorting the keys by a single character is performed via bucket sort. We create R buckets (one bucket for each letter in our alphabet) and add the strings to the buckets in order. The buckets are then combined in order and the result is fed to the next step. In the C# implementation I used queues to represent the buckets:
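The passes described above can be sketched as follows. This is a simplified version of the approach, assuming fixed-length keys drawn from the lowercase ASCII alphabet (R = 26); the method name is mine, and the linked implementation handles the general case.

```csharp
using System;
using System.Collections.Generic;

static class LsdRadixExample
{
    // LSD radix sort sketch: assumes every key has exactly keyLength
    // characters, all in 'a'..'z'. Runs in O(n * keyLength) time.
    public static string[] LsdRadixSort(string[] keys, int keyLength)
    {
        const int R = 26;                        // alphabet size (assumed 'a'..'z')

        // Work from the rightmost character position toward the leftmost.
        for (var pos = keyLength - 1; pos >= 0; pos--)
        {
            // One queue (bucket) per letter of the alphabet.
            var buckets = new Queue<string>[R];
            for (var i = 0; i < R; i++)
                buckets[i] = new Queue<string>();

            // Distribute keys into buckets by the character at `pos`.
            foreach (var key in keys)
                buckets[key[pos] - 'a'].Enqueue(key);

            // Recombine the buckets in alphabet order. Queues preserve
            // insertion order, which is what makes the sort stable.
            var index = 0;
            foreach (var bucket in buckets)
                while (bucket.Count > 0)
                    keys[index++] = bucket.Dequeue();
        }

        return keys;
    }
}
```

For example, sorting { "dab", "cab", "fad", "bad" } takes three passes: by the third character, then the second, then the first, yielding { "bad", "cab", "dab", "fad" }.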
LSD Radix Sort vs. Array.Sort()
The C# implementation of Radix LSD sort linked above performs much faster than the native .NET Array.Sort() implementation:
LSD Radix sort also has the benefit of being stable (elements with equal keys retain their relative order), and it makes reconfiguring the alphanumeric ordering very easy (by changing the alphabet definition).
However, at peak it uses roughly 2n+k memory, and for most cases the O(n log n) performance of the native algorithm is more than adequate. The LSD Radix algorithm also does not lend itself well to parallelization.
The implementation provided can be optimized in a variety of ways:
- Scanning the input once for the alphabet. The implementation assumes we already know the alphabet and the length of the longest key string – a real-world implementation would likely need a quick pass over the input to gather this information upfront.
- Use of a more efficient data structure than queues – the algorithm spends a lot of time merging the queues between iterations. An array-based data structure that handles these merges in constant time (by modifying index references rather than copying values) would significantly improve the runtime.
- Compound alphabets could be used (i.e., combining every two adjacent letters and increasing the number of queues at each step to R×R) to optimize performance for cases where certain two-letter combinations are known to be very common (e.g., consonant-vowel pairs in English). This would come at the cost of much higher memory consumption.
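The compound-alphabet idea from the last bullet can be sketched as follows. This is a hypothetical variant of the simple pass structure, assuming even-length lowercase keys: each pass buckets by two adjacent characters at once, halving the number of passes in exchange for R×R queues.

```csharp
using System;
using System.Collections.Generic;

static class CompoundAlphabetExample
{
    // Hypothetical compound-alphabet LSD pass: processes two characters
    // per step using R*R buckets. Assumes keyLength is even and all
    // characters are in 'a'..'z'.
    public static string[] LsdRadixSortPairs(string[] keys, int keyLength)
    {
        const int R = 26;

        for (var pos = keyLength - 2; pos >= 0; pos -= 2)
        {
            var buckets = new Queue<string>[R * R];
            for (var i = 0; i < R * R; i++)
                buckets[i] = new Queue<string>();

            // The bucket index treats two adjacent characters as a single
            // "letter" of a larger alphabet.
            foreach (var key in keys)
                buckets[(key[pos] - 'a') * R + (key[pos + 1] - 'a')].Enqueue(key);

            var index = 0;
            foreach (var bucket in buckets)
                while (bucket.Count > 0)
                    keys[index++] = bucket.Dequeue();
        }

        return keys;
    }
}
```

With 676 queues per pass instead of 26, the memory trade-off mentioned above is visible directly in the bucket allocation.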
Let me know if you have any ideas as to how the implementation could be improved.