The algorithms in the STL (Standard Template Library) are said to be carefully optimized. So what exactly is optimized in its sorting algorithm?
Since the birth of quicksort, its average performance has been roughly two to three times that of the other common sorting algorithms, with one extreme exception: when the data is very small (<= 20 elements), it loses to insertion sort. This has long been an undisputed fact in the textbooks.
One of the simplest hybrid algorithms switches to insertion sort when the data is small (n < 20) and uses quicksort the rest of the time, for example:
template <typename _RandomIterator>
void quicksort(_RandomIterator __first, _RandomIterator __last) {
    while (__last - __first > __stl_threshold) {
        // partition around the median of the first, middle and last elements
        _RandomIterator __pivot = partition(__first, __last,
            median(*__first, *(__first + (__last - __first) / 2), *(__last - 1)));
        quicksort(__first, __pivot);   // recurse into the left half...
        __first = __pivot;             // ...and loop on the right half
    }
    __insert_sort(__first, __last);    // the remaining short range
}
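The snippet assumes a median (median-of-three) helper and a partition overload that takes a pivot value rather than a predicate; neither appears in the post. A minimal sketch of what they might look like (the names are illustrative; the corresponding SGI internals are __median and __unguarded_partition):

#include <algorithm>   // std::iter_swap

// Median of three values: the classic pivot choice for quicksort/introsort.
template <typename T>
const T& median(const T& a, const T& b, const T& c) {
    if (a < b)
        return b < c ? b : (a < c ? c : a);
    else
        return a < c ? a : (b < c ? c : b);
}

// Hoare-style partition around a pivot value.  Because the pivot is the median
// of three elements taken from inside the range, both inner scans are
// guaranteed to stop, so no explicit bounds checks are needed.
// (Named "partition" only to match the pseudocode above; in real code pick a
// name that cannot collide with std::partition.)
template <typename RandomIterator, typename T>
RandomIterator partition(RandomIterator first, RandomIterator last, T pivot) {
    while (true) {
        while (*first < pivot) ++first;
        --last;
        while (pivot < *last) --last;
        if (!(first < last)) return first;
        std::iter_swap(first, last);
        ++first;
    }
}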
There is a design choice here: when exactly to run the insertion sort. The algorithm above switches to insertion sort every time a subrange falls below the threshold; an alternative simply stops subdividing once a subrange is shorter than the threshold, and at the very end runs a single insertion sort over the whole range. There is no big difference between the two, but STL uses the latter; the reason is given at the end.
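For concreteness, a sketch of that second variant (__stl_threshold and __insert_sort are carried over from the pseudocode above; the structure is my own illustration, not the actual STL source):

// Variant: quicksort leaves every subrange shorter than the threshold
// untouched; afterwards one insertion sort over the whole, now "almost
// sorted", range finishes the job cheaply.
template <typename RandomIterator>
void quicksort_loop(RandomIterator first, RandomIterator last) {
    while (last - first > __stl_threshold) {
        RandomIterator pivot = partition(first, last,
            median(*first, *(first + (last - first) / 2), *(last - 1)));
        quicksort_loop(first, pivot);
        first = pivot;            // keep subdividing the right half
    }                             // short ranges fall out of the loop unsorted
}

template <typename RandomIterator>
void sort_with_final_pass(RandomIterator first, RandomIterator last) {
    quicksort_loop(first, last);
    __insert_sort(first, last);   // one insertion sort over everything
}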
Where STL really shines is in how it patches quicksort. Quicksort has excellent average performance, O(n lg n), but in the worst case it degrades to O(n^2). The fix is to introduce a recursion depth counter: when the recursion depth exceeds a threshold (STL sets it to 2 * lg(n)), the algorithm switches to one that is slower on average but has a guaranteed O(n lg n) worst case, namely heapsort (STL packages its heapsort as partial_sort, i.e. partial sorting). Because the algorithm monitors its own recursion depth, it is called introsort (introspective sort); it is really just another variant of quicksort, a hybrid algorithm. It achieves close to O(n lg n) performance even in the worst case: there it is nearly as good as heapsort and far better than plain quicksort, while its average performance is about the same as quicksort's. Its algorithm is as follows:
template <typename RandomIterator>
void introsort_loop(RandomIterator __first, RandomIterator __last, int __depth_limit) {
    while (__last - __first > __stl_threshold) {
        if (__depth_limit == 0) {
            // too many levels of recursion: finish this range with heapsort
            // (partial_sort with middle == last sorts the whole range via a heap)
            partial_sort(__first, __last, __last);
            return;
        }
        --__depth_limit;
        RandomIterator __pivot = partition(__first, __last,
            median(*__first, *(__first + (__last - __first) / 2), *(__last - 1)));
        introsort_loop(__first, __pivot, __depth_limit);
        __first = __pivot;   // loop on the right half instead of recursing
    }
}

template <typename RandomIterator>
void introsort(RandomIterator __first, RandomIterator __last) {
    introsort_loop(__first, __last, __lg(__last - __first) * 2);
    __final_insert_sort(__first, __last);
}
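The depth limit is computed with __lg, which is not shown either; a minimal sketch of a floor-of-log2 helper with that shape (the real SGI helper is essentially the same loop):

// Floor of log2(n): how many times n can be halved before reaching 1.
// introsort uses 2 * __lg(n) as its recursion depth budget.
template <typename Size>
inline Size __lg(Size n) {
    Size k = 0;
    for (; n > 1; n >>= 1)
        ++k;
    return k;
}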
STL plays a small acceleration trick in __final_insert_sort. The algorithm is as follows:

template <typename RandomIterator>
void __final_insert_sort(RandomIterator __first, RandomIterator __last) {
    if (__last - __first < __stl_threshold)
        __insert_sort(__first, __last);
    else {
        __insert_sort(__first, __first + __stl_threshold);
        __unguarded_insert_sort(__first + __stl_threshold, __last);
    }
}
At first I did not quite understand why the insertion sort is split this way. Later, while trying to optimize insertion sort myself, I found that the inner loop of __unguarded_insert_sort needs one less bounds test: the boundary check drops from two conditions to one. The reason is that after the "rough" quicksort pass, the smallest element is guaranteed to lie among the first __stl_threshold elements, so once that prefix has been sorted with the guarded version, every later element has something no larger than it somewhere to its left, and the position-based boundary test can be removed. The insertion sort loops themselves make this clear; I will not go further here.
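To make the saved test concrete, here is a sketch of the two insertion sort loops (simplified and renamed from the real __insertion_sort / __unguarded_insertion_sort; the guarded/unguarded distinction is the point, not the exact code):

// Guarded insertion sort: the inner loop must check both "still inside the
// range" and "still larger than the value being inserted".
template <typename RandomIterator>
void __insert_sort(RandomIterator first, RandomIterator last) {
    if (first == last) return;
    for (RandomIterator i = first + 1; i != last; ++i) {
        auto value = *i;
        RandomIterator j = i;
        while (j != first && value < *(j - 1)) {   // two tests per step
            *j = *(j - 1);
            --j;
        }
        *j = value;
    }
}

// Unguarded insertion sort: it relies on the precondition that some element to
// the left of the range is <= every value being inserted (here: the global
// minimum sits in the already-sorted first __stl_threshold elements), so the
// comparison alone stops the scan and the position test can be dropped.
template <typename RandomIterator>
void __unguarded_insert_sort(RandomIterator first, RandomIterator last) {
    for (RandomIterator i = first; i != last; ++i) {
        auto value = *i;
        RandomIterator j = i;
        while (value < *(j - 1)) {                 // one test per step
            *j = *(j - 1);
            --j;
        }
        *j = value;
    }
}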