Data Structure Learning (C++) Continued - Sorting [2]: Insertion Sort

zhaozj  2021-02-16

The basic idea of insertion sort: at each step, take one record to be sorted and insert it, according to its key, into the proper position among the records already sorted in front of it; the search for that position can run from the tail of the sorted part toward its head. For example, inserting 5 into the sorted prefix 2 4 7 9 yields 2 4 5 7 9.

Direct insertion sort

template <class T>
void InsertSort(T a[], int n, int &KCN, int &RMN)   // KCN: key comparisons, RMN: record moves
{
    KCN = 0; RMN = 0;
    for (int i = 1; i < n; i++)
    {
        T temp = a[i]; RMN++;
        int j;
        for (j = i; j > 0 && ++KCN && temp < a[j - 1]; j--)
        {
            a[j] = a[j - 1]; RMN++;                  // shift the larger record one place back
        }
        a[j] = temp; RMN++;
    }
}

Here is the streamlined version, with the counters removed:

template <class T>
void InsertSort(T a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        T temp = a[i];
        int j;
        for (j = i; j > 0 && temp < a[j - 1]; j--)
            a[j] = a[j - 1];
        a[j] = temp;
    }
}
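For reference, figures like the ones below can be produced with a small driver along these lines. This is only a sketch: the rand() fill, the clock() timing and the output format are assumptions, not the original test program, and it assumes the instrumented InsertSort above is in the same file.

#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <ctime>

int main()
{
    const int n = 10000;
    int *a = new int[n];
    for (int i = 0; i < n; i++) a[i] = rand();    // random case; ascending/descending cases just fill differently

    int KCN, RMN;
    clock_t start = clock();
    InsertSort(a, n, KCN, RMN);                   // the instrumented version defined above
    clock_t stop = clock();

    double logn = log((double)n) / log(2.0);      // log2(n), used for the KCN/(n log n) column
    printf("time elapsed: %.0f ms\n", 1000.0 * (stop - start) / CLOCKS_PER_SEC);
    printf("KCN = %d  KCN/n = %g  KCN/n^2 = %g  KCN/(n log n) = %g\n",
           KCN, (double)KCN / n, (double)KCN / n / n, KCN / (n * logn));
    printf("RMN = %d  RMN/n = %g  RMN/n^2 = %g  RMN/(n log n) = %g\n",
           RMN, (double)RMN / n, (double)RMN / n / n, RMN / (n * logn));
    delete [] a;
    return 0;
}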

Test Results:

Ascending input, n = 10000, time elapsed: 0 ms
KCN = 9999   KCN/n = 0.9999   KCN/n^2 = 9.999e-005   KCN/(n log n) = 0.07525
RMN = 19998   RMN/n = 1.9998   RMN/n^2 = 0.0001998   RMN/(n log n) = 0.1505

Random input, n = 10000, time elapsed: 330 ms
KCN = 24293730   KCN/n = 2429.37   KCN/n^2 = 0.242937   KCN/(n log n) = 182.829
RMN = 24303739   RMN/n = 2430.37   RMN/n^2 = 0.243037   RMN/(n log n) = 182.904

Descending input, n = 10000, time elapsed: 711 ms
KCN = 49995000   KCN/n = 4999.5   KCN/n^2 = 0.49995   KCN/(n log n) = 376.25
RMN = 50014998   RMN/n = 5001.5   RMN/n^2 = 0.50015   RMN/(n log n) = 376.4

As you can see, the average cost is about n²/4: inserting the i-th record takes roughly i/2 comparisons on average, and summing these over i gives about n²/4 (for random input the measured KCN ≈ 24.3 million, against n²/4 = 25 million). So the book is not lying (no surprise — plenty of people have run these tests before).

Binary insertion sort

This method is obtained from direct insertion by replacing the sequential search for the insertion position with a binary search over the already-sorted prefix. Obviously this can only reduce KCN, not RMN, so the performance gain is limited.

template <class T>
void BinaryInsertSort(T a[], int n, int &KCN, int &RMN)
{
    KCN = 0; RMN = 0;
    for (int i = 1; i < n; i++)
    {
        T temp = a[i]; RMN++;
        int low = 0, high = i - 1;
        while (low <= high)                       // binary search for the insertion point
        {
            int mid = (low + high) / 2;
            if (++KCN && temp < a[mid]) high = mid - 1;
            else low = mid + 1;
        }
        for (int j = i - 1; j >= low; j--) { a[j + 1] = a[j]; RMN++; }   // shift records back
        a[low] = temp; RMN++;                     // insert
    }
}

Test Results:

Ascending input, n = 10000, time elapsed: 0 ms
KCN = 123617   KCN/n = 12.3617   KCN/n^2 = 0.00123617   KCN/(n log n) = 0.930311
RMN = 19998   RMN/n = 1.9998   RMN/n^2 = 0.0001998   RMN/(n log n) = 0.1505

Random input, n = 10000, time elapsed: 320 ms
KCN = 118987   KCN/n = 11.8987   KCN/n^2 = 0.00118987   KCN/(n log n) = 0.895466
RMN = 24303739   RMN/n = 2430.37   RMN/n^2 = 0.243037   RMN/(n log n) = 182.904

Descending input, n = 10000, time elapsed: 631 ms
KCN = 113631   KCN/n = 11.3631   KCN/n^2 = 0.00113631   KCN/(n log n) = 0.855158
RMN = 50014998   RMN/n = 5001.5   RMN/n^2 = 0.50015   RMN/(n log n) = 376.4

As you can see, KCN drops to roughly n·log₂n (about 132,877 for n = 10000, in line with the measured values), so there is some performance improvement — but only in comparisons.

Table insertion sort

If a "link" array of indices is used to record the logical order, most record movement can be avoided; of course, at the end the array still has to be physically rearranged according to those links. Naturally, binary search cannot be used here, since the links only allow sequential traversal.

#include <algorithm>   // std::swap

template <class T>
void TableInsertSort(T a[], int n, int &KCN, int &RMN)
{
    KCN = 0; RMN = 0;
    int *link = new int[n];                       // link[i]: index of the record that follows a[i] in sorted order
    int head = 0, pre, cur, i;
    link[0] = -1;
    for (i = 1; i < n; i++)
    {
        if (a[head] > a[i]) { link[i] = head; head = i; KCN++; }   // no header node, so when this test fails the comparison is not counted in KCN
        else
        {
            for (cur = head; cur != -1 && ++KCN && a[cur] <= a[i]; cur = link[cur]) pre = cur;
            link[pre] = i; link[i] = cur;
        }
    }
    cur = head;                                   // rearrange the array according to the links
    for (i = 0; i < n; i++)
    {
        while (cur < i) cur = link[cur];          // positions before i are already settled; follow the chain
        pre = link[cur];
        if (cur != i)
        {
            std::swap(a[i], a[cur]); RMN += 3;    // one swap counts as three record moves
            link[cur] = link[i]; link[i] = cur;
        }
        cur = pre;
    }
    delete [] link;
}

Test Results:

Ascending input, n = 10000, time elapsed: 751 ms
KCN = 49995000   KCN/n = 4999.5   KCN/n^2 = 0.49995   KCN/(n log n) = 376.25
RMN = 0   RMN/n = 0   RMN/n^2 = 0   RMN/(n log n) = 0

Random input, n = 10000, time elapsed: 621 ms
KCN = 25721250   KCN/n = 2572.13   KCN/n^2 = 0.257213   KCN/(n log n) = 193.572
RMN = 29955   RMN/n = 2.9955   RMN/n^2 = 0.00029955   RMN/(n log n) = 0.225434

Descending input, n = 10000, time elapsed: 0 ms
KCN = 9999   KCN/n = 0.9999   KCN/n^2 = 9.999e-005   KCN/(n log n) = 0.07525
RMN = 15000   RMN/n = 1.5   RMN/n^2 = 0.00015   RMN/(n log n) = 0.112886

As you can see, RMN really does drop: RMNmax = 3(n-1). The overall running time, however, is no better than plain direct insertion — that is because the test records are just integers, for which moving a record costs no more than updating a link. If the records were stored in a linked list to begin with, the final rearrangement pass would not be needed at all. The rearrangement algorithm is described in detail in Yan Weimin's "Data Structure (C Language Version)".
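To see why the linked-list version needs no rearrangement, here is a minimal sketch of insertion sort on a singly linked list. The Node struct and the function name are illustrative assumptions, not part of the code above; nodes are relinked rather than moved, so the final chain simply is the sorted order.

#include <cstddef>

template <class T>
struct Node { T data; Node *next; };

template <class T>
Node<T> *ListInsertSort(Node<T> *head)
{
    Node<T> *sorted = NULL;                           // head of the already-sorted chain
    while (head != NULL)
    {
        Node<T> *cur = head; head = head->next;       // detach the next unsorted node
        if (sorted == NULL || cur->data < sorted->data)
        {
            cur->next = sorted; sorted = cur;         // becomes the new head of the sorted chain
        }
        else
        {
            Node<T> *p = sorted;                      // find the last node with data <= cur->data
            while (p->next != NULL && p->next->data <= cur->data) p = p->next;
            cur->next = p->next; p->next = cur;       // splice cur in after p
        }
    }
    return sorted;
}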

Shell sort

The average efficiency of the previous algorithms is not great, but note two facts: direct insertion sort is at its best when the keys are already nearly in order, and when the number of keys is small the gap between n and n² is not that noticeable. Based on these facts, D. L. Shell proposed diminishing-increment sorting in 1959 (a real antique). The basic idea: choose an interval (gap) and split the sequence into subsequences of elements that lie gap apart, then insertion-sort each subsequence; gradually shrink the gap and repeat until the gap is 1. At the start each subsequence holds very few keys, so sorting it is cheap; as the gap shrinks the subsequences grow longer, but by then the keys are roughly in order, so direct insertion remains efficient.

The time complexity of Shell sort is hard to estimate, and there is no settled conclusion on how to choose the gap sequence; gap = ⌊gap/2⌋ is the easiest to write into the program. As for why, I couldn't tell you.

template <class T>
void ShellSort(T a[], int n, int &KCN, int &RMN)
{
    KCN = 0; RMN = 0;
    for (int gap = n / 2; gap > 0; gap = gap / 2)     // shrink the increment: n/2, n/4, ..., 1
        for (int i = gap; i < n; i++)                 // direct insertion within each gap-subsequence
        {
            T temp = a[i]; RMN++;
            int j;
            for (j = i; j >= gap && ++KCN && temp < a[j - gap]; j -= gap)
            {
                a[j] = a[j - gap]; RMN++;
            }
            a[j] = temp; RMN++;
        }
}

Test Results:

Ascending input, n = 10000, time elapsed: 0 ms
KCN = 120005   KCN/n = 12.0005   KCN/n^2 = 0.00120005   KCN/(n log n) = 0.903128
RMN = 240010   RMN/n = 24.001   RMN/n^2 = 0.0024001   RMN/(n log n) = 1.80626

Random input, n = 10000, time elapsed: 10 ms
KCN = 258935   KCN/n = 25.8935   KCN/n^2 = 0.00258935   KCN/(n log n) = 1.94868
RMN = 383849   RMN/n = 38.3849   RMN/n^2 = 0.00383849   RMN/(n log n) = 2.88875

Descending input, n = 10000, time elapsed: 10 ms
KCN = 172578   KCN/n = 17.2578   KCN/n^2 = 0.00172578   KCN/(n log n) = 1.29878
RMN = 302570   RMN/n = 30.257   RMN/n^2 = 0.0030257   RMN/(n log n) = 2.27707

Note that these timings are not very reliable: sorting 10,000 integers is too fast to measure accurately (on a newer machine it would probably show 0 ms; several of my runs here show 0 as well). So the test was repeated with 100,000 integers:

Ascending input, n = 100000, time elapsed: 140 ms
KCN = 1500006   KCN/n = 15.0001   KCN/n^2 = 0.000150001   KCN/(n log n) = 0.903094
RMN = 3000012   RMN/n = 30.0001   RMN/n^2 = 0.000300001   RMN/(n log n) = 1.80619

Random input, n = 100000, time elapsed: 230 ms
KCN = 4041917   KCN/n = 40.4192   KCN/n^2 = 0.000404192   KCN/(n log n) = 2.43348
RMN = 5598883   RMN/n = 55.9888   RMN/n^2 = 0.000559888   RMN/(n log n) = 3.37086

Descending input, n = 100000, time elapsed: 151 ms
KCN = 2244585   KCN/n = 22.4459   KCN/n^2 = 0.000224459   KCN/(n log n) = 1.35137
RMN = 3844572   RMN/n = 38.4457   RMN/n^2 = 0.000384457   RMN/(n log n) = 2.31466

These results show that Shell sort has hardly any bad case: whether the input is ascending, descending, or random, the time spent is about the same, and the extra storage is O(1) — really quite good. Until you learn quicksort it is an excellent choice; I have been using it all along.
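Since the choice of gap sequence is left open above, here is one common alternative for comparison: Knuth's increments h = 3h + 1 (1, 4, 13, 40, ...). This is just a sketch of a variant, not the routine tested above, and whether it actually beats gap = gap/2 would have to be measured the same way.

template <class T>
void ShellSortKnuth(T a[], int n)
{
    int gap = 1;
    while (gap < n / 3) gap = 3 * gap + 1;        // largest increment of the form 1, 4, 13, 40, ... below n/3
    for (; gap >= 1; gap /= 3)                    // shrink: ..., 13, 4, 1
        for (int i = gap; i < n; i++)             // direct insertion within each gap-subsequence
        {
            T temp = a[i];
            int j;
            for (j = i; j >= gap && temp < a[j - gap]; j -= gap)
                a[j] = a[j - gap];
            a[j] = temp;
        }
}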

