Data Structure Study: Binary Trees

xiaoxiao · 2021-03-06

Trees matter because so many things in the real world are trees: organizational hierarchies, grade systems, directory classifications, and so on. To study this class of problems we must store trees, and how to store them depends on the operations required.

One question is whether an empty tree is allowed. Some books hold that a tree must be non-empty, since a tree models a real structure and 0 is not a natural number; the textbooks I used allow an empty tree. For binary trees there is no difference in principle; it is purely a matter of convention.

The binary tree, by contrast, can be called a purely imagined model, so here there is no controversy. A binary tree is ordered: a tree whose only subtree hangs on the left is different from one whose only subtree hangs on the right. This is because people assign different meanings to the left and right children, as the various applications of binary trees will make clear. Only the linked representation is discussed below.

Browse various textbooks and you will notice an interesting phenomenon: among the basic operations of a binary tree you find computing the height and the various traversals, but no insertion or deletion. How, then, is a tree ever built? In fact this is easy to understand: for a nonlinear structure like a tree, generic insert and delete operations carry no meaning; they only acquire meaning in a specific application, so insertion and deletion appear only there.

For the node structure, a data field, a left pointer, and a right pointer are necessary. Unless very few operations need a node's parent, or resources are tight, I recommend adding a parent pointer as well; it brings real convenience, a classic case of trading space for time.

template <class T>
struct BTNode
{
    BTNode(T data = T(), BTNode<T>* left = NULL,
           BTNode<T>* right = NULL, BTNode<T>* parent = NULL)
        : data(data), left(left), right(right), parent(parent) {}
    BTNode<T>* left, * right, * parent;
    T data;
};

Basic binary tree

template <class T>
class BTree
{
public:
    BTree(BTNode<T>* root = NULL) : root(root) {}
    ~BTree() { MakeEmpty(); }
    void MakeEmpty() { Destroy(root); root = NULL; }
protected:
    BTNode<T>* root;
private:
    void Destroy(BTNode<T>* p)
    {
        if (p)
        {
            Destroy(p->left);
            Destroy(p->right);
            delete p;
        }
    }
};

Binary tree traversal

There are basically four traversal methods: preorder, inorder, postorder, and level order. This once confused me: what are they all for? Later I understood that they answer different application needs. For example, to determine whether two binary trees are equal, you can stop as soon as the roots of corresponding subtrees differ, so preorder is the obvious choice; to delete a binary tree, you must first delete the left and right subtrees before you can delete the root, so postorder is required.

In fact, the root cause of there being so many traversal methods is that a tree stored in memory as a linked structure is nonlinear. For a binary tree stored in an array, none of these methods is necessary. With C++ encapsulation and overloading, the traversal methods can be expressed quite clearly.

1. Preorder traversal

public:
    void PreOrder(void (*visit)(T& data) = Print) { PreOrder(root, visit); }
private:
    void PreOrder(BTNode<T>* p, void (*visit)(T& data))
    {
        if (p) { visit(p->data); PreOrder(p->left, visit); PreOrder(p->right, visit); }
    }

2. Inorder traversal

public:
    void InOrder(void (*visit)(T& data) = Print) { InOrder(root, visit); }
private:
    void InOrder(BTNode<T>* p, void (*visit)(T& data))
    {
        if (p) { InOrder(p->left, visit); visit(p->data); InOrder(p->right, visit); }
    }

3. Postorder traversal

public:
    void PostOrder(void (*visit)(T& data) = Print) { PostOrder(root, visit); }
private:
    void PostOrder(BTNode<T>* p, void (*visit)(T& data))
    {
        if (p) { PostOrder(p->left, visit); PostOrder(p->right, visit); visit(p->data); }
    }

4. Level-order traversal

    void LevelOrder(void (*visit)(T& data) = Print)
    {
        queue<BTNode<T>*> a;   // remember #include <queue>
        BTNode<T>* p = root;
        while (p)
        {
            visit(p->data);
            if (p->left) a.push(p->left);
            if (p->right) a.push(p->right);
            if (a.empty()) break;
            p = a.front(); a.pop();
        }
    }

Note: the default visit function, Print, is as follows.

private:
    static void Print(T& data) { cout << data << ' '; }

5. Non-recursive inorder traversal without a stack

When a parent pointer is available, non-recursive inorder traversal can be implemented without any stack. Textbooks mention this method, but usually without giving a routine.

public:
    BTNode<T>* Next()
    {
        if (!current) return NULL;
        if (current->right)
        {
            current = current->right;
            while (current->left) current = current->left;
        }
        else
        {
            BTNode<T>* y = current->parent;
            while (y && current == y->right) { current = y; y = y->parent; }
            current = y;
        }
        return current;
    }
private:
    BTNode<T>* current;

The function above moves the current pointer forward. To walk the entire binary tree, you first need to point current at the first node in inorder, for example with the following member function:

public:
    void First() { current = root; while (current && current->left) current = current->left; }

Threaded binary trees

This is the first difficult point in a data structures course. I don't know how it strikes you; it certainly cost me a lot of brain cells (and I am not even counting the annoying matrix compression with its associated addition and multiplication). I spent those brain cells wondering: what are threaded trees actually for? So I was very happy to see a book that marked this chapter with a *, although I am not sure its author had reached the conclusion I did: on the current PC, the threaded binary tree is useless! (I don't know whether saying so will get me flamed.)

To argue the conclusion, look at the stated reasons for threading a binary tree. First, it lets you quickly find the predecessor or successor of a node in a given traversal order; if that operation is frequent, this makes sense. Second, the leaf nodes of a binary tree have two unused pointer fields each, so threading can "save" memory by recycling them. Honestly, this waste-not spirit is admirable; whoever invented it should really join the environmental movement. But against the computer's rigid reality, good intentions often fail: for the sake of speed, the components of a computer are strictly regimented, and clever ideas frequently collapse into complications.

Now check how well threading achieves those two goals. For predecessor and successor of the inorder sequence: inorder threading can find both; preorder threading finds the successor directly but not the predecessor without the parent; postorder threading finds the predecessor directly but needs the parent for the successor. So inorder threading is the best choice, and it basically meets the requirement. As for saving memory: threading adds two tag flags per node, and how are those two bits to be stored?
Even on a CPU with bit-manipulation instructions, the two flags cannot be stored as bare bits in memory: first, structure members have addresses of byte granularity at best, and second, memory itself is not bit-addressable. So at least one byte is needed to store these two flags. And for speed and portability, memory is generally aligned, so in the end no memory is saved at all! On a 32-bit CPU the extra byte is typically padded out to 4 bytes, the size of a pointer. If that same space holds a parent pointer instead, the convenience gained is anything but trivial: for one thing, it immediately gives stack-free non-recursive traversal. Also, insertion and deletion on a threaded tree cost far too much.

In summary: threading is best done in inorder (preorder and postorder threading still need a stack or the parent for one direction, so why thread at all); it requires at least 1 byte of extra flag space per node, rounded up to 4 bytes on an aligned 32-bit CPU; and that space is better spent storing a parent pointer, which achieves the same purpose and brings other benefits besides. So the threaded binary tree is useless on the current PC! I cannot speak for other systems, so take what follows as opinion. Memory is plentiful now, and 2 or 3 bytes per node mean nothing (especially since, as just noted, nothing is actually saved); and where memory truly is precious (a single-chip microcontroller, say), you would avoid tree structures altogether and use other methods. So it seems the threaded binary tree really has no use.

Binary search trees

The binary search tree is probably the most important application of the binary tree. Its idea is a natural one: if the value sought is smaller than the current node's, turn left and go down; if larger, turn right; if equal, it is found; if you fall off the tree, it is not there. The more natural a thing is, the less there is for me to say about it. Before giving the BST implementation, we add to the binary tree class a member function that prints the tree's structure, so that the insertion and deletion process can be seen clearly.
public:
    void Print()
    {
        queue<BTNode<T>*> a;
        queue<bool> flag;
        ofstream outfile("out.txt");   // for tall trees, send output here instead
        BTNode<T>* p = root;
        BTNode<T> zero;                // placeholder for absent nodes
        bool v = true;
        int i = 1, level = 0, h = Height();   // Height(): empty tree is -1
        while (i < (2 << h))           // a complete tree of height h has 2^(h+1)-1 nodes
        {
            if (i == (1 << level))     // first node of a new level
            {
                cout << endl << setw(2 << (h - level)); level++;
                if (v) cout << p->data;
                else cout << ' ';
            }
            else
            {
                cout << setw(4 << (h - level + 1));
                if (v) cout << p->data;
                else cout << ' ';
            }
            if (p->left) { a.push(p->left); flag.push(true); }
            else { a.push(&zero); flag.push(false); }
            if (p->right) { a.push(p->right); flag.push(true); }
            else { a.push(&zero); flag.push(false); }
            p = a.front(); a.pop(); v = flag.front(); flag.pop(); i++;
        }
        cout << endl;
    }

The core of Print is a level-order traversal, but an ordinary binary tree is missing nodes in its left or right subtrees, and the deeper the level, the larger the gaps. To print according to the tree's structure, the tree must be padded into a complete binary tree so that every later node knows its position; that is what a.push(&zero) is for. Such padding nodes must not actually be printed, so each node carries a flag saying whether to print it. A pair structure would arguably be the right tool here; for simplicity I use two queues in parallel, one holding node pointers (a) and one holding print flags (flag). As a consequence, the loop cannot end on an empty queue, because the queue never empties: every NULL child gets a padding node. Instead it ends when i passes the last node of the full binary tree, number 2^(height+1) - 1. (The definition of height used here makes the height of an empty tree -1.) As for the output format, note that nodes 1, 2, 4, 8, ... each begin a new line, and the field width of a line's first node is half that of the nodes after it. The function displays the tree's levels properly when the tree is small (height <= 4); for taller trees you must send the output to the file outfile instead. Though if there are many levels, printing the tree hardly means anything anyway.

The binary search tree implementation simply adds insertion, deletion, and lookup on top of the binary tree class.

#include "basetree.h"

template <class T>
class BSTree : public BTree<T>
{
public:
    BTNode<T>*& Find(const T& data)
    {
        BTNode<T>** p = &root; current = NULL;
        while (*p)
        {
            if ((*p)->data == data) break;
            if ((*p)->data < data) { current = *p; p = &((*p)->right); }
            else { current = *p; p = &((*p)->left); }
        }
        return *p;
    }

    bool Insert(const T& data)
    {
        BTNode<T>*& p = Find(data);
        if (p) return false;
        p = new BTNode<T>(data, NULL, NULL, current);
        return true;
    }

    bool Remove(const T& data)
    {
        return Remove(Find(data));
    }

private:
    bool Remove(BTNode<T>*& p)
    {
        if (!p) return false;
        BTNode<T>* t = p;
        if (!p->left || !p->right)   // at most one child
        {
            if (!p->left) p = p->right; else p = p->left;
            if (p) p->parent = current;
            delete t;
            return true;
        }
        // two children: copy the inorder successor's value, then remove it
        t = p->right; while (t->left) t = t->left;
        p->data = t->data; current = t->parent;
        return Remove(current->left == t ? current->left : current->right);
    }
    // current is set by Find (and by the two-children branch of Remove) to the
    // parent of the node the returned reference designates; it is the member
    // introduced with First()/Next() above, which must be accessible here.
    // (Modern compilers also require this->root / this->current for inherited
    // names inside a template; the VC6-era original did not.)
};

The code above is light on comments; what deserves explanation is how well-factored an implementation of a nonlinear linked structure can be. Insert and Remove are both built on Find, so Find is made to do the maximum possible work for those two operations.

1. For Insert, the pointer whose content must be modified when lookup fails is clearly an internal pointer (one stored inside a node, unlike root and current, which point at nodes from outside). So Find must return a reference to an internal pointer. However, once a C++ reference is bound to an object it cannot be reseated, so internally Find walks with a double pointer. Insert also needs to set the parent field of the newly inserted node, so Find additionally records, in current, a pointer to the node that holds the returned link; Insert uses it directly. In fact, the reference Find returns is always either current->left or current->right (or root itself). With all this done in Find, Insert becomes trivial.

2. For Remove, the pointer whose content must be modified when lookup succeeds is again an internal pointer. Thanks to Find, getting a reference to it is easy: BTNode<T>*& p = Find(data);

If at least one of p->left and p->right is empty: when p->left == NULL, relink the right subtree with p = p->right; conversely, relink the left subtree with p = p->left. Note that the case where both subtrees are empty is already covered by these two operations (when p->left == NULL we relink the right child, which is then also NULL), so it needs no separate branch. If the relinked p is not NULL, its parent field must be fixed up: p->parent = current.

If neither p->left nor p->right is empty, the situation can be converted into the one-empty case. Consider the inorder sequence [1, 2, 3, 4, 5] and suppose node 3 has both a left and a right subtree. Then its predecessor 2 must lack a right subtree, and its successor 4 must lack a left subtree. [Note 1] Deleting node 3 is therefore equivalent to copying 4 into it, giving [1, 2, 3(4), 4, 5], and then deleting the original node 4. This reduces deletion to the case where at least one of p->left and p->right is NULL. Because a C++ reference cannot be rebound to another object, this step is done with recursion, but only a single level of it. Using double pointers throughout would avoid even that, at the cost of code littered with asterisks; that is the main reason the recursion is not eliminated. [Note 1] If 3 has both subtrees, then 2 must lie in 3's left subtree and 4 in 3's right subtree; if 2 had a right subtree, some node would fall between 2 and 3 in inorder, and if 4 had a left subtree, some node would fall between 3 and 4.

[Gossip] The way Remove handles the case where neither p->left nor p->right is empty I learned from Yan Weimin's courseware, where it clicked for me at once. I really don't know why her own "Data Structures (C language version)" makes this spot so hard; I never understood it there.

Recursive traversal versus non-recursive traversal: understanding recursion is always said to be difficult, and because convincing examples are hard to produce, one ends up just declaring that "recursion is a way of thinking." But as long as that idea takes hold, my effort has not been wasted. Now that the binary search tree is finished, there is finally an example that can illustrate the point. With the code given earlier, you should be able to build and run the following program.

#include <iostream>
using namespace std;
#include <cstdlib>
#include <ctime>
#include <algorithm>
#include "bstree.h"
#include "timer.h"

#define random(num) (rand() % (num))
#define randomize() srand((unsigned)time(NULL))
#define NODENUM 200000   // node number

int data[NODENUM];

void Zero(int& t) { t = 0; }

int main()
{
    BSTree<int> a;
    Timer t;
    randomize();
    int i;
    for (i = 0; i < NODENUM; i++) data[i] = i;
    for (i = 0; i < NODENUM; i++) swap(data[i], data[random(NODENUM)]);   // random swap

    t.Start(); for (i = 0; i < NODENUM; i++) a.Insert(data[i]);
    cout << "Insert time: " << t.GetTime() << "\tnode number: " << NODENUM << endl;

    // Get() is a trivial accessor returning the current pointer (not shown earlier).
    t.Start(); for (a.First(); a.Get() != NULL; a.Next()) a.Get()->data = 0;
    cout << "Non-stack time: " << t.GetTime() << endl;

    t.Start(); a.LevelOrder(Zero); cout << "LevelOrder time: " << t.GetTime() << endl;
    t.Start(); a.PreOrder(Zero); cout << "PreOrder time: " << t.GetTime() << endl;
    t.Start(); a.InOrder(Zero); cout << "InOrder time: " << t.GetTime() << endl;
    t.Start(); a.PostOrder(Zero); cout << "PostOrder time: " << t.GetTime() << endl;
    return 0;
}

The following is the content of Timer.h

#ifndef TIMER_H
#define TIMER_H
#include <windows.h>

class Timer
{
public:
    Timer() { QueryPerformanceFrequency(&frequency); }
    inline void Start() { QueryPerformanceCounter(&timerB); }
    inline double GetTime()   // elapsed milliseconds since Start()
    {
        QueryPerformanceCounter(&timerE);
        return (timerE.QuadPart - timerB.QuadPart) / (double)frequency.QuadPart * 1000.0;
    }
private:
    LARGE_INTEGER timerB, timerE, frequency;
};

#endif

In the program above, the level-order traversal uses a queue, which should be representative of the stack-simulation case. The results of running on my C500 machine are:

Insert time: 868.818    node number: 200000
Non-stack time: 130.811
LevelOrder time: 148.438
PreOrder time: 125.47
InOrder time: 129.125
PostOrder time: 130.914

The above comes from the VC6 release build; the time unit is ms, not the crash some might suspect, ^_^. As you can see, recursive traversal is actually not slow; on the contrary, it is faster. The debug build gives:

Insert time: 1355.69    node number: 200000
Non-stack time: 207.086
LevelOrder time: 183.287
InOrder time: 179.835
PostOrder time: 190.674

Recursive traversal is the fastest: that is probably the most direct conclusion to draw from these results. Some people have heard somewhere that "recursion is slow; to improve speed you should eliminate it with a stack," offering the computation of Fibonacci numbers as evidence. Unfortunately, the non-recursive Fibonacci algorithm is a loop, not a stack simulation; if such a person really did simulate the recursion with a stack, he would find the stack version slower still. Let us see why. A recursive implementation pushes arguments, calls itself, and returns layer by layer, a series of actions on the hardware stack using instructions like PUSH, POP, CALL, and RET. Simulating the recursion with an ADT stack reproduces the function of those instructions, but each simulated "instruction" costs more than one real instruction. Everyone understands that an emulation runs worse than the real thing, so why would this case be the exception? Of course, if you put an istream file1("input.txt") inside the visit function of the recursive version, put nothing comparable in the stack-simulation loop, and then announce that the stack simulation is faster, there is nothing I can say. I have indeed seen someone (http://www.9cbs.net/develop/read_article.asp?id=18342) put a database connection inside the visit function and conclude that recursion is slow. When converting a recursive procedure to a non-recursive one does improve speed, that is only because the recursion was doing useless work. Replacing recursion with a loop removes pointless pushing and popping; rewriting the recursive Fibonacci as an iterative loop gives an enormous speedup because it eliminates the repeated computation, not because it avoids the stack.
If a recursive procedure really must be rewritten with a stack, then after a faithful simulation its speed will not improve; it will only drop. If it does improve, that only proves your recursive function had problems, such as lots of repeated work (opening and closing files, connecting and disconnecting databases), all of which could have been hoisted outside. Recursion itself is concise and efficient; it is people who use it badly. Seen this way, studying how to eliminate recursion with a stack looks useless. In fact, stack simulation does retain one genuine, if minor, use, illustrated below: saving stack space. Change the // node number line of the program above so NODENUM is 15000 (if you don't, don't come complaining to me), comment out the // random swap line so the tree degenerates into a chain, run the debug version, wait patiently for about 30 seconds, and the program dies. The final output looks like this:

Insert time: 27555.5    node number: 15000
Non-stack time: 16.858
LevelOrder time: 251.036

This can only be a stack overflow. You can see that level-order traversal still works (so a push/pop stack simulation would also work), but the recursive traversals cannot. The reason is that although total memory is plentiful, the program's call stack is very small, and when the recursion goes too deep the stack overflows. So if you know in advance that the recursion may go too deep, consider eliminating it with an explicit stack. But if something that genuinely must be recursive overflows the stack, then either your algorithm has a problem or the program is simply not suited to running on a PC; a program that dies like that, who would dare use it? So stack-simulated recursion is meaningful, just not very, because the situation rarely arises.

When reprinting, please credit the original source: https://www.9cbs.com/read-77420.html
