On the surface, countless factors seem to affect an algorithm's speed, but if we dig to the root, some essential structure emerges from this seemingly messy picture. First consider a question: a person has to travel from place A to place B. What can he do to shorten the trip? The first answer, of course, is a vehicle: riding instead of walking obviously cannot give the same result, and this corresponds to the first part below, function storage. The second is not to go the wrong way: if B lies due east and he walks toward the southeast, he has drifted south, and he must later walk toward the northeast to cancel that drift; this corresponds to the part below on calculation redundancy. Finally, let his speed grow with experience: after crossing the first mountain, experience makes him much faster on the next one, and this corresponds to the second part below, process storage.
1 Function Storage The operations a computer can carry out at its highest speed we will call basic operations: simple addition and subtraction, logical operations, memory reads and writes, IF...THEN statements, and so on. Obviously, the richer the set of basic operations, the easier it is to make an algorithm fast. A Turing machine's basic operations are very few, and even a simple addition takes many steps, so nobody would try to build a practical computer on a Turing machine's operations ^_^. When a function's set of possible inputs is limited, we can build a hash table that maps each input directly to a memory address whose stored content is the output for that input; this yields an algorithm whose running time is close to a single basic operation. It is in effect storing the computed function directly in memory, giving the computer one more basic operation, and the resulting speedup can be astonishing (although space is sacrificed). We already apply this method routinely: what is the multiplication table learned in primary school, if not a stored function? With the single-digit products stored, a more complex multiplication can be broken into these stored pieces and the result obtained quickly; nobody would compute a product by repeated addition from scratch. In summary, when an algorithm calls a function frequently, and that function's domain is finite and does not contain many elements, we can store the function in memory and turn it into a basic operation, which improves the efficiency of the algorithm qualitatively.
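The lookup-table idea above can be sketched in Python. This is only an illustration: `slow_digit_cost` is a made-up stand-in for any expensive function whose domain is small and finite.

```python
# Function storage: precompute an expensive function over its small, finite
# domain, so every later call costs a single table lookup.

def slow_digit_cost(n):
    """A made-up expensive computation defined only on 0..99."""
    total = 0
    for i in range(10000):          # stands in for real work
        total += (n * i) % 97
    return total

# "Store the function": evaluate it once for every input in its domain.
TABLE = {n: slow_digit_cost(n) for n in range(100)}

def fast_digit_cost(n):
    # One memory read replaces the whole computation.
    return TABLE[n]
```

Each call to `fast_digit_cost` is now essentially one read, at the cost of the space the table occupies, exactly the trade mentioned above.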
However, the basic operations also have their own virtue: each is easy to implement, and their functions do not overlap, so a job done with operation A generally cannot be replaced by operation B; compare "add 1 to a variable" with a goto statement. Because of this, when implementing a feature you do not have to agonize over a choice among many equally feasible basic operations. Function storage breaks that orthogonality, so if one day computers can generate algorithms by themselves, it will be essentially impossible for them to grasp the method described above, unless they rely on artificial intelligence. What they should be able to master easily is the method below:
2 Process Storage The word "process" below is close to the "procedure" of ordinary programming languages: a process is one specific call of a function, including the input values and the function being called, and the value returned by calling a process is the result of computing the function on those inputs. At any moment during a computation, accessing a variable amounts to calling a process, that is, reading a result computed in the past; calling a function, by contrast, hands data or control over to the future. In other words, process calls can only reach into the past, and function calls can only reach into the future.
The influence of a data structure on an algorithm's speed comes down to two basic operations the electronic computer provides: memory read and memory write. Related to them is every operation that stores or reads intermediate variables. Choosing a suitable data structure, or introducing a suitable intermediate variable, can improve an algorithm's efficiency dramatically; what is the essence of this? Storing a quantity means that a past process can be re-read in the future at roughly unit cost. So if the same process occurs several times in one computation, storing the result of its first run and reading it in place of the later runs is undoubtedly a gain; this is the key to problems where a variable is read many times. Storing computed results to spare later recomputation is what we will provisionally call process storage. Clearly, process storage differs from function storage: function storage is "storage" done at algorithm-design time, while process storage is "storage" done at run time; function storage gains speed by adding basic operations, while process storage merely avoids repeated computation and adds none. Process storage resembles function storage in that it, too, pays off when a function is called frequently in the algorithm and its effective domain is finite and small; of course its efficiency gain is somewhat lower. There is another elegant description of process storage: it is a transfer. We may erase the concepts of storing and reading from view entirely: storing a quantity is just a past process being called by one or more function calls in the future, and reading is just taking input from a process in the past. Seen this way, the notion of calling a process becomes broader, a concept that crosses time.
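A minimal sketch of process storage as run-time caching (memoization), using Fibonacci as a stand-in for any computation whose sub-processes repeat:

```python
# Process storage: cache the result of each concrete call ("process") the
# first time it runs, so later occurrences of the same process become a
# single memory read instead of a recomputation.

def fib(n, _cache={}):          # _cache persists across calls at run time
    if n in _cache:             # the process already ran: read its result
        return _cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _cache[n] = result          # store the process for future calls
    return result
```

Without the cache, `fib(n-1)` and `fib(n-2)` repeat the same processes exponentially many times; with it, each process runs once, which is exactly the "store the first run, read it later" idea above.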
With this it is not hard to see the essence of intermediate variables and data structures for an algorithm: the former act as bridges for calling past processes, avoiding repeated computation of the same process; the latter do the same thing in a more complex way, closely tied to pointer relationships, so we will only illustrate them with an example. Example: the input of an algorithm contains a string S, and while running, the algorithm performs many lookups in S. Given only this information, and ignoring space complexity, we know a hash table is the best choice for storing S: map every character that appears in S, through a hash function f, to a distinct memory address, and let the data at that address be the positions where the character occurs in the string. This beats both repeatedly traversing S and any structure that merely stores S unchanged. Now analyze its relationship to process storage. When the hash table is built, the entry written at address f(k) stores all the positions at which k appears in S. This is equivalent to providing the future with the result of the call: "traverse S, record every element equal to k in an array, and return that array." The stored call now takes very little time, far less than a traversal of S.
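The hash table in the example might be sketched like this; a plain Python dict stands in for the table addressed by the hash function f:

```python
# Build the table described above: each character of S maps to the list of
# all positions where it occurs, so each later query is a single lookup
# instead of a full traversal of S.

def build_index(s):
    index = {}
    for position, ch in enumerate(s):
        index.setdefault(ch, []).append(position)
    return index

positions_of = build_index("process storage")
# positions_of['s'] now answers "where does 's' occur?" without re-scanning S.
```

Building the index performs every possible "find all occurrences of k" process once, up front; that is the process storage.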
The relationship between data structures and process storage may still not be clear, so let us analyze further. A data structure is the organizational form that information takes inside the computer. That definition is actually quite vague: what is information? What is an organizational form? Why should an organizational form affect an algorithm's speed? Information can be represented as a function. A string, for example, can be viewed with position as the argument and the character at that position as the value, i.e. f: position -> character. Information, then, is a relationship between quantities. But when it is actually laid out in some concrete form, usually only one direction of the relationship is accessible. After the string in the example above is stored in an array, the array contains all the information of the string, but it provides only one-way access: you can get the character from the position, but if you want the address corresponding to a given character, then, oh, sorry, this array does not provide that service, please write your own traversal code. And traversal naturally means inefficiency. This is tied to an innate deficiency of computer memory: it can find the content at a given address, but cannot find the address holding a given content. To improve an algorithm's speed, such situations should of course be avoided during its execution. For example, if the algorithm needs to obtain positions from characters, the information must not be stored in the form position -> character; it must be reorganized as character -> position, which is exactly the hash table. If both directions are needed, one can construct the form position <-> character, which is more complicated still. So the organizational form of information in memory is, in essence, the pattern of accessibility among the quantities that the information relates.
One more thing must not be overlooked: the structure in which information originally enters the computer is not necessarily the structure best suited to the algorithm, so converting the input structure into the structure the algorithm needs is itself an act of process storage.
3 Calculation Redundancy Below we consider only numerical computation. Look at this piece of code:
a = a + 1
a = a - 1
It would be hard to find more pointless code anywhere in the world. Yet it exhibits, in pure form, another factor influencing running time: what we will call calculation redundancy. Look at this function:
f(x) = 5x - 3x
We can write two algorithms to compute it: one (algorithm A) follows the expression literally, and the other (algorithm B) computes the value 2x directly. If multiplication must be done by repeated addition, the two running times differ markedly, and the factor responsible for the difference is calculation redundancy. Algorithm A, keeping its result in a variable y, first loops to add x five times and then loops to subtract x three times, so y rises and then falls back; algorithm B is a single rising phase. B can be seen as the result of cancelling, in A, pairs consisting of one "add x" and one "subtract x". Evidently, calculation redundancy is the unnecessary computation caused by pairs of operations in an algorithm whose effects cancel each other. But does the mere presence of opposite operations mean an algorithm can be simplified? Clearly not; consider this function:
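The two algorithms might be sketched as follows, with multiplication restricted to repeated addition as assumed above:

```python
# Algorithm A: follow f(x) = 5x - 3x literally -- add x five times, then
# subtract x three times. Three of the additions and the three subtractions
# cancel each other: that is the calculation redundancy.
def f_a(x):
    y = 0
    for _ in range(5):
        y += x
    for _ in range(3):
        y -= x
    return y

# Algorithm B: cancel the redundancy first (5x - 3x = 2x), then compute.
def f_b(x):
    y = 0
    for _ in range(2):
        y += x
    return y
```

Both return the same value, but A performs eight loop steps to B's two; the six cancelled steps are pure redundancy.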
g(x, y) = x - y
We can immediately write down an algorithm that computes it directly, and opposite operations inevitably appear in it, yet the expression is already in its simplest form; we cannot cancel anything. The real difference between the two cases is this: in the former, the numbers of times the opposite operations (adding x and subtracting x) are performed (5 times and 3 times) are not determined by the input; they are determined by the algorithm alone, an attribute the algorithm itself possesses, so we can simplify the algorithm according to them. In the latter, the numbers of times the opposite operations (adding 1 and subtracting 1) are performed (x times and y times) are not determined by the algorithm itself; they vary with the values of the input, and thus cannot be cancelled. We can summarize as follows: if an algorithm contains a pair of opposite operations whose counts are not determined by the input, then the algorithm can be simplified by cancellation. Although derived from a numerical example, this idea is not only applicable to numerical computation; it should be valid for any data type.
4 Information Use First look at this question: where does the high efficiency of binary search come from? Clearly none of the factors above is at work here. It is efficient because the input sequence is ordered. From the standpoint of information, when we write such an algorithm we already know something about the input beyond the bare data, namely: the sequence is sorted, from small to large (or the reverse). This is the question of how fully the programmer uses the available information. Before designing a program, one must grasp a body of information about it, chiefly the description of its function and the information about its input data.
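The binary search discussed above, sketched so as to show exactly where the ordering information enters:

```python
# Binary search: the only reason half the remaining range may be discarded
# at each step is the extra piece of input information -- the sequence is
# sorted in ascending order.

def binary_search(sorted_seq, target):
    lo, hi = 0, len(sorted_seq) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_seq[mid] == target:
            return mid
        if sorted_seq[mid] < target:    # sortedness justifies this jump
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                           # target not present
```

On an unsorted sequence the comparisons at `mid` would tell us nothing about the rest of the range, and the algorithm would simply be wrong; the speedup is bought entirely with the knowledge of order.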
The function description determines what the program does (the function it implements), while the data-related information describes the characteristics of the program's input (the function's domain). Similar examples are common in practice: a search engine, for instance, may keep statistics on how often each keyword is searched and adjust its algorithm to give priority to the high-frequency keywords.
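The search-statistics example might be sketched like this; the keywords and their counts are made-up illustration data:

```python
# Using input-distribution information: if query logs show that some
# keywords are searched far more often than others, order the candidates
# so the common case is met first.

observed_frequency = {"news": 900, "mail": 500, "maps": 80, "labs": 3}

# Sort the candidates by descending observed frequency once, up front...
by_frequency = sorted(observed_frequency, key=observed_frequency.get,
                      reverse=True)

def lookup(keyword):
    # ...so a linear scan reaches the likeliest keyword soonest.
    for i, candidate in enumerate(by_frequency):
        if candidate == keyword:
            return i            # fewer comparisons for frequent queries
    return -1
```

The algorithm itself is unchanged; only knowledge about the input's distribution, information beyond the bare data, was folded into the layout.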