Item 18: Amortize the Cost of Expected Computations
In Item 17, I extolled the virtues of laziness, of putting things off as long as possible, and I explained how laziness can improve the runtime efficiency of programs. In this Item, I adopt a different attitude. There will be no laziness here. Instead, I encourage you to have your programs do more than they are asked to do, and in that way to improve their performance. The core of this Item is over-eager evaluation: doing things before you are asked to do them. For example, the template class below can be used to represent collections of large amounts of numeric data:
template<class NumericType>
class DataCollection {
public:
  NumericType min() const;
  NumericType max() const;
  NumericType avg() const;
  ...
};
Suppose the min, max, and avg functions return the minimum, maximum, and average values of the collection. There are three ways to implement these functions. Using eager evaluation, we examine all the values in the collection whenever min, max, or avg is called and return the appropriate value. Using lazy evaluation, we have each function return a data structure that can be used to determine the exact value whenever that value is actually needed. Using over-eager evaluation, we keep a running track of the collection's minimum, maximum, and average values, so when min, max, or avg is called, we can return the correct value immediately, with no computation required. If min, max, and avg are called frequently, the cost of keeping track of the collection's minimum, maximum, and average values is amortized over all the calls to those functions, and the cost each call shares is smaller than with either eager or lazy evaluation.
The idea behind over-eager evaluation is this: if you expect a computation to be requested frequently, you can lower the average cost per request by designing your data structures to handle those requests especially efficiently.
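As an illustration, here is a minimal sketch of what an over-eager DataCollection might look like. The text shows only the interface; the insert function, the running-total data members, and the empty-collection check below are my own assumptions, added so the example is self-contained:

```cpp
#include <algorithm>
#include <limits>
#include <stdexcept>
#include <vector>

// Sketch of an over-eager DataCollection: the running minimum, maximum,
// and sum are updated on every insertion, so min, max, and avg need no
// computation over the stored values when they are called.
template<class NumericType>
class DataCollection {
public:
  void insert(NumericType value)
  {
    data.push_back(value);
    minSoFar = std::min(minSoFar, value);   // keep running minimum
    maxSoFar = std::max(maxSoFar, value);   // keep running maximum
    sum += value;                           // keep running sum for avg
  }

  NumericType min() const { requireNonEmpty(); return minSoFar; }
  NumericType max() const { requireNonEmpty(); return maxSoFar; }
  NumericType avg() const
  {
    requireNonEmpty();
    return sum / static_cast<NumericType>(data.size());
  }

private:
  void requireNonEmpty() const
  {
    if (data.empty()) throw std::logic_error("empty collection");
  }

  std::vector<NumericType> data;
  NumericType minSoFar = std::numeric_limits<NumericType>::max();
  NumericType maxSoFar = std::numeric_limits<NumericType>::lowest();
  NumericType sum = NumericType();
};
```

Each insertion pays a small constant extra cost; in exchange, every call to min, max, or avg is constant-time, which is exactly the amortization this Item describes.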
One of the simplest ways to apply over-eager evaluation is to cache values that have already been computed and are likely to be needed again. For example, suppose you write a program to provide information about employees, and one frequently requested piece of information is an employee's cubicle number. Suppose further that the employee information is stored in a database, but that for most applications cubicle numbers are irrelevant, so the database is not optimized to find them. To avoid having your program overburden the database with cubicle-number queries, you can write a findCubicleNumber function that caches the data it looks up. Subsequent requests for cubicle numbers that have already been retrieved can then be satisfied from the cache instead of by querying the database.
Here is one way to implement findCubicleNumber; it uses a map object from the Standard Template Library (the STL - see Item 35) as the local cache:
int findCubicleNumber(const string& employeeName)
{
  // define a static map to hold (employee name, cubicle number)
  // pairs. This map is the local cache.
  typedef map<string, int> CubicleMap;
  static CubicleMap cubes;

  // try to find an entry for employeeName in the cache;
  // the STL iterator "it" will then point to the found
  // entry, if there is one (see Item 35 for details)
  CubicleMap::iterator it = cubes.find(employeeName);

  // "it"'s value will be cubes.end() if no entry was
  // found (this is standard STL behavior). If this is
  // the case, consult the database for the cubicle
  // number, then add it to the cache
  if (it == cubes.end()) {
    int cubicle =
      the result of looking up employeeName's cubicle
      number in the database;

    cubes[employeeName] = cubicle;   // add the pair
                                     // (employeeName, cubicle)
                                     // to the cache
    return cubicle;
  }
  else {
    // "it" points to the correct cache entry, which is a
    // (employee name, cubicle number) pair. We want only
    // the second component of this pair, and the member
    // "second" will give it to us
    return (*it).second;
  }
}
Don't get bogged down in the details of the STL code (which will be clearer after you've read Item 35). Instead, focus on the strategy embodied by this function: using a local cache to replace comparatively expensive database queries with comparatively inexpensive lookups in an in-memory data structure. Provided cubicle numbers are requested more than once, the use of a cache in findCubicleNumber reduces the average cost of returning a cubicle number.
(One detail of the code above needs explanation: the final statement returns (*it).second rather than the more conventional it->second. Why? The answer has to do with conforming to the conventions of the STL. In brief, the iterator it is an object, not a pointer, so there is no guarantee that "->" can be applied to it. The STL does require that "." and "*" be valid for iterators, however, so (*it).second, though clumsier, is guaranteed to work.)
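To see the caching pattern run, here is a self-contained variant of the function above. The database lookup is replaced by a stub of my own invention (lookupInDatabaseStub, which just returns a dummy number), and a counter is added purely to demonstrate that repeated queries hit the cache rather than the "database":

```cpp
#include <map>
#include <string>

int databaseLookups = 0;   // demonstration counter, not part of the pattern

// Stand-in for the expensive database query in the text; the dummy
// cubicle number it returns is purely illustrative.
int lookupInDatabaseStub(const std::string& employeeName)
{
  ++databaseLookups;
  return static_cast<int>(employeeName.size());
}

int findCubicleNumber(const std::string& employeeName)
{
  typedef std::map<std::string, int> CubicleMap;
  static CubicleMap cubes;                    // the local cache

  CubicleMap::iterator it = cubes.find(employeeName);
  if (it == cubes.end()) {
    int cubicle = lookupInDatabaseStub(employeeName);
    cubes[employeeName] = cubicle;            // add (name, number) pair
    return cubicle;                           // to the cache
  }
  return (*it).second;                        // cache hit: no database access
}
```

Calling findCubicleNumber twice with the same name performs only one "database" lookup; the second call is served entirely from the static map.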
Caching is one way to amortize the cost of anticipated computations. Prefetching is another. You can think of prefetching as the computational equivalent of a discount for buying in bulk. For example, when a disk controller reads data from disk, it reads an entire block or sector, even if the program asked for only a small amount of data, because it is faster to read a big chunk once than to read two or three small chunks at different times. Furthermore, experience has shown that if data in one place is requested, it is quite common for data near it to be requested too. This is the phenomenon of locality of reference, and because of it, systems designers are justified in using disk caches, in using memory caches for instructions and data, and in using instruction prefetch.
You say you don't worry about low-level things like disk controllers or CPU caches? No problem. Prefetching pays off at higher levels, too. Imagine, for example, that you have a template for dynamic arrays, i.e., arrays that start with a size of one and automatically extend themselves so that all non-negative indexes are valid:

template<class T>                // template for dynamic
class DynArray { ... };          // array-of-T classes

DynArray<double> a;              // at this point, only a[0]
                                 // is a legal array element

a[22] = 3.5;                     // a extends itself
                                 // automatically: now
                                 // indexes 0-22 are legal

a[32] = 0;                       // a extends itself again;
                                 // now a[0]-a[32] are legal
How does a DynArray object extend itself when it needs to? A straightforward approach is to allocate only as much additional memory as is needed, something like this:
template<class T>
T& DynArray<T>::operator[](int index)
{
  if (index < 0) {
    throw an exception;             // negative indexes are
  }                                 // still invalid

  if (index > the current maximum index value) {
    call new to allocate enough additional memory so that
    index is valid;
  }

  return the indexth element of the array;
}
This approach calls new each time the length of the array needs to be increased, but each call to new invokes operator new (see Item 8), and calls to operator new (and operator delete) are usually expensive. That's because they typically result in calls to the underlying operating system, and system calls are generally slower than in-process function calls. As a result, we should make as few system calls as possible.
An over-eager evaluation strategy employs this reasoning: if we have to increase the size of the array now to accommodate index i, the locality-of-reference principle suggests we'll probably have to increase it again later to accommodate some other index a bit larger than i. To avoid the cost of that second (anticipated) memory allocation, we increase the size of the DynArray now by more than is needed to make i valid, and we hope that future extensions will fall within the range we have provided. For example, we could write DynArray::operator[] like this:
template<class T>
T& DynArray<T>::operator[](int index)
{
  if (index < 0) throw an exception;

  if (index > the current maximum index value) {
    int diff = index - the current maximum index value;

    call new to allocate enough additional memory so that
    index+diff is valid;
  }

  return the indexth element of the array;
}
This function allocates twice as much memory as necessary each time the array must be extended. If we look again at the scenario we saw earlier, we note that the DynArray must allocate additional memory only once, even though its logical size is extended twice:
DynArray<double> a;              // only a[0] is legal

a[22] = 3.5;                     // calls new to expand
                                 // a's storage through
                                 // index 44; a's logical
                                 // size becomes 23

a[32] = 0;                       // a's logical size is
                                 // changed to allow a[32],
                                 // but new isn't called
If a needs to be extended again, that extension will be inexpensive, provided the new maximum index is no greater than 44.
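A minimal runnable sketch of this over-eager growth policy follows. The std::vector backing store, the exact growth formula (storage through index 2*index, matching the "through index 44" example above), and the allocations counter are my own assumptions, added for illustration:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Sketch of a DynArray with over-eager growth: when an index beyond the
// current storage is used, the array grows well past that index, so
// nearby future indexes are already covered and need no new allocation.
template<class T>
class DynArray {
public:
  T& operator[](int index)
  {
    if (index < 0) throw std::out_of_range("negative index");

    if (static_cast<std::size_t>(index) >= data.size()) {
      ++allocations;                               // count real expansions
      // over-eager: make indexes 0 through 2*index valid, not just index
      data.resize(2 * static_cast<std::size_t>(index) + 1);
    }
    return data[static_cast<std::size_t>(index)];
  }

  int allocations = 0;   // for demonstration only
private:
  std::vector<T> data;
};
```

With this sketch, a[22] = 3.5 grows storage through index 44, so a subsequent a[32] = 0 triggers no further allocation, mirroring the scenario in the text.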
Running through this Item is a common theme: greater speed often costs more memory. Keeping a running track of a collection's minimum, maximum, and average values requires extra space, but it saves time. Caching results requires more memory, but it reduces the time needed to regenerate a result once it has been cached. Prefetching demands a place to put the things that are prefetched, but it reduces the time needed to access them. The story is as old as computer science: you can often trade space for time. (Not always, though. Using larger objects means fewer of them fit on a virtual memory or cache page. In rare cases, making objects bigger reduces the performance of your software, because increased paging activity (see your operating system's notes on memory management), a decreased cache hit rate, or both, result. How do you find out whether you're suffering from such problems? You profile, profile, profile (see Item 16).)
The advice I offer in this Item - that you amortize the cost of anticipated computations through over-eager strategies such as caching and prefetching - does not contradict the advice on lazy evaluation in Item 17. Lazy evaluation is a technique for improving the efficiency of programs when you must support operations whose results are not always needed. Over-eager evaluation is a technique for improving efficiency when you must support operations whose results are almost always needed, or whose results are needed more than once. Both have proven to yield significant improvements in performance.