More Effective C++: Item 16


Efficiency

I suspect someone has been secretly coaching C++ software developers; how else to explain how fluently the word "efficiency" rolls off so many programmers' tongues? (Scott Meyers's humor really is hard to translate. — Translator's note)

In fact, efficiency is no joking matter. Programs that are too big or too slow fail to win acceptance, no matter how compelling their merits. And that is as it should be. Software exists to help us work better: a program that runs faster is better than one that runs slower; a program that needs only 16MB of memory is better than one that needs 32MB; a program that occupies 50MB of disk space is better than one that occupies 100MB. Moreover, although some programs genuinely take longer and consume more resources because they perform more ambitious computations, for many programs the only explanation is bad design and sloppy programming.

Before turning to C++ for help with efficiency, you must recognize that C++ itself has nothing to do with any performance problems you may encounter. If you want to write an efficient C++ program, you must first write an efficient program. Too many developers overlook this simple truth. Yes, loops can be unrolled by hand and multiplications can be replaced by shift operations, but if the higher-level algorithms you employ are inherently inefficient, such fine-tuning accomplishes nothing. Do you use quadratic algorithms when linear ones are available? Do you compute the same value over and over again? If so, you can compare your program to a second-rate sightseeing spot: worth a visit only if you happen to have extra time.
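To make the algorithmic point concrete, here is a small sketch (my own illustration, not from the book; the function names are hypothetical) contrasting a quadratic duplicate check with a linear-time one. No amount of loop unrolling will rescue the first function from its O(n²) growth, while the second does the same job in expected O(n):

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Quadratic: compares every pair of elements.
bool hasDuplicateQuadratic(const std::vector<int>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// Expected linear: one pass, remembering what has been seen in a hash set.
bool hasDuplicateLinear(const std::vector<int>& v)
{
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second)  // insert fails => x was already present
            return true;
    return false;
}
```

On a vector of a million elements the difference between the two is not a tuning matter; it is the difference between microseconds and minutes.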

This chapter attacks the efficiency problem from two angles. The first is language-independent, focusing on things you can do in any programming language. C++ is a particularly attractive vehicle for these techniques, because its strong support for encapsulation makes it possible to replace inefficient class implementations with better algorithms and data structures while leaving the interface unchanged.
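The encapsulation point above can be sketched as follows (a hypothetical class of my own, not an example from the book). Clients see only the public interface, so the private data structure can be upgraded without touching any calling code:

```cpp
#include <map>
#include <string>

// Hypothetical directory class. Its public interface stays fixed even
// when the private implementation is replaced.
class PhoneDirectory {
public:
    void add(const std::string& name, const std::string& number)
    { entries_[name] = number; }

    // Returns the number, or an empty string if the name is unknown.
    std::string lookup(const std::string& name) const
    {
        auto it = entries_.find(name);
        return it == entries_.end() ? std::string() : it->second;
    }

private:
    // An early version might have kept a std::vector of pairs and
    // searched it linearly. Switching to a std::map (O(log n) lookup)
    // changes only this private section; callers are unaffected.
    std::map<std::string, std::string> entries_;
};
```

This is exactly the freedom encapsulation buys: the inefficient implementation can be swapped for a better one behind an unchanged interface.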

The second angle focuses on C++ itself. High-performance algorithms and data structures are all well and good, but sloppy implementation in actual code can reduce efficiency considerably. The most potentially damaging error is one that is both easy to commit and hard to recognize: constructing and destroying too many objects. Superfluous object constructions and destructions act like a hemorrhage on your program's performance; with each construction and destruction of an unneeded object, precious clock cycles bleed away. This problem is so common in C++ programs that I devote four Items to explaining where these objects come from and how to eliminate them without compromising the correctness of your code.
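One classic source of such unneeded objects is passing class objects by value. The sketch below (my own illustration, with a hypothetical `Widget` type instrumented to count its copies) shows how a single signature change eliminates a copy construction and destruction on every call:

```cpp
#include <cstddef>
#include <string>

// Instrumented type: counts how many times its copy constructor runs.
struct Widget {
    static int copies;
    std::string payload;
    explicit Widget(const std::string& s) : payload(s) {}
    Widget(const Widget& other) : payload(other.payload) { ++copies; }
};
int Widget::copies = 0;

// Pass by value: every call copy-constructs (and later destroys) a Widget.
std::size_t lengthByValue(Widget w) { return w.payload.size(); }

// Pass by reference-to-const: no copy is made at all.
std::size_t lengthByRef(const Widget& w) { return w.payload.size(); }
```

Calling `lengthByValue` bumps `Widget::copies` once per call; `lengthByRef` never touches it. Multiplied across a hot path, that is the hemorrhage the paragraph above describes.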

Creating too many objects is not the only way programs grow big and slow. Other factors affect performance as well, including the choice of libraries and the implementations of language features, and I address these in the Items that follow too.

After studying the material in this chapter, you will be familiar with several principles that can improve the performance of virtually any program you write. You will know exactly how to prevent unneeded objects from creeping into your software, and you will have a keener sense of how your compilers generate executable code.

As the saying goes, forewarned is forearmed, so think of what follows as preparation before the battle.

Item 16: Remember the 80-20 rule

The 80-20 rule states that about 20% of a program's code uses 80% of its resources: roughly 20% of the code accounts for 80% of the runtime, 20% of the code uses 80% of the memory, 20% of the code performs 80% of the disk accesses, and 80% of the maintenance effort is devoted to about 20% of the code. The rule has been confirmed again and again on countless machines, operating systems, and applications. The 80-20 rule is more than a catchy phrase; it is a guideline about system performance with broad applicability and a solid experimental foundation. When thinking about it, don't get hung up on the specific numbers: some people prefer the stricter 90-10 rule, and there is experimental evidence to support that, too. Whatever the precise figures, the fundamental point is the same: the overall performance of a piece of software is determined by a small portion of its code.

For programmers striving to maximize software performance, the 80-20 rule both simplifies and complicates the job. On one hand, it says you can write code of merely average performance most of the time, because 80% of the time its efficiency has no effect on the overall performance of the system; that relieves some of the pressure. On the other hand, the rule also implies that when your software does have a performance problem, you face a difficult job, because you must not only locate the small pockets of code causing the problem, you must also find ways to boost their performance. Of these tasks, the hardest is usually finding the system bottlenecks. There are basically two different approaches: the way most people do it, and the right way.

The way most people locate bottlenecks is to guess. Guided by experience, intuition, tarot cards, Ouija boards, rumor, or sillier things still, programmer after programmer solemnly proclaims that a program's performance problem is due to network delays, improper memory allocation, a compiler that doesn't optimize aggressively enough, or some bone-headed refusal to permit assembly language in a critical loop. Such pronouncements are invariably delivered with a contemptuous sneer, and usually both the sneerers and their prophecies are dead wrong.

Most programmers have poor intuition about the performance characteristics of their programs, because those characteristics tend to defy intuition. As a result, enormous effort is poured into improving the efficiency of parts of a program that have no significant effect on its overall behavior. For example, fancy algorithms and data structures that minimize computation may be added to a program, but they accomplish nothing if the program's performance is limited mainly by I/O (it is I/O-bound). Replacing the I/O library that ships with the compiler with a faster one (see Item 23) accomplishes little if the program's bottleneck is mainly the CPU (it is CPU-bound).

Given that, what should you do when faced with a program that runs too slowly or uses too much memory? The 80-20 rule implies that improving randomly chosen parts of the program is unlikely to help much. The fact that performance characteristics tend to defy intuition means that trying to guess at the bottlenecks is unlikely to work much better than improving random parts of the program. What, then, does work?

What works is to empirically identify the 20% of your program that is causing the heartache, and the way to identify that annoying 20% is to use a profiler. Not just any profiler will do, however: you want one that directly measures the resources you care about. For example, if your program is too slow, you want a profiler that tells you how much time is spent in each part of the program. Then you can focus on the places where a significant local improvement will also yield a significant overall improvement.
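When no full profiler is at hand, even a crude timing harness can supply the kind of direct measurement described above. The sketch below (my own, not from the book) times an arbitrary callable with `std::chrono`; it is no substitute for a real profiler, but it measures the resource you actually care about, namely elapsed time, rather than statement counts:

```cpp
#include <chrono>

// Return the wall-clock time, in microseconds, taken to run f().
// A crude stand-in for a profiler: time the candidate hot spots directly.
template <typename F>
long long microsecondsToRun(F f)
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start)
        .count();
}
```

Wrapping suspect sections in calls like `microsecondsToRun([&]{ doWork(); })` quickly shows which suspicions survive contact with measurement, which is the whole point of the Item.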

A profiler that tells you only how many times each statement was executed or each function was called is a tool of limited utility. From the standpoint of improving performance, you do not care how many times a statement is executed or a function is called. After all, users and library clients rarely complain that too many statements are executed or too many functions are called. If your software is fast enough, nobody cares how many statements are executed, and if it is too slow, nobody will care how few. What people care about is waiting, and if your program makes them wait, they will hate you for it. Still, knowing how often statements are executed or functions are called can sometimes help you gain insight into your software's behavior. If, for example, you expect to create about 100 objects of a certain type, discovering that you are calling the class's constructor far more often than that is undeniably valuable information. Furthermore, statement and function call counts can indirectly help you understand aspects of behavior a profiler cannot measure directly. For example, even if you cannot directly measure dynamic memory usage, it is helpful to know how often the memory allocation and deallocation functions (i.e., operators new, new[], delete, and delete[] — see Item 8) are called.
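One cheap way to obtain the allocation counts mentioned above is to overload the class-specific `operator new` for the type you are curious about. The sketch below uses a hypothetical class of my own, `Tracked`, and is an illustration of the idea rather than anything prescribed by the book:

```cpp
#include <cstddef>
#include <new>

// Hypothetical class that counts its own heap allocations by providing
// class-specific operator new/delete that forward to the global versions.
class Tracked {
public:
    static int allocations;

    static void* operator new(std::size_t size)
    {
        ++allocations;                  // record the allocation, then
        return ::operator new(size);    // defer to the global allocator
    }
    static void operator delete(void* p)
    {
        ::operator delete(p);
    }

private:
    int value_ = 0;
};
int Tracked::allocations = 0;
```

If `Tracked::allocations` turns out to be in the millions when you expected hundreds, you have found something a time-only profile might never have shown you.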

Of course, even the best profiler is hostage to the data it is fed. If you profile your program with unrepresentative input data, you cannot complain when the profiler leads you to fine-tune the 80% of your program that has no effect on its usual performance. Remember that a profiler can only tell you how a program behaved on a particular run (or a few runs), so a profile produced from unrepresentative input data is itself unrepresentative. Worse, it may well lead you to optimize your software for uncommon uses while pessimizing it (i.e., decreasing its efficiency) for the uses that are common.

The best way to guard against such misleading results is to profile your software with as many data sets as possible. In addition, you must make sure each data set is representative of how the software is used by its clients (or at least its most important clients). Acquiring representative data is usually easy, because many clients are happy to let you use their data; after all, you will then be optimizing the software for their needs.
