
Refactoring Patterns: Part 1

Contents:

What is refactoring? | Refactoring principles | References | About the author

Related information:

Other parts of this series

Author (shiyiying@hotmail.com), Director, Zhejiang University Lingfeng Technology Development Company, December 2001

This is the first part of the Refactoring Patterns series. It introduces the basic concepts and definitions of refactoring, and explains the principles that must be upheld while refactoring.

Introduction

Code goes bad all too easily. Code tends toward larger classes, longer methods, more switch statements, and more deeply nested conditionals. Duplicated code appears everywhere, especially code that looks similar but not identical, scattered throughout the system: conditional expressions, loop structures, collection enumerations, and so on. Information is shared among components in ways that make almost every important piece of data in the system global or duplicated. You cannot see any good design in such code (if there ever was one, it is no longer recognizable). The code is hard to understand, let alone modify. If you care about system architecture, design, or good programming, your first reaction is to refuse to work on such code. You say: "Code this bad would be easier to rewrite than to modify." But you cannot simply rewrite a system that, however ugly, already works, because you cannot guarantee that the new system will reproduce all of the original features. What's more, you do not live in a vacuum: there are investment, delivery, and competitive pressures. So you take a quick-and-dirty approach: if the system has a problem, you find the spot and patch it locally. If you need to add a feature, you find similar code in the existing system, copy it, and tweak it. As for the original code, you think: since I cannot rewrite it and it already works, leave it alone. Then the code you added becomes the next programmer's burden. The system grows ever harder to understand, ever harder to change, and ever more expensive to maintain. It becomes one big ball of mud. Nobody wants this to happen, yet strangely the same scene repeats itself in most programmers' careers, because we do not know how to stop it. The best cure, of course, is to keep it from happening in the first place.
However, stopping the code from decaying carries an extra cost. Every time you modify or add code, you have to look at the code at hand. If it smells good, you should be able to add the new feature easily; if you need a long time to understand the existing code, you will need even longer to change or extend it. So let's get on with it: let us refactor.

What is refactoring?

Everyone seems to have their own definition of refactoring, even though they are all talking about the same thing. Among the many definitions, the one from Ralph Johnson, a pioneer of the theoretical research on refactoring, clearly carries weight: refactoring is the process of reorganizing an object-oriented design by various means, with the aim of making the design more flexible and/or more reusable. You may have several reasons for doing this, of which efficiency and maintainability are probably the most important. Martin Fowler [Fowler] defines refactoring in two parts. Refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify, without changing its observable behavior. And the verb form. Refactor (verb): to restructure software by applying a series of refactorings without changing its observable behavior. Fowler's noun form says that a refactoring is a change to the internal structure of the software; the precondition of the change is that it must not alter the program's observable behavior, and its purpose is to make the software easier to understand and easier to modify. The verb form emphasizes that to refactor is an act of restructuring software, namely the application of a series of such refactorings.
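To make the noun form concrete, here is a minimal sketch of my own (the Invoice classes are hypothetical, not from Fowler's catalog) showing one refactoring, Extract Method: the internal structure changes while the observable behavior, the string that print() returns, stays identical.

```java
// Hypothetical before/after sketch of the "Extract Method" refactoring.
// Before: the tax computation is buried inside the printing code.
class InvoiceBefore {
    private final int amountCents;
    InvoiceBefore(int amountCents) { this.amountCents = amountCents; }
    String print() {
        // 10% tax computed inline, in integer cents
        return "Total: " + (amountCents + amountCents / 10);
    }
}

// After: the computation is extracted into a named, reusable method.
// Internal structure changed; observable behavior did not.
class InvoiceAfter {
    private final int amountCents;
    InvoiceAfter(int amountCents) { this.amountCents = amountCents; }
    String print() {
        return "Total: " + totalWithTaxCents();
    }
    int totalWithTaxCents() {
        return amountCents + amountCents / 10;
    }
}
```

A test asserting that both versions print the same string is what entitles you to call this change a refactoring rather than a rewrite.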

Software structure can change for many reasons, such as pretty-printing or performance optimization, but only a change made for the sake of understandability, modifiability, and maintainability is a refactoring, and such a change must preserve observable behavior. As Fowler puts it: whatever functionality the software provided before the refactoring, it provides exactly the same functionality afterwards. No user, whether an end user or another programmer, should be able to tell that anything has changed.

Refactoring principles

Two hats

Kent Beck proposed this metaphor. He said that when you develop software with refactoring, you divide your development time between two distinct activities: adding function and refactoring. When you add function, you should not change any existing code; you just add new capabilities. At this point you add new tests and make those new tests pass. When you swap hats and refactor, you must remember not to add any function; you are only restructuring the code. You do not add tests (unless you discover one that was missing), and you change a test only when a refactoring changes an interface. During development you may swap these two hats frequently. You start adding a new feature, then realize it would be much easier to add if the code were structured differently, so you take off the adding-function hat and put on the refactoring hat. After a while the structure is better, you take off the refactoring hat and put the adding-function hat back on. After adding the feature, you may find that your new code makes the program harder to understand, so you swap hats again. This hat-swapping happens constantly in daily development, but whichever hat you are wearing, remember to wear only one at a time.
Unit tests

Keeping the code's observable behavior unchanged is called the safety of refactoring. Refactoring tools use semi-formal theoretical proofs to guarantee safety, but proving from theory that a system's observable behavior is preserved is very hard; not impossible, but very hard. Tools also have their own shortcomings. First, the theoretical study of refactoring is not yet mature, and some refactorings once proven safe have recently been found unsafe in particular situations. Second, current tools do not support "informal" refactoring: if you discover a new refactoring technique, a tool cannot immediately perform that refactoring for you. Automated testing is a practical and effective way to verify safety. Although we cannot exhaustively test an entire system, if tests that passed before a refactoring fail afterwards, we know the refactoring we just performed has broken the system's observable behavior. Automated tests detect such breakage automatically, without manual intervention. The most practical tools for automated testing are the xUnit family of unit-testing frameworks, which originated with Kent Beck in the Smalltalk community and which Beck and Erich Gamma later brought to Java as JUnit. Erich Gamma made this point about testing: the less you test, the less productive you are and the less stable your code becomes; and the less productive and accurate you are, the greater the pressure you come under. An article from JavaWorld, in which two Sun developers show their enthusiasm and demonstrate extending unit tests to check distributed components such as EJBs, puts it this way: we have never seen software that was tested too much, but we have often seen software tested too little.

I wish testing were a well-understood part of the software development process, but it is often misunderstood. For each unit of code, a unit test ensures that it works, independently of other units. In object-oriented languages a unit is usually, but not always, equivalent to a class. If developers are confident that every piece of an application works correctly according to its design, they can recognize that any problem in the assembled application lies in how the pieces are combined. Unit tests tell the programmer that an application's "pieces are working as designed". I used to think I was a good programmer, that it was almost impossible for my code to be wrong. But in fact I had no evidence of this, and no confidence that my code was free of bugs, or that when I added a new feature the existing behavior would not be damaged. On the other hand, I thought thorough testing was unaffordable, that it could only stay in theory, or that only wealthy companies could do it. This view was completely overturned in 1999, when I encountered Kent Beck and Erich Gamma's JUnit testing framework. JUnit is one of the essential tools of XP. XP promotes a rule called test-first design: you write a unit test before you write a new feature, and use it to exercise the code that will implement the feature but may well be wrong. This means the test fails first, and the purpose of writing the code is to make the test pass. JUnit's simplicity, ease of use, and power made me accept the idea of unit testing almost immediately, not only because it gives me evidence that my code is correct, but more importantly because every time I modify the code I have confidence that the change does not affect existing features. Tests have become part of all my code. On this point, Kent Beck notes in Extreme Programming Explained: a program feature without an automated test simply doesn't exist.
Programmers write unit tests so that their confidence in the correct operation of the program becomes part of the program itself. Likewise, customers write functional tests so that their confidence in the program's correct operation becomes part of the program. The result is that over time the program becomes more and more trustworthy: it becomes more able to accept change, not less. The basic process of unit testing is as follows:

1. Write a test for code that does not yet exist. Compilation should fail immediately, because the classes and methods the test uses have not been implemented yet.
2. If there are compilation errors, write just enough code to make compilation pass; this code only expresses intent and is not yet implemented.
3. Run all the tests in JUnit; they should report that the new test fails.
4. Write the real code, with the goal of making the test succeed.
5. Run all the tests in JUnit to make sure everything passes; once all tests pass, stop coding.
6. Consider whether anything else could go wrong; if so, write a test for it, run it, and if necessary modify the code until the test passes.

Note that what matters is the content of the tests, not their number; more tests are not automatically better. Kent Beck: you don't have to write a test for every method, only for the production methods that could plausibly break. Sometimes you just want to find out whether something is possible. You explore for half an hour; yes, it seems possible. Then you throw that code away and start again from unit tests. Erich Gamma adds: you can always write more tests, but you will quickly find that only a fraction of the tests you can imagine are actually useful. What you want is to write tests for the places where you think things should work, or where you think they might fail yet they ultimately succeed. Another way to decide is cost/benefit: you should write the tests whose feedback pays for the effort. You may think unit tests are fine in principle but add to your programming burden, and that you are paid to write code, not tests. But as William Wake said: writing unit tests may be boring, but they save you time in the future (bugs caught early). Less obvious, but equally important, is that they save you time now: tests focus design and implementation on simplicity, they support refactoring, and they verify a feature while you are developing it.
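The cycle above can be sketched in plain Java (a hypothetical Stack example of my own; a plain boolean check stands in for JUnit's assert methods so the sketch runs without the framework):

```java
// Step 1: the test is written first. In JUnit it would be a method of a
// TestCase subclass calling assertEquals; here it simply reports success.
class StackTest {
    static boolean pushThenPopReturnsLastValue() {
        Stack s = new Stack();
        s.push(42);
        return s.pop() == 42 && s.isEmpty();
    }
}

// Steps 2-4: just enough production code to compile, then to pass the test.
class Stack {
    private final int[] items = new int[16];
    private int size = 0;

    void push(int value) { items[size++] = value; }
    int pop() { return items[--size]; }
    boolean isEmpty() { return size == 0; }
}
```

Running the test before Stack is implemented (step 3) would fail; once it passes (step 5), you stop coding.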
You may also think unit tests increase your maintenance load, because when the code changes the corresponding tests must change too. In fact, tests make maintenance faster, because they give you confidence in your changes and warn you when you get something wrong. If an interface changes you will of course have to change its tests, but that is not very hard. Unit tests are part of the program, not a task for a separate testing department; this is what is meant by self-testing code. A programmer may spend some time writing code, some time understanding other people's code, and some time designing, but most of their time goes into debugging. Everyone has had the experience of a small problem costing an afternoon, a day, or even several days of debugging. Fixing a bug is usually simple; finding it is the real problem. If your code carries automated self-tests, then when you add a new feature, the old tests tell you whether you have introduced a bug into the existing code, and the newly added tests tell you whether the new code itself introduces bugs.

Small steps

Another principle of refactoring is to do very little work in each step. Modify a small amount, then test, to ensure the refactoring is safe. If you make too many modifications at once, you may introduce many bugs and the code will be hard to debug; and if you find the changes were wrong and want to return to the original state, that too becomes very difficult. These small steps include:

1. Look for places that need refactoring. You may find them while reading the code, by noticing a bad smell in the code, or through code-analysis tools.
2. If the code to be refactored has no unit tests, write them first. If tests already exist, check whether they cover the problem you are facing; if not, improve them.
3. Run the unit tests to make sure the existing code is correct.
4. From the symptoms at hand, work out which refactoring applies, or consult a catalog of refactorings, find the similar situation, and follow the book's step-by-step instructions.
5. After each step, run the unit tests to ensure safety, that is, that observable behavior has not changed.
6. If the refactoring changes an interface, update all unit tests and functional tests to ensure that the observable behavior of the whole system is unaffected.

If you refactor in steps this small, you leave yourself little room for error, just as Kent Beck said: "I'm not a great programmer; I'm just a good programmer with great habits." The requirement to refactor gradually, in small steps, is not purely a practical consideration. The research group led by Ralph Johnson at the University of Illinois is the pioneer and the most important theoretical group in refactoring research. Among its work, William Opdyke's 1992 doctoral thesis, "Refactoring Object-Oriented Frameworks", is generally recognized as the first formal treatment of refactoring. In that thesis, Opdyke describes his view of restructuring: usually, people describe change either at a high level, in terms of features added to the system, or at a low level, in terms of lines of code changed. Refactorings are reorganization plans that support change at an intermediate level. For example, one refactoring moves a member function from one class to another...
To realize such intermediate-level operations, Opdyke proposed the concept of atomic refactorings. He pointed out that certain refactorings are atomic, that is, they are refactorings at the most primitive level: atomic refactorings create, delete, change, and move entities... Higher-level refactorings are composed from these 26 low-level (atomic) refactorings. Opdyke first proved that the atomic refactorings do not change a program's observable behavior; the safety of higher-level refactorings can then be proved by decomposing them into these atomic refactorings. Opdyke also showed how the higher-level refactorings he proposed are carried out as sequences of atomic refactorings. Advancing in small steps makes it possible to verify each step, and ultimately to prove, from a higher level, the safety and correctness of these refactorings. Refactoring tools rely on this theoretical research. If everyone performed refactoring in such small steps, there would be great hope that their refactorings could be properly recorded and shared with the whole object-oriented community; at the same time, Opdyke argued, refactoring tools could be developed further on this basis.
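The "move a member function" refactoring Opdyke mentions can be sketched like this (a hypothetical Account example of my own, not taken from the thesis); a test asserting that both versions charge the same fee is the check that observable behavior is preserved:

```java
// Before: Account computes its own overdraft charge.
class AccountBefore {
    private final int daysOverdrawn;
    AccountBefore(int daysOverdrawn) { this.daysOverdrawn = daysOverdrawn; }
    int overdraftCharge() { return daysOverdrawn * 2; }
}

// After "move method": the computation lives on AccountType (where it can
// later vary per account type), and Account simply delegates to it.
class AccountType {
    int overdraftCharge(int daysOverdrawn) { return daysOverdrawn * 2; }
}

class AccountAfter {
    private final int daysOverdrawn;
    private final AccountType type = new AccountType();
    AccountAfter(int daysOverdrawn) { this.daysOverdrawn = daysOverdrawn; }
    int overdraftCharge() { return type.overdraftCharge(daysOverdrawn); }
}
```

The delegation step is itself a small, separately testable move: first add the method on AccountType, run the tests, then redirect Account to it, and run the tests again.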

Perhaps you think that as tools develop, programmers will become refactoring robots. That view is wrong. A refactoring tool avoids the various bugs that can creep in when refactoring by hand, and reduces the amount of compiling, testing, and code review needed. But as the authors of the Smalltalk Refactoring Browser put it, refactoring tools are not intended to replace the programmer: the programmer still has to decide what to refactor and which refactoring to apply. On that point, experience cannot be replaced.

Code review and pair programming

Another useful way to ensure the correctness of refactoring is code review. Code review is now widely practiced in some large companies, which may hire experts to review code in order to uncover problems, improve system design, and raise programmers' skills. We can apply code review in the refactoring process as well. The question is whether we have enough energy and people to carry out such reviews. XP's successful experience shows that code review need not be the preserve of big companies. Indeed, pair programming in XP pushes code review to the extreme, and it is well suited to the role code review can play during refactoring. Kent Beck said: there are two roles in each pair. One partner, holding the keyboard and mouse, thinks about the best way to implement the method at hand. The other partner thinks about more strategic questions: Is this whole approach going to work? What other test cases might not work yet? Is there a way to simplify the whole system so that the current problem no longer arises?

Working this way during refactoring, when you cannot think of a unit test that ought to exist, when one programmer cannot find a suitable refactoring, or when one programmer is not following the correct method, the other can offer views and suggestions. In the extreme case, when the programmer at the keyboard has no idea how to complete the refactoring, the other programmer can take the keyboard and do it directly. Notyy of XPChina holds that code review should not be counted among the principles of refactoring: strictly speaking, you can refactor without practicing pair programming or code review. But refactoring is special in that it adds no new code; it modifies what already exists, quite possibly code that many other modules depend on, so pair programming matters more here than it does for ordinary new code. Moreover, if you are doing a big refactoring, such as refactoring to a design pattern, pair programming helps both partners exchange views on the design pattern being targeted. So although this is not a strictly necessary principle, I still describe it as one of the principles.

The rule of three

The rule of three, proposed by Don Roberts, closely resembles the pattern community's rule for validating a pattern: The first time you do something, you just do it. The second time you do something similar, you wince at the duplication, but you do it anyway. The third time you do something similar, you refactor.
