Extreme Programming - The Cost of Change



March-Bird Lucian Yjf Taopin WL Jazz Han Wei Nullgate Simon [Aka] (Reprinted from Cutter.com) September 15, 2003

The Cost of Change

Early on in Beck's book, he challenges one of the oldest assumptions in software engineering. From the mid-1970s onward, structured methods, and then more comprehensive methodologies, were sold on the basis of the "facts" shown in Figure 1. I should know; I developed, taught, sold, and installed several of these methodologies during the 1980s.

Figure 1 - Historical lifecycle change costs.

Figure 2 - Contemporary lifecycle change costs.

Beck asks us to consider that perhaps the economics of Figure 1, probably valid in the 1970s and 1980s, now look like Figure 2 - that is, the cost of maintenance, or ongoing change, flattens out rather than escalates. Actually, whether Figure 2 shows today's cost profile or not is irrelevant: we have to make it true! If Figure 1 remains true, then we are doomed, because the pace of change today would make ongoing change prohibitively expensive.

The vertical axis in Figure 1 usually depicts the cost of finding defects late in the development cycle. However, this assumes that all changes are the result of a mistake - that is, a defect. Viewed from this perspective, traditional methods have concentrated on "defect prevention" in early lifecycle stages. But in today's environment, we cannot prevent what we do not know about - changes arise from iteratively gaining knowledge about the application, not from a defective process. So, although our practices need to be geared toward preventing some defects, they must also be geared toward reducing the cost of continuous change. Actually, as Alistair Cockburn points out, the high cost of removing defects shown by Figure 1 provides an economic justification for practices like pair programming.

In this issue of eAD, I want to restrict the discussion to change at the project or application level - decisions about operating systems, development language, database, middleware, and the like are constraints outside the control of the development team. (For ideas on "architectural" flexibility, see the June and July 1999 issues of ADS.) Let's simplify even further and assume, for now, that the business and operational requirements are known. Our design goal is to balance the rapid delivery of functionality while we also create a design that can be easily modified. Even within the goal of rapid delivery, there remains another balance: proceed too hurriedly and bugs creep in; try to anticipate every eventuality and time flies. However, let's again simplify our problem and assume we have reached a reasonable balance of design versus code and test time.

With all these simplifications, we are left with one question: how much anticipatory design work do we do? Current design produces the functionality we have already specified. Anticipatory design builds in extra facilities with the anticipation that future requirements will be faster to implement. Anticipatory design trades current time for future time, under the assumption that a little time now will save more time later. But under what conditions is that assumption true? Might it not be faster to redesign later, when we know exactly what the changes are, rather than guessing now?

This is where refactoring enters the equation. Refactoring, according to author Martin Fowler, is "the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure." XP proponents practice continuous, incremental refactoring as a way to incorporate change. If changes are continuous, then we'll never get an up-front design completed. Furthermore, as changes become more unpredictable - a great likelihood today - much anticipatory design likely will be wasted.
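Fowler's definition can be made concrete with a small, invented example (a minimal sketch in Java; the class, the pricing rules, and the figures are hypothetical and not taken from Beck's or Fowler's books). The value returned by price() is identical before and after, but the restructured version isolates each pricing rule in a named method, so a future change touches one place.

```java
// Before: discount and tax logic are tangled into one method.
class OrderBefore {
    private final double[] itemPrices;

    OrderBefore(double[] itemPrices) { this.itemPrices = itemPrices; }

    double price() {
        double total = 0;
        for (double p : itemPrices) {
            total += p;
        }
        if (total > 100) {
            total *= 0.95;      // bulk discount applied inline
        }
        return total * 1.07;    // sales tax applied inline
    }
}

// After: the same external behavior, expressed as three named steps.
class OrderAfter {
    private final double[] itemPrices;

    OrderAfter(double[] itemPrices) { this.itemPrices = itemPrices; }

    double price() {
        return withTax(withDiscount(subtotal()));
    }

    private double subtotal() {
        double total = 0;
        for (double p : itemPrices) {
            total += p;
        }
        return total;
    }

    private double withDiscount(double amount) {
        return amount > 100 ? amount * 0.95 : amount;
    }

    private double withTax(double amount) {
        return amount * 1.07;
    }
}

class RefactoringDemo {
    public static void main(String[] args) {
        double[] items = {40.0, 70.0};
        // Both lines print the same value: the refactoring changed
        // internal structure, not observable behavior.
        System.out.println(new OrderBefore(items).price());
        System.out.println(new OrderAfter(items).price());
    }
}
```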

Figure 3 - Balancing design and refactoring, pre-Internet.

Figure 4 - Balancing design and refactoring today.

I think the diagram in Figure 3 depicts the situation prior to the rapid-paced change of the Internet era. Since the rate of change (illustrated by the positioning of the balance point in the figure) was lower, more anticipatory designing versus refactoring may have been reasonable. As Figure 4 shows, however, as the rate of change increases, the viability of anticipatory design loses out to refactoring - a situation I think defines many systems today.

In the long run, the only way to test whether a design is flexible involves making changes and measuring how easy they are to implement. One of the biggest problems with the traditional up-front-design-then-maintain strategy has been that software systems exhibit tremendous entropy; they degrade over time as maintainers rush fixes, patches, and enhancements into production. The problem is worse today because of the accelerated pace of change, but current refactoring approaches are not the first to address it. Back in the "dark ages" (circa 1986), Dave Higgins wrote Data Structured Software Maintenance, a book that addressed the high cost of maintenance, due in large part to the cumulative effects of changes to systems over time. Although Higgins advocated a particular program-design approach (the Warnier/Orr approach), one of his primary themes was to stop the degradation of systems over time by systematically redesigning programs during maintenance activities.

Higgins's approach to program maintenance was first to develop a pattern (although the term pattern was not used then) for how the program "should be" designed, then to create a map from the "good" pattern to the "spaghetti" code. Programmers would then use the map to help understand the program and, further, to revise the program over time to look more like the pattern. Using Higgins's approach, program maintenance counteracted the natural tendency of applications to degrade over time. "The objective was not to rewrite the entire application," said Higgins in a recent conversation, "but to rewrite those portions for which enhancements had been requested."
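To illustrate "rewrite only the portions for which enhancements had been requested," here is a hypothetical sketch (all names are invented, and the Warnier/Orr notation itself is not shown). The legacy totalling code is left untouched, while the output formatting, where an enhancement was actually requested, is redesigned toward a cleaner target structure.

```java
import java.util.List;

// Hypothetical illustration of incremental redesign during maintenance:
// the legacy portion stays as-is; only the part being enhanced is rewritten
// toward the intended "good" structure (one formatter per output style).
class LegacyReport {

    // Legacy code, untouched: no enhancement was requested here.
    static double total(List<Double> figures) {
        double sum = 0;
        for (double f : figures) {
            sum += f;
        }
        return sum;
    }

    // Redesigned portion: output formats become separate strategies
    // instead of being tangled into the totalling loop.
    interface Formatter {
        String format(double total);
    }

    static final Formatter PLAIN = total -> "TOTAL: " + total;
    static final Formatter CSV = total -> "total," + total; // the requested enhancement

    public static void main(String[] args) {
        double t = total(List.of(12.5, 7.5, 30.0));
        System.out.println(PLAIN.format(t));
        System.out.println(CSV.format(t));
    }
}
```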

