Extreme Programming - Refactoring
March-Bird, Lucian, Yjf, Taopin, WL, Jazz, Han Wei, Nullgate, Simon [Aka] (Reprinted from Cutter.com) September 15, 2003
Refactoring

Refactoring is closely related to factoring, or what is now referred to as using design patterns. Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, provides the foundational work on design patterns. Design Patterns serves modern-day OO programmers much as Larry Constantine and Ed Yourdon's Structured Design served a previous generation; it provides guidelines for program structures that are more effective than other program structures.

If Figure 4 shows the correct balance of designing versus refactoring for environments experiencing high rates of change, then the quality of initial design remains extremely important. Design patterns provide the means for improving the quality of initial designs by offering models that have proven effective in the past.
So, you might ask, why a separate refactoring book? Can't we just use the design patterns in redesign? Yes and no. As all developers (and their managers) understand, messing with existing code can be a ticklish proposition. The cliché "if it ain't broke, don't fix it" runs deep in development folklore. "The program may not be broken," as Fowler comments, "but it does hurt." Fear of breaking some part of the code base that's "working" actually hastens the degradation of that code base. However, Fowler is well aware of the concern: "Before I do the refactoring, I need to figure out how to do it safely.... I've written down the safe steps in the catalog." Fowler's book, Refactoring: Improving the Design of Existing Code, catalogs not only the before (poor code) and after (better code based on patterns), but also the steps required to migrate from one to the other. These migration steps reduce the chances of introducing errors during the refactoring.
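As a concrete picture of what a catalog entry pairs together, the sketch below shows a small Java refactoring in the spirit of Fowler's "Extract Method": a tangled "before" method, the "after" structure, and observable behavior that stays identical. The class and its fields are invented for this illustration and are not taken from the book.

public class Invoice {
    private final String customer;
    private final double amount;

    public Invoice(String customer, double amount) {
        this.customer = customer;
        this.amount = amount;
    }

    // Before: banner printing and detail printing are tangled together.
    public void printOwingBefore() {
        System.out.println("*** Customer Owes ***");
        System.out.println("name: " + customer + ", amount: " + amount);
    }

    // After: each piece has been extracted into a named helper.
    // The printed output, i.e. the observable behavior, is unchanged;
    // only the internal structure improves.
    public void printOwing() {
        printBanner();
        printDetails();
    }

    private void printBanner() {
        System.out.println("*** Customer Owes ***");
    }

    private void printDetails() {
        System.out.println("name: " + customer + ", amount: " + amount);
    }
}

The discipline the catalog insists on is that each extraction is done as a tiny, compilable step, with the tests run after every step; that is what makes the migration safe.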
Beck describes his "two-hat" approach to refactoring: adding new functionality and refactoring are two different activities. Refactoring, per se, does not change the observable behavior of the software; it enhances the internal structure. When new functionality needs to be added, the first step is often to refactor in order to simplify the addition of that functionality. The new functionality that is proposed, in fact, should provide the impetus to refactor.

Refactoring might be thought of as incremental, as opposed to monumental, redesign. "Without refactoring, the design of the program will decay," Fowler writes. "Loss of structure has a cumulative effect." Historically, our approach to maintenance has been "quick and dirty," so even in those cases where the initial design was done well, it degraded over time.

Figure 5 - Software Entropy over Time
Figure 5 shows the impact of neglected refactoring: at some point, the cost of enhancements becomes prohibitive because the software is so shaky. At that point, monumental redesign (or replacement) becomes the only option, and these are usually high-risk, or at least high-cost, projects. Figure 5 also shows that while in the 1980s software decay might have taken a decade, the rate of change today hastens the decay. For example, many client-server applications hurriedly built in the early 1990s are now more costly to maintain than mainframe legacy applications built in the 1980s.

Data Refactoring: Comments by Ken Orr

Editor's Note: As I mentioned above, one thing I like about XP and refactoring proponents is that they are clear about the boundary conditions for which they consider their ideas applicable. For example, Fowler has an entire chapter titled "Problems with Refactoring." Database refactoring tops Fowler's list. Fowler's target, as stated in the subtitle of his book, is to improve code. So, for data, I turned to someone who has been thinking about data refactoring for a long time (although not using that specific term). The following section on data refactoring was written by Ken Orr.
When Jim asked me to comment on refactoring, I had to ask him what refactoring really meant. It seemed to me to come down to a couple of very simple ideas:

1. Do what you know how to do.
2. Do it quickly.
3. When change occurs, go back and redesign.
4. Go to 1.

Over the years, Jim and I have worked together on a variety of systems methodologies, all of which were consistent with the refactoring philosophy. Back in the 1970s, we created a methodology built on data structures. The idea was that if you knew what people wanted, you could work backward and design a database that would give you just the data that you needed, and from there you could determine just what inputs you needed to update the database so that it could produce the required outputs.
Creating systems by working backward from outputs to database to inputs proved to be a very effective and efficient means of developing systems. This methodology was developed at about the same time that relational databases were coming into vogue, and we could show that our approach would always create a well-behaved, normalized database. More than that, however, was the idea that approaching systems this way created minimal systems. In fact, one of our customers actually used this methodology to rebuild a system that was already in place. The customer started with the outputs and worked backward to design a minimal database with minimal input requirements. The new system had only about one-third the data elements of the system it was replacing. This was a major breakthrough. These developers came to understand that creating minimal systems had enormous advantages: they were much smaller and therefore much faster to implement, and they were much easier to adapt as requirements changed.
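The "work backward from outputs" idea can be sketched in code. The fragment below is a hypothetical Java illustration (the report line, entities, and field names are all invented, not drawn from the Warnier/Orr material); its point is simply that every stored data element traces back to a field in a required output, which is what keeps the resulting system minimal.

import java.util.List;

public class OutputDrivenDesign {

    // Step 1: the output the users actually asked for.
    record CustomerBalanceLine(String customerId, String name, double balance) {}

    // Step 2: the minimal stored data, derived from that output.
    // Nothing is kept "just in case"; every element maps to an output field
    // or to an input needed to keep an output field current.
    record Customer(String customerId, String name) {}
    record Charge(String customerId, double amount) {}
    record Payment(String customerId, double amount) {}

    // Step 3: the output is produced from exactly those elements.
    static CustomerBalanceLine balanceFor(Customer c, List<Charge> charges, List<Payment> payments) {
        double owed = charges.stream().mapToDouble(Charge::amount).sum()
                    - payments.stream().mapToDouble(Payment::amount).sum();
        return new CustomerBalanceLine(c.customerId(), c.name(), owed);
    }
}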
Still, building minimal systems goes against the grain of many analysts and programmers, who pride themselves on thinking ahead and anticipating future needs, no matter how remote. I think this attitude stems from the difficulty that programmers have had with maintenance. Maintaining large systems has been so difficult and fraught with problems that many analysts and programmers would rather spend enormous effort at the front end of the systems development cycle, so they do not have to maintain the system ever again. But as history shows, this approach of guessing about the future never works out. No matter how clever we are in thinking ahead, some new, unanticipated requirement comes up to bite us. (How many people included Internet-based e-business as one of their top requirements in systems they were building 10 years ago?)
Ultimately, one of the reasons that maintenance is so difficult revolves around the problem of changing the database design. In most developers' eyes, once you design a database and start to program against it, it is almost impossible to change that database design. In a way, the database design is something like the foundation of the system: once you have poured concrete for the foundation, there is almost no way you can go back and change it. As it turns out, major changes to databases in large systems happen very infrequently, only when they are unavoidable. People simply do not think about redesigning a database as a normal part of systems maintenance, and, as a consequence, major changes are put off.
Enter Data Refactoring

Jim and I had never been persuaded by the argument that the database design could never be changed once installed. We had the idea that if you wanted to have a minimal system, then it was necessary to take changes or new requirements to the system and repeat the basic system cycle over again, reintegrating these new requirements with the original requirements to create a new system. You could say that what we were doing was data refactoring, although we never called it that.

The advantages of this approach turned out to be significant. For one thing, there was no major difference between development of a new system and the maintenance or major modification of an existing one. This meant that training and project management could be simplified considerably. It also meant that our systems tended not to degrade over time, since we "built in" changes rather than "adding them on" to the existing system.

Over a period of years, we built a methodology (Data Structured Systems Development, or Warnier/Orr) and trained thousands of systems analysts and programmers. The process that we developed was largely manual, although we thought that if we built a detailed-enough methodology, it should be possible to automate large pieces of it in CASE tools.
Automating Data Refactoring

To make a long story short, a group of systems developers in South America finally accomplished the automation of our data refactoring approach in the late 1980s. A company led by Breogán Gonda and Nicolás Jodal created a tool called GeneXus that accomplished what we had conceived in the 1970s. They created an approach in which you could enter data structures for input screens; with those data structures, GeneXus automatically designed a normalized database and generated the code to navigate, update, and report against that database.

But that was the easy part. They designed their tool in such a way that when requirements changed or users came up with something new or different, they could restate their requirements, rerun (recompile), and GeneXus would redesign the database, convert the previous database automatically to the new design, and then regenerate just those programs that were affected by the changes in the database design. They created a closed-loop refactoring cycle based on data requirements.
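To make the closed-loop idea concrete, here is a hand-written sketch of a single database refactoring step of the kind described above. It is not GeneXus output; the table names, column names, and JDBC URL are invented, and it assumes some JDBC-accessible relational database is available. The sequence is the interesting part: redesign the schema, convert the existing data automatically, and then regenerate only the programs that touched the old structure.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SplitAddressRefactoring {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; the URL and driver depend on the database in use.
        try (Connection con = DriverManager.getConnection("jdbc:h2:./appdb");
             Statement st = con.createStatement()) {

            // 1. Redesign: the restated requirement ("a customer may have several
            //    addresses") implies a new child table.
            st.execute("CREATE TABLE customer_address ("
                     + " customer_id INT NOT NULL,"
                     + " address VARCHAR(200) NOT NULL)");

            // 2. Convert: carry the existing data forward into the new design.
            st.execute("INSERT INTO customer_address (customer_id, address) "
                     + "SELECT id, address FROM customer WHERE address IS NOT NULL");

            // 3. Retire the old column; only the programs that read or wrote it
            //    need to be regenerated.
            st.execute("ALTER TABLE customer DROP COLUMN address");
        }
    }
}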
GeneXus showed us what was really possible using a refactoring framework. For the first time in my experience, developers were freed from having to worry about future requirements. It allowed them to define just what they knew and then rapidly build a system that did just what they had defined. Then, when (not if) the requirements changed, they could simply reenter those changes, recompile the system, and they had a new, completely integrated, minimal system that incorporated the new requirements.

What does all this mean?

Refactoring is becoming something of a buzzword. And like all buzzwords, there is some good news and some bad news. The good news is that, when implemented correctly, refactoring makes it possible for us to build very robust systems very rapidly. The bad news is that we have to rethink how we go about developing systems. Many of our most cherished project management and development strategies need to be rethought. We have to become very conscious of interactive, incremental design. We have to be much more willing to prototype our way to success and to use tools that will do complex parts of the systems development process (database design and code generation) for us.
In the 1980s, CASE was a technology that was somehow going to revolutionize programming. In the 1990s, objects and OO development were going to do the same. Neither of these technologies lived up to their early expectations. But today, tools like GeneXus really do many of the things that the system gurus of the 1980s anticipated. It is possible, currently, to take a set of requirements, automatically design a database from those requirements, generate an operational database from among the number of commercially available relational databases (Oracle, DB2, Informix, MS SQL Server, and Access), and generate code (prototype and production) that will navigate, update, and report against those databases in a variety of different languages (COBOL, RPG, C, C++, and Java).

This new approach to systems development allows us to spend much more time with users, exploring their requirements and giving them user interface choices that were never possible when we were building things at arm's length. But not everybody appreciates this new world. For one thing, it takes a great deal of the mystery out of the process. For another, it puts much more stress on rapid development.