[Favorites] Extreme Programming


Extreme Programming

Source: http://www.cutter.com/

Proofreading: Bigmac [AKA]. Translators: March-Bird, Lucian, Yjf, Taopin, WL, Jazz, Han Wei, Nullgate, Simon [AKA]

As we have explored in several issues of eAD, the two most pressing issues in information technology today are:

How do we deliver functionality to business clients quickly?
How do we keep up with near-continuous change?

Change is changing. Not only does the pace of change continue to accelerate, but, as the September issue of eAD pointed out, organizations are having to deal with different types of change - disruptive change and punctuated equilibrium. Disruptive technologies, like personal computers in the early 1980s, impact an industry (in the case of PCs, several related industries), while a punctuated equilibrium - a massive intervention into an ecosystem or an economy - impacts a very large number of species, or companies. The Internet, which has become the backbone for e-commerce and e-business, has disrupted a wide range of industries - more a punctuated equilibrium than a disruption.

When whole business models are changing, when time-to-market becomes the mantra of companies, when flexibility and interconnectedness are demanded from even the most staid organization, it is then that we must examine every aspect of how business is managed, customers are delighted, and products are developed. The Extreme Programming movement has been a subset of the object-oriented (OO) programming community for several years, but has recently attracted more attention, especially with the recent release of Kent Beck's new book Extreme Programming Explained: Embrace Change. Don't be put off by the somewhat "in-your-face" moniker of Extreme Programming (XP to practitioners). Although Beck does not claim that practices such as pair programming and incremental planning originated with XP, there are some very interesting, and I think important, concepts articulated by XP. There's a lot of talk today about change, but XP has some pretty good ideas about how to actually do it. Hence the subtitle, Embrace Change.

There is a tendency, particularly by rigorous methodologists, to dismiss anything less ponderous than the Capability Maturity Model (CMM) or maybe the International Organization for Standardization's standards as hacking. The connotation: hacking promotes doing rather than thinking and therefore results in low quality. This is an easy way to dismiss practices that conflict with one's own assumptions about the world. Looked at another way, XP may be a potential piece of a puzzle I've been writing about over the past 18 months. Turbulent times give rise to new problems that, in turn, give rise to new practices - new practices that often fly in the face of conventional wisdom but survive because they are better adapted to the new reality. There are at least four practices I would put in this category:

XP - the focus of this issue of eAD
Lean Development - discussed in the November 1998 issue of eAD
Crystal Light methods - mentioned in the November 1999 issue of eAD and further discussed in this issue
Adaptive Software Development - described in the August 1998 issue of eAD (then called Application Development Strategies - ADS)

Although there are differences in each of these practices, there are also similarities: they each describe variations from the conventional wisdom about how to approach software development. Whereas lean and adaptive development practices target strategic and project management, XP brings its differing perspective to the realm of the programmer and tester.

Much of XP is derived from good practices that have been around for a long time. "None of the ideas in XP are new. Most are as old as programming," Beck offers to readers in the preface to his book. I might differ with Beck in one respect: although the practices XP uses are not new, the conceptual foundation and how they are melded together greatly enhance these "older" practices. I think there are four critical ideas to take away from XP (in addition to a number of other good ideas):

The cost of change
Refactoring
Collaboration
Simplicity

But first, I discuss some XP basics: the dozen practices that define XP.

XP - The Basics

I must admit that one thing I like about XP's principal figures is their lack of pretension. XP proponents are careful to articulate where they think XP is appropriate and where it is not. While practitioners like Beck and Ron Jeffries may envision that XP has wider applicability, they are generally circumspect about their claims. For example, both are clear about XP's applicability to small (less than 10 people), co-located teams (with which they have direct experience); they do not try to convince people that the practices will work for teams of 200.

The Project

The most prominent XP project reported on to date is the Chrysler Comprehensive Compensation system (the C3 project), which was initiated in the mid-1990s and converted to an XP project in 1997. Jeffries, one of the "Three Extremoes" (with Beck and Ward Cunningham), and I spent several hours talking about the C3 project and other XP issues at the recent Miller Freeman Software Developer conference in Washington, DC, USA.

Originally, the C3 project was conceived as an OO programming project, specifically using Smalltalk. Beck, a well-known Smalltalk expert, was called in to consult on Smalltalk performance optimization, and the project was transformed into a pilot of OO (XP) practices after the original project was deemed unreclaimable. Beck brought in Jeffries to assist on a more full-time basis, and Jeffries worked with the C3 team until spring 1999. The initial requirements were to handle the monthly payroll of some 10,000 salaried employees. The system consists of approximately 2,000 classes and 30,000 methods and was ready within a reasonable tolerance period of the planned schedule.

As we talked, I asked Jeffries how success on the C3 project translated into XP use on other Chrysler IT projects. His grin told me all I needed to know. I've been involved in enough rapid application development (RAD) projects for large IT organizations over the years to understand why success does not consistently translate into acceptance. There are always at least a hundred very good reasons why success at RAD, or XP, or lean development, or other out-of-the-box approaches does not translate into wider use - but more on this question later.

The Practices

One thing to keep in mind is that XP practices are intended for use with small, co-located teams. They therefore tend toward minimalism, at least as far as artifacts other than code and test cases are concerned. The presentation of XP's practices has both positive and negative aspects. At one level, they sound like rules - do this, don't do that. Beck explains that the practices are more like guidelines than rules, guidelines that are pliable depending on the situation. However, some, like the "40-hour week," can come off as a little preachy. Jeffries makes the point that the practices also interact, counterbalance, and reinforce each other, such that picking and choosing which to use and which to discard can be tricky.

The planning game. XP's planning approach mirrors that of most iterative RAD approaches to projects. Short, three-week cycles, frequent updates, splitting business and technical priorities, and assigning "stories" (a story defines a particular feature requirement and is displayed in a simple card format) all define XP's approach to planning.

Small releases "Every release should be as small as possible, containing the most valuable business requirements," states Beck This mirrors two of Tom Gilb's principles of evolutionary delivery from his book Principles of Software Engineering Management:.. "All large projects are capable of Being Divided Into Many Useful Partial Result Steps, "AND" Evolutionary Steps Should Be Delivered on The Principle of The Juiciest One Next. "Summary:" Each version should be as small as possible, and contain the most commercial value, Beck is said. This also reflects Tom Gilb's two points mentioned in his Book: "All large projects can be divided into local, useful small steps" and " evolutionary step is passed to the next stage. "Small releases provide the sense of accomplishment that is often missing in long projects as well as more frequent (and more relevant) feedback. However, a development team needs to also consider the difference between" release "and" releasable "The cost of each release -. installation, training, conversions - needs to be factored into whether or not the product produced at the end of a cycle is actually released to the end user or is simply declared to The publishing of small versions means realization of frequent feedback that is often lacking in large projects.. However, a development team can of course consider "release" the same "can be released". Whether it is the final version release or a simple release version of the release, it is necessary to pay the cost of installation, training, transformation.

Metaphor. XP's use of the terms "metaphor" and "story" takes a little wearing in to become comfortable. However, both terms help make the technology more understandable in human terms, especially to clients. At one level, metaphor and architecture are synonyms - they are both intended to provide a broad view of the project's goal. But architectures often get bogged down in symbols and connections. XP uses "metaphor" in an attempt to define an overall coherent theme to which both developers and business clients can relate. The metaphor describes the broad sweep of the project, while stories are used to describe individual features.

Simple design. Simple design has two parts. One, design for the functionality that has been defined, not for potential future functionality. Two, create the best design that can deliver that functionality. In other words, don't guess about the future: create the best (simple) design you can today. "If you believe that the future is uncertain, and you believe that you can cheaply change your mind, then putting in functionality on speculation is crazy," writes Beck. "Put in what you need when you need it."
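To make the simple design idea concrete, here is a minimal, hypothetical Python sketch (the function names and payroll figures are mine, not Beck's): the first version implements exactly the requirement that has been defined, while the second adds speculative flexibility that nobody has asked for yet and that must still be designed, coded, tested, and understood.

# Requirement actually defined today: compute the gross monthly pay for a salaried employee.

# Simple design - does exactly what is needed now.
def gross_monthly_pay(annual_salary):
    return annual_salary / 12

# Speculative design - extra parameters added "just in case" future requirements appear.
# Every unused option below is functionality put in on speculation.
def gross_pay(annual_salary, periods_per_year=12, currency="USD", rounding_rule=None):
    amount = annual_salary / periods_per_year
    if rounding_rule is not None:
        amount = rounding_rule(amount)
    return amount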

In the early 1980s, I published an article in Datamation magazine titled "Synchronizing Data with Reality." The gist of the article was that data quality is a function of use, not capture and storage. Furthermore, I said that data that was not systematically used would rapidly go bad. Data quality is a function of systematic usage, not anticipatory design. Trying to anticipate what data we will need in the future only leads us to design for data that we will probably never use; even the data we did guess correctly on will not be correct anyway. XP's simple design approach mirrors the same concepts. As described later in this article, this does not mean that no anticipatory design ever happens; it does mean that the economics of anticipatory design change dramatically.

Refactoring. If I had to pick one thing that sets XP apart from other approaches, it would be refactoring - the ongoing redesign of software to improve its responsiveness to change. RAD approaches have often been associated with little or no design; XP should be thought of as continuous design. In times of rapid, constant change, much more attention needs to be focused on refactoring. See the sections "Refactoring" and "Data Refactoring," below.

Testing. XP is full of interesting twists that encourage one to think - for example, how about "test and then code"? I've worked with software companies and a few IT organizations in which programmer performance was measured on lines of code delivered and testing performance was measured on defects found - neither side was motivated to reduce the number of defects prior to testing. XP uses two types of testing: unit and functional. However, the practice for unit testing involves developing the test for the feature prior to writing the code, and it further states that the tests should be automated. Once the code is written, it is immediately subjected to the test suite - instant feedback.
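As a minimal sketch of the test-before-code practice (the Payroll class, the figures, and the use of Python's standard unittest module are illustrative assumptions, not details from the C3 project), the test below is written first and fails until the production code exists; after that, every change is immediately run against the suite.

import unittest

class Payroll:
    # The simplest code that makes the test below pass - written only after the test existed.
    def __init__(self, monthly_salary):
        self.monthly_salary = monthly_salary

    def net_pay(self, deductions):
        return self.monthly_salary - deductions

class PayrollTest(unittest.TestCase):
    # In test-first style, this test is written (and initially fails) before Payroll is implemented.
    def test_net_pay_subtracts_deductions(self):
        self.assertEqual(Payroll(3000).net_pay(450), 2550)

if __name__ == "__main__":
    unittest.main()  # instant feedback: the suite runs as soon as the code is written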

The most active discussion group on XP remains the Wiki exchange (XP is a piece of the overall discussion about patterns). One of the discussions centers around a lifecycle of Listen (requirements), Test, Code, Design. Listen closely to customers while gathering their requirements. Develop test cases. Code the objects (using pair programming). Design (or refactor) as more objects are added to the system. This seemingly convoluted lifecycle begins to make sense only in an environment in which change dominates.

Pair programming. One of the few software engineering practices that enjoys near-universal acceptance (at least in theory) and has been well measured is software inspections (also referred to as reviews or walkthroughs). At their best, inspections are collaborative interactions that speed learning as much as they uncover defects. One of the lesser-known statistics about inspections is that although they are very cost effective in uncovering defects, they are even more effective at preventing defects in the first place through the team's ongoing learning and incorporation of better programming practices.

One software company client I worked with cited an internal study that showed that the amount of time to isolate defects was 15 hours per defect with testing, 2-3 hours per defect using inspections, and 15 minutes per defect by finding the defect before it got to the inspection. The latter figure arises from the ongoing team learning engendered by regular inspections. Pair programming takes this to the next step - rather than the incremental learning using inspections, why not continuous learning using pair programming?

"Pair programming is a dialog between two people trying to simultaneously program and understand how to program better," writes Beck. Having two people sitting in front of the same terminal (one entering code or test cases, one reviewing and thinking) creates a continuous, dynamic interchange. Research conducted by Laurie Williams for her doctoral dissertation at the University of Utah confirms that pair programming's benefits are not just wishful thinking (see Resources and References).

Collective ownership. XP defines collective ownership as the practice that anyone on the project team can change any of the code at any time. For many programmers, and certainly for many managers, the prospect of communal code raises concerns, ranging from "I don't want those bozos changing my code" to "Who do I blame when problems arise?" Collective ownership provides another level to the collaboration begun by pair programming.

Pair programming encourages two people to work closely together: each drives the other a little harder to excel. Collective ownership encourages the entire team to work more closely together: each individual and each pair strives a little harder to produce high-quality designs, code, and test cases. Granted, all this forced "togetherness" may not work for every project team.

Continuous integration. Daily builds have become the norm in many software companies - mimicking the published material on the "Microsoft" process (see, for example, Michael A. Cusumano and Richard Selby's Microsoft Secrets). Whereas many companies set daily builds as a minimum, XP practitioners set daily integration as the maximum - opting for frequent builds every couple of hours. XP's feedback cycles are quick: develop the test case, code, integrate (build), and test.

The perils of integration defects have been understood for many years, but we have not always had the tools and practices to put that knowledge to good use. XP not only reminds us of the potential for serious integration errors, but provides a revised perspective on practices and tools.
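As a rough illustration of that feedback cycle (the script name and the tests directory are assumptions; XP prescribes no particular tool), the sketch below runs the whole automated test suite at each integration and rejects the build if anything fails.

# integrate.py - a minimal integration gate, assuming unit tests live under ./tests
import subprocess
import sys

def integrate():
    # Run the entire test suite; a non-zero exit code means the build is broken.
    result = subprocess.run([sys.executable, "-m", "unittest", "discover", "-s", "tests"])
    if result.returncode != 0:
        print("Integration rejected: fix the failing tests before continuing.")
        sys.exit(result.returncode)
    print("Integration accepted.")

if __name__ == "__main__":
    integrate()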

40-hour week. Some of XP's 12 practices are principles, while others, such as the 40-hour practice, sound more like rules. I agree with XP's sentiments here; I just don't think work hours define the issue. I would prefer a statement like "Don't burn out the troops" rather than a 40-hour rule. There are situations in which working 40 hours is pure drudgery and others in which the team has to be pried away from a 60-hour work week.

Jeffries provided additional thoughts on overtime: "What we say is that overtime is defined as time in the office when you don't want to be there. And that you should work no more than one week of overtime. If you go beyond that, there's something wrong - and you're tiring out and probably doing worse than if you were on a normal schedule. I agree with you on the sentiment about the 60-hour work week. When we were young and eager, they were probably okay. It's the dragging weeks to watch for."

I don't think the number of hours makes much difference. What defines the difference is volunteered commitment. Do people want to come to work? Do they anticipate each day with great relish? People have to come to work, but they perform great feats by being committed to the project, and commitment only arises from a sense of purpose.

On-site customer. This practice corresponds to one of the oldest cries in software development - user involvement. XP, as with every other rapid development approach, calls for ongoing, on-site user involvement with the project team.

Coding standards. XP practices are supportive of each other. For example, if you do pair programming and let anyone modify the communal code, then coding standards would seem to be a necessity.

Values and Principles

On Saturday, 1 January 2000, the Wall Street Journal (you know, the "Monday through Friday" newspaper) published a special 58-page millennial edition. The introduction to the Industry & Economics section, titled "So Long, Supply and Demand: There's a New Economy Out There - And It Looks Nothing Like the Old One," was written by Tom Petzinger. "The bottom line: creativity is overtaking capital as the principal elixir of growth," Petzinger states.

Petzinger is not talking about a handful of creative geniuses, but the creativity of groups - from teams to departments to companies. Once we leave the realm of the single creative genius, creativity becomes a function of the environment and how people interact and collaborate to produce results. If your company's fundamental principles point to software development as a statistically repeatable, rigorous, engineering process, then XP is probably not for you. Although XP contains certain rigorous practices, its intent is to foster creativity and communication.

Environments are driven by values and principles. XP (or the other practices mentioned in this issue) may or may not work in your organization, but, ultimately, success will not depend on using 40-hour work weeks or pair programming - it will depend on whether or not the values and principles of XP align with those of your organization.

Beck identifies four values, five fundamental principles, and ten secondary principles - but I'll mention five that should provide enough background.

Communication. So, what's new here? It depends on your perspective. XP focuses on building a person-to-person, mutual understanding of the problem environment through minimal formal documentation and maximum face-to-face interaction. "Problems with projects can invariably be traced back to somebody not talking to somebody else about something important," Beck says. XP's practices are designed to encourage interaction - developer to developer, developer to customer.

Simplicity. XP asks of each team member, "What is the simplest thing that could possibly work?" Make it simple today, and create an environment in which the cost of change tomorrow is low.

Feedback "Optimism is an occupational hazard of programming," says Beck Whether it's hourly builds or frequent functionality testing with customers, XP embraces change by constant feedback Although every approach to software development advocates feedback.. "Feedback is the treatment.". - even the much-maligned waterfall model -. the difference is that XP practitioners understand that feedback is more important than feedforward Whether it's fixing an object that failed a test case or refactoring a design that is resisting a change, high-change environments require a Much Different Understanding of Feedback. Feedback: "For programming, optimism is an adventure.", "and feedback is the corresponding solution." Whether it is using repeated construction or frequent user function testing, XP can constantly receive feedback. Although we will say and feedback each time, we will say and feedback - even a very harmful waterfall model - different is that XP practitioners believe that feedback is more important than feedback (feedForward). Whether it is modified to test failure or software rejected by users from new rework, the rapid change in development environment requires a better understanding of feedback.

Courage. Whether it's a CMM practice or an XP practice that defines your discipline, discipline requires courage. Many define courage as doing what's right, even when pressured to do something else. Developers often cite the pressure to ship a buggy product and the courage to resist. However, the deeper issues can involve legitimate differences of opinion over what is right. Often, people do not lack courage - they lack conviction, which puts us right back to other values. If a team's values are not aligned, the team won't be convinced that a particular action is "right," and, without conviction, courage doesn't seem so important. It's hard to work up the energy to fight for something you don't believe in.

"Courage isn't just about having the discipline," says Jeffries. "It is also a resultant value. If you do the practices that are based on communication, simplicity, and feedback, you are given courage, the confidence to go ahead in a lightweight manner," as opposed to being weighed down by more cumbersome, design-heavy practices.

Quality work. Okay, all of you out there, please raise your hand if you advocate poor-quality work. Whether you are a proponent of the Rational Unified Process, CMM, or XP, the real issues are "How do you define quality?" and "What actions do you think deliver high quality?" Defining quality as "no defects" provides one perspective on the question; Jerry Weinberg's definition, "Quality is value to some person," provides another. I get weary of methodologists who use the "hacker" label to ward off the intrusion of approaches like XP and lean development. It seems unproductive to return the favor. Let's concede that all these approaches are based on the fundamental principle that individuals want to do a good, high-quality job. What "quality" means and how to achieve it - now that is the gist of the real debate!

Managing XP

One area in which XP (at least as articulated in Beck's book) falls short is management, understandable for a practice oriented toward both small project teams and programming. As Beck puts it, "Perhaps the most important job for the coach is the acquisition of toys and food." (Coaching is one of the components of Beck's management strategy.)

With many programmers, their recommended management strategy seems to be: get out of the way. The underlying assumption? Getting out of the way will create a collaborative environment. Steeped in the tradition of task-based project management, this assumption seems valid. However, in my experience, creating and maintaining highly functional collaborative environments challenges management far beyond making up task lists and checking off their completion.

Figure 1 - Historical lifecycle change costs.
Figure 2 - Contemporary lifecycle change costs.

The Cost of Change

Early on in Beck's book, he challenges one of the oldest assumptions in software engineering. From the mid-1970s, structured methods and then more comprehensive methodologies were sold based on the "facts" shown in Figure 1. I should know; I developed, taught, sold, and installed several of these methodologies during the 1980s.

Beck asks us to consider that perhaps the economics of Figure 1, probably valid in the 1970s and 1980s, now look like Figure 2 - that is, the cost of maintenance, or ongoing change, flattens out rather than escalates. Actually, whether Figure 2 shows today's cost profile or not is irrelevant - we have to make it true! If Figure 1 remains true, then we are doomed because of today's pace of change.

The vertical axis in Figure 1 usually depicts the cost of finding defects late in the development cycle. However, this assumes that all changes are the results of a mistake - i.e., a defect. Viewed from this perspective, traditional methods have concentrated on "defect prevention" in early lifecycle stages. But in today's environment, we cannot prevent what we do not know about - changes arise from iteratively gaining knowledge about the application, not from a defective process. So, although our practices need to be geared toward preventing some defects, they must also be geared toward reducing the cost of continuous change. Actually, as Alistair Cockburn points out, the high cost of removing defects shown by Figure 1 provides an economic justification for practices like pair programming.

In this issue of eAD, I want to restrict the discussion to change at the project or application level - decisions about operating systems, development language, database, middleware, etc., are constraints outside the control of the development team. (For ideas on "architectural" flexibility, see the June and July 1999 issues of ADS.) Let's simplify even further and assume, for now, that the business and operational requirements are known.

Our design goal is to balance the rapid delivery of functionality while we also create a design that can be easily modified. Even within the goal of rapid delivery, there remains another balance: proceed too hurriedly and bugs creep in; try to anticipate every eventuality and time flies. However, let's again simplify our problem and assume we have reached a reasonable balance of design versus code and test time.

With all these simplifications, we are left with one question: how much anticipatory design work do we do? Current design produces the functionality we have already specified. Anticipatory design builds in extra facilities with the anticipation that future requirements will be faster to implement. Anticipatory design trades current time for future time, under the assumption that a little time now will save more time later. But under what conditions is that assumption true? Might it not be faster to redesign later, when we know exactly what the changes are, rather than guessing now?

This is where refactoring enters the equation. Refactoring, according to author Martin Fowler, is "the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure." XP proponents practice continuous, incremental refactoring as a way to incorporate change. If changes are continuous, then we'll never get an up-front design completed. Furthermore, as changes become more unpredictable - a great likelihood today - then much anticipatory design likely will be wasted.

Figure 3 - Balancing design and refactoring, pre-Internet.
Figure 4 - Balancing design and refactoring today.

I think the diagram in Figure 3 depicts the situation prior to the rapid-paced change of the Internet era. Since the rate of change (illustrated by the positioning of the balance point in the figure) was lower, more anticipatory designing versus refactoring may have been reasonable. As Figure 4 shows, however, as the rate of change increases, the viability of anticipatory design loses out to refactoring - a situation I think defines many systems today.

In the long run, the only way to test whether a design is flexible involves making changes and measuring how easy they are to implement. One of the biggest problems with the traditional up-front-design-then-maintain strategy has been that software systems exhibit tremendous entropy; they degrade over time as maintainers rush fixes, patches, and enhancements into production. The problem is worse today because of the accelerated pace of change, but current refactoring approaches are not the first to address the problem. Back in the "dark ages" (circa 1986), Dave Higgins wrote Data Structured Software Maintenance, a book that addressed the high cost of maintenance, due in large part to the cumulative effects of changes to systems over time. Although Higgins advocated a particular program-design approach (the Warnier/Orr approach), one of his primary themes was to stop the degradation of systems over time by systematically redesigning programs during maintenance activities.

Higgins's approach to program maintenance was first to develop a pattern (although the term pattern was not used then) for how the program "should be" designed, then to create a map from the "good" pattern to the "spaghetti" code. Programmers would then use the map to help understand the program and, further, to revise the program over time to look more like the pattern. Using Higgins's approach, program maintenance counteracted the natural tendency of applications to degrade over time. "The objective was not to rewrite the entire application," said Higgins in a recent conversation, "but to rewrite those portions for which enhancements had been requested."

Although this early approach to refactoring was not widely practiced, the ideas behind it are much the same as those driving refactoring today. Two things, however, drive the need for increased levels of refactoring today: one is better languages and tools, and the other is rapid change.

Another approach to high change arose in the early days of RAD: the idea of throwaway code. The idea was that things were changing so rapidly that we could just code applications very quickly, then throw them away and start over when the time for change arose. This is hardly a viable long-term strategy.

Refactoring

Refactoring is closely related to factoring, or what is now referred to as using design patterns. Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, provides the foundational work on design patterns. Design Patterns serves modern-day OO programmers much as Larry Constantine and Ed Yourdon's structured design served a previous generation; it provides guidelines for program structures that are more effective than other program structures.

If Figure 4 shows the correct balance of designing versus refactoring for environments experiencing high rates of change, then the quality of initial design remains extremely important. Design patterns provide the means for improving the quality of initial designs by offering models that have proven effective in the past.

So, you might ask, why a separate refactoring book? Can't we just use the design patterns in redesign? Yes and no. As all developers (and their managers) understand, messing with existing code can be a ticklish proposition. The cliché "if it ain't broke, don't fix it" rings in their ears. However, as Fowler comments, "The program may not be broken, but it does hurt." Fear of breaking some part of the code base that's "working" actually hastens the degradation of that code base. Fowler is well aware of the concern: "Before I do the refactoring, I need to figure out how to do it safely.... I've written down the safe steps in the catalog." Fowler's book, Refactoring: Improving the Design of Existing Code, catalogs not only the before (poor code) and after (better code based on patterns), but also the steps required to migrate from one to the other. These migration steps reduce the chances of introducing errors along the way.
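As a rough illustration of the flavor of such a catalog entry (a made-up example, not one copied from Fowler's book), consider the common Extract Method refactoring in Java:

```java
// BEFORE: printing and amount calculation are tangled in one method.
class InvoiceBefore {
    double[] lineAmounts;
    InvoiceBefore(double[] lineAmounts) { this.lineAmounts = lineAmounts; }

    void printOwing() {
        double outstanding = 0;
        for (double amount : lineAmounts) {
            outstanding += amount;
        }
        System.out.println("amount owing: " + outstanding);
    }
}

// AFTER: the calculation is extracted into its own named method.
// Observable behavior is unchanged; the structure is easier to read and to extend.
class InvoiceAfter {
    double[] lineAmounts;
    InvoiceAfter(double[] lineAmounts) { this.lineAmounts = lineAmounts; }

    void printOwing() {
        System.out.println("amount owing: " + outstandingAmount());
    }

    double outstandingAmount() {
        double outstanding = 0;
        for (double amount : lineAmounts) {
            outstanding += amount;
        }
        return outstanding;
    }
}
```

The safe steps are the small ones in between: extract, compile, run the tests, and only then move on to the next change.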

Beck describes his "two-hat" approach to refactoring - namely that adding new functionality and refactoring are two different activities. Refactoring, per se, does not change the observable behavior of the software; it enhances the internal structure. When new functionality needs to be added, the first step is often to refactor in order to simplify the addition of new functionality. The new functionality that is proposed, in fact, should provide the impetus to refactor. Refactoring might be thought of as incremental, as opposed to monumental, redesign. "Without refactoring, the design of the program will decay," Fowler writes. "Loss of structure has a cumulative effect." Historically, our approach to maintenance has been "quick and dirty," so even in those cases where the initial design was good, it degraded over time.
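A small, hypothetical Java sketch of the two-hat sequence (the names and numbers are invented for illustration): first the refactoring hat, which keeps behavior identical, then the new-functionality hat, which the refactoring has just made cheap.

```java
// Hat 1 (refactor): the flat rate used to be inlined wherever it was needed;
// pull it into one place. No observable behavior changes in this step.
class Shipping {
    double costFor(double weightKg) {
        return baseRatePerKg() * weightKg;
    }
    double baseRatePerKg() { return 4.5; }
}

// Hat 2 (new functionality): with the rate isolated, adding an express option
// is a small, local change instead of a scatter of edits across the code base.
class ShippingWithExpress extends Shipping {
    double expressCostFor(double weightKg) {
        return baseRatePerKg() * weightKg * 1.5; // illustrative 50% surcharge
    }

    public static void main(String[] args) {
        ShippingWithExpress s = new ShippingWithExpress();
        System.out.println(s.costFor(2.0));        // 9.0, unchanged by the refactoring
        System.out.println(s.expressCostFor(2.0)); // 13.5, the new feature
    }
}
```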

Figure 5 - Software Entropy over Time.

Figure 5 shows the impact of neglected refactoring - at some point, the cost of enhancements becomes prohibitive because the software is so shaky. At this point, monumental redesign (or replacement) becomes the only option, and these are usually high-risk, or at least high-cost, projects. Figure 5 also shows that while in the 1980s software decay might have taken a decade, the rate of change today hastens the decay. For example, many client-server applications hurriedly built in the early 1990s are now more costly to maintain than mainframe legacy applications built in the 1980s.

Data Refactoring: Comments by Ken Orr

Editor's note: As I mentioned above, one thing I like about XP and refactoring proponents is that they are clear about the boundary conditions for which they consider their ideas applicable. For example, Fowler has an entire chapter titled "Problems with Refactoring." Database refactoring tops Fowler's list. Fowler's target, as stated in the subtitle to his book, is to improve code. So, for data, I turn to someone who has been thinking about data refactoring for a long time (although not using that specific term). The following section on data refactoring was written by Ken Orr.

When Jim asked me to write about data refactoring, I had to ask him what he really meant. It seemed to me to come down to a couple of very simple ideas:

1. Do what you know how to do.
2. Do it quickly.
3. When change occurs, go back and redesign.
4. Go to 1.

Over the years, Jim and I have worked together on a variety of systems methodologies, all of which were consistent with the refactoring philosophy. Back in the 1970s, we created a methodology built on data structures. The idea was that if you knew what people wanted, you could work backward and design a database that would give you just the data that you needed, and from there you could determine just what inputs you needed to update the database so that you could produce the output required. Creating systems by working backward from outputs to database to inputs proved to be a very effective and efficient means of developing systems. This methodology was developed at about the same time that relational databases were coming into vogue, and we could show that our approach would always create a well-behaved, normalized database. More than that, however, was the idea that approaching systems this way created minimal systems. In fact, one of our customers actually used this methodology to rebuild a system that was already in place. The customer started with the outputs and worked backward to design a minimal database with minimal input requirements.
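As a toy illustration of working backward from a required output (an invented example, not one taken from the Warnier/Orr material): if the only report the business needs lists customer name, order date, and order total, then the minimal database holds exactly the data needed to produce those fields and nothing more.

```java
// Working backward from the required output to a minimal data design.
// The report needs customer name, order date, and order total, so only two
// entities and a handful of stored fields are required; nothing speculative is kept.
record Customer(int customerId, String name) {}
record Order(int orderId, int customerId, java.time.LocalDate orderDate, double total) {}

record ReportLine(String customerName, java.time.LocalDate orderDate, double total) {}

class MinimalSystem {
    static ReportLine produce(Customer c, Order o) {
        return new ReportLine(c.name(), o.orderDate(), o.total());
    }

    public static void main(String[] args) {
        Customer c = new Customer(1, "Acme Co.");
        Order o = new Order(10, 1, java.time.LocalDate.of(2000, 1, 15), 250.0);
        System.out.println(produce(c, o));
    }
}
```

If a new output is requested later, the cycle repeats: the new requirement is folded back in and the minimal design is derived again, rather than guessed at up front.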

The new system had only about one-third the data elements of the system it was replacing. This was a major breakthrough. These developers came to understand that creating minimal systems had enormous advantages: they were much smaller and therefore much faster to implement, and they were much easier to change. Still, building minimal systems goes against the grain of many analysts and programmers, who pride themselves on thinking ahead and anticipating future needs, no matter how remote. I think this attitude stems from the difficulty that programmers have had with maintenance. Maintaining large systems has been so difficult and fraught with problems that many analysts and programmers would rather spend enormous effort at the front end of the systems development cycle, so they do not have to maintain the system ever again. But as history shows, this approach of guessing about the future never works out. No matter how clever we are in thinking ahead, some new, unanticipated requirement comes up to bite us. (How many people included Internet-based e-business as one of their top requirements in systems they were building 10 years ago?)

Ultimately, one of the reasons that maintenance is so difficult revolves around the problem of changing the database design. In most developers' eyes, once you design a database and start to program against it, it is almost impossible to change that database design. In a way, the database design is something like the foundation of the system: once you have poured concrete for the foundation, there is almost no way you can go back and change it. As it turns out, major changes to databases in large systems happen very infrequently, only when they are unavoidable. People simply do not think about redesigning a database as a normal part of systems maintenance, and, as a consequence, major changes are often unbelievably difficult.

Enter Data Refactoring

Jim and I had never been persuaded by the argument that the database design could never be changed once installed. We had the idea that if you wanted to have a minimal system, then it was necessary to take changes or new requirements to the system and repeat the basic system cycle over again, reintegrating these new requirements with the original requirements to create a new system. You could say that what we were doing was data refactoring, although we never called it that.

The advantages of this approach turned out to be significant. For one thing, there was no major difference between development of a new system and the maintenance or major modification of an existing one. This meant that training and project management could be simplified considerably. It also meant that our systems could be developed in less time, since changes were "built in" rather than added on to the existing system. Over a period of years, we built a methodology (Data Structured Systems Development or Warnier/Orr) and trained thousands of systems analysts and programmers. The process that we developed was largely manual, although we thought that if we built a detailed-enough methodology it should be possible to automate large pieces of that methodology in CASE tools.

Automating Data Refactoring

To make the story short, a group of systems developers in South America finally accomplished the automation of our data refactoring approach in the late 1980s. A company led by Breogán Gonda and Nicolás Jodal created a tool called GeneXus [1] that accomplished what we had conceived in the 1970s. They created an approach in which you could enter data structures for input screens; with those data structures, GeneXus automatically designed a normalized database and generated the code to navigate, update, and report against that database. But that was the easy part. They designed their tool in such a way that when requirements changed or users came up with something new or different, they could restate their requirements, rerun (recompile), and GeneXus would redesign the database, convert the previous database automatically to the new design, and then regenerate just those programs that were affected by the changes in the database design. They created a closed-loop refactoring cycle based on data requirements.
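This is not how GeneXus itself is implemented; the following is only a rough Java sketch, with invented names, of the kind of closed-loop step being described: compare the old and new table definitions, emit the schema changes, and flag the programs touched by the changed columns.

```java
import java.util.*;

// Closed-loop sketch: diff an old and a new table definition, emit ALTER TABLE
// statements for the added columns, and list the programs affected by the change.
class DataRefactoringStep {
    public static void main(String[] args) {
        Map<String, String> oldCols = Map.of("customer_id", "INT", "name", "VARCHAR(80)");
        Map<String, String> newCols = new LinkedHashMap<>(oldCols);
        newCols.put("email", "VARCHAR(120)"); // new requirement stated by the user

        // Which generated programs read or write each column (hypothetical inventory).
        Map<String, List<String>> programsByColumn =
                Map.of("email", List.of("CustomerMaintenance", "CustomerReport"));

        for (var col : newCols.entrySet()) {
            if (!oldCols.containsKey(col.getKey())) {
                System.out.println("ALTER TABLE customer ADD COLUMN "
                        + col.getKey() + " " + col.getValue() + ";");
                System.out.println("regenerate: "
                        + programsByColumn.getOrDefault(col.getKey(), List.of()));
            }
        }
    }
}
```

The essential property is the loop: restate the requirement, rerun, and only the affected parts of the schema and the generated code are touched.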

GeneXus showed us what was really possible using a refactoring framework. For the first time in my experience, developers were freed from having to worry about future requirements. It allowed them to define just what they knew and then rapidly build a system that did just what they had defined. Then, when (not if) the requirements changed, they could simply reenter those changes, recompile the system, and they had a new, completely integrated, minimal system that incorporated the new requirements.

What Does All This Mean?

Refactoring is becoming something of a buzzword. And like all buzzwords, there is some good news and some bad news. The good news is that, when implemented correctly, refactoring makes it possible for us to build very robust systems very rapidly. The bad news is that we have to rethink how we go about developing systems. Many of our most cherished project management and development strategies need to be rethought. We have to become very conscious of interactive, incremental design. We have to be much more willing to prototype our way to success and to use tools that will do complex parts of the systems development process (database design and code generation) for us.

In the 1980s, CASE was a technology that was somehow going to revolutionize programming. In the 1990s, objects and OO development were going to do the same. Neither of these technologies lived up to their early expectations. But today, tools like GeneXus really do many of the things that the system gurus of the 1980s anticipated. It is possible, currently, to take a set of requirements, automatically design a database from those requirements, generate an operational database from among the number of commercially available relational databases (Oracle, DB2, Informix, MS SQL Server, and Access), and generate code (prototype and production) that will navigate, update, and report against those databases in a variety of different languages (COBOL, RPG, C, C++, and Java). This new approach to systems development allows us to spend much more time with users, exploring their requirements and giving them user interface choices that were never possible when we were building things at arm's length. But not everybody appreciates this new world. For one thing, it takes a great deal of the mystery out of the process. For another, it puts much more stress on rapid development.

When people tell you that building simple, minimal systems is out of date in this Internet age, tell them that the Internet is all about speed and service. Tell them that refactoring is not just the best way to build the kind of systems that we need for the 21st century; it is the only way.

Notes

[1] Gonda and Jodal created a company called Artech to market the GeneXus product. It currently has more than 3,000 customers worldwide and is marketed in the US by GeneXus, Inc.

Crystal Light Methods: Comments by Alistair Cockburn

Editor's note: In the early 1990s, Alistair Cockburn was hired by the IBM Consulting Group to construct and document a methodology for OO development. IBM had no preferences as to what the answer might look like, just that it work. Cockburn's approach to the assignment was to interview as many project team members as possible, writing down whatever the teams said was important to their success (or failure). The results were surprising. The remainder of this section was written by Cockburn and is based on his in-process book.

In the IBM study, team after successful team "apologized" for not following a formal process, for not using a high-tech CASE tool, for "merely" sitting close to each other and discussing as they went. Meanwhile, a number of failing teams puzzled over why they failed despite using a formal process - maybe they had not followed it well enough? I finally started encountering teams who asserted that they succeeded exactly because they did not get caught up in fancy processes and deliverables, but instead sat close together so they could talk easily and delivered tested software frequently. These results have been consistent, from 1991 to 1999, from Hong Kong to the Americas, Norway, and South Africa, in COBOL, Smalltalk, Java, Visual Basic, Sapiens, and Synon. The shortest statement of the results is:

To the extent you can replace written documentation with face-to-face interactions, you can reduce reliance on written work products and improve the likelihood of delivering the system.

The more frequently you can deliver running, tested slices of the system, the more you can reduce reliance on written "promissory" notes and improve the likelihood of delivering the system. People are communicating beings. Even introverted programmers do better with informal, face-to-face communication than with paper documents. From a cost and time perspective, writing takes longer and is less communicative than discussing at the whiteboard.

Written, reviewed requirements and design documents are "promises" for what will be built, serving as timed progress markers. There are times when creating them is good. However, a more accurate timed progress marker is running, tested code. It is more accurate because it is not a timed promise; it is a timed accomplishment.

Recently, a bank's IT group decided to take the above results at face value. They began a small project by simply putting three people into the same room and more or less leaving them alone. Surprisingly (to them), the team delivered the system in a fine, timely manner. The bank management team was a bit puzzled. Surely it can't be this simple?

It is not quite so simple. Another result of all those project interviews was that different projects have different needs. Terribly obvious, except (somehow) to methodologists. Sure, if your project only needs 3 to 6 people, just put them into a room together. But if you have 45 or 100 people, that will not work. If your system must pass Food and Drug Administration scrutiny, you cannot work that way. If you are going to fly me to Mars on a rocket, I'll ask you not to try it. We must remember factors such as team size and the demands on the project:

- As the number of people involved grows, so does the need to coordinate communications.
- As the potential for damage increases, the tolerance for personal stylistic variations decreases.
- Some projects depend on time-to-market and can tolerate defects (Web browsers being an example); other projects aim for traceability or legal liability protection.

The result of collecting those factors is shown in Figure 6. The figure shows three factors that influence the selection of methodology: communications load (as given by staff size), system criticality, and project priorities.

Figure 6 - The Family of Crystal Methods.

Locate the segment of the X axis for the staff size (typically just the development team). For a distributed development project, move right one box to account for the loss of face-to-face communications.

On the Y axis, identify the damage effect of the system: loss of comfort, loss of "discretionary" monies, loss of "essential" monies (e.g., going bankrupt), or loss of life.

The different planes behind the top layer reflect the different possible project priorities, whether it is time to market at all costs (as in the first layer), productivity and tolerance (the hidden second layer), or legal liability (the hidden third layer). The box in the grid indicates the class of projects (for example, C6) with similar communications load and safety needs and can be used to select a methodology.

The grid characterizes projects fairly objectively, useful for choosing a methodology. I have used it myself to change methodologies on a project as it shifted in size and complexity. There are, of course, many other factors, but these three determine methodology selection quite well. Suppose it is time to choose a methodology for the project. To benefit from the project interviews mentioned earlier, create the lightest methodology you can even imagine working for the cell in the grid, one in which person-to-person communication is enhanced as much as possible, and running tested code is the basic timing marker. The result is a light, habitable (meaning rather pleasant, as opposed to oppressive), effective methodology. Assign this methodology to C6 on the grid.
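As a toy sketch only (the exact cell-naming and banding rules here are my assumption for illustration, not Cockburn's specification), the selection just described can be read as a simple lookup from staff size and criticality to a grid cell such as C6:

```java
// Toy lookup for the methodology grid: the letter is assumed to encode criticality
// (Comfort, Discretionary money, Essential money, Life) and the number the
// staff-size band. Both encodings are illustrative assumptions.
class CrystalGrid {
    static String cell(int staffSize, char criticality) {
        int band;
        if (staffSize <= 6) band = 6;
        else if (staffSize <= 20) band = 20;
        else if (staffSize <= 40) band = 40;
        else band = 100;
        return criticality + Integer.toString(band);
    }

    public static void main(String[] args) {
        System.out.println(cell(5, 'C'));  // C6
        System.out.println(cell(35, 'D')); // D40
    }
}
```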

Repeating this for all the boxes produces a family of lightweight methods, related by their reliance on people, communication, and frequent delivery of running code. I call this family the Crystal Light family of methodologies. The family is segmented into vertical stripes by color (not shown in figure): the methodology for 2-6-person projects is Crystal Clear, for 6-20-person projects is Crystal Yellow, for 20-40-person projects is Crystal Orange, then Red, Magenta, Blue, etc. Shifts in the vertical axis can be thought of as "hardening" of the methodology. A life-critical 2-6-person project would use "hardened" Crystal Clear, and so on. What surprises me is that the project interviews are showing rather little difference in the hardness requirement, up to life-critical projects.

Crystal Clear is documented in a forthcoming book, currently in draft form on the Web. Crystal Orange is outlined in the methodology chapter of Surviving Object-Oriented Projects (see Editor's note below).

Having worked with the Crystal Light methods for several years now, I have found a few more surprises.

The first surprise is just how little process and control a team actually needs to thrive (this is thrive, not merely survive). It seems that most people are interested in being good citizens and in producing a quality product, and they use their native cognitive and communications abilities to accomplish this. This matches Jim's conclusions about adaptive software development (see Resources and References, page 15). You need one notch less control than you expect, and less is better when it comes to delivering quickly. More specifically, when Jim and I traded notes on project management, we found we had both observed a critical success element of project management: that team members understand and communicate their work dependencies. They can do this in lots of simple, low-tech, low-overhead ways. It is often not necessary to introduce tool-intensive work products to manage it.

Oh, but it is necessary to introduce two more things into the project: trust and communication.

A project that is short on trust is in trouble in more substantial ways than just the weight of the methodology. To the extent that you can enhance trust and communication, you can reap the benefits of Crystal Clear, XP, and the other lightweight methods. The second surprise in defining the Crystal Light methods was XP. I had designed Crystal Clear to be the least bureaucratic methodology I could imagine. Then XP showed up in the same place on the grid and made Clear look heavy! What was going on?

It turns out that Beck had found another knob to twist on the methodology control panel: discipline. To the extent that a team can increase its internal discipline and consistency of action, it can lighten its methodology even more. The Crystal Light family is predicated on allowing developers the maximum individual preference. XP is predicated on having everyone follow tight, disciplined practices:

- Everyone must comply with a strict coding standard.
- The team forms a consensus on what is "better" code, so that changes converge and repetition is avoided.
- Unit tests exist for all functions, and they always pass at 100% (a minimal test sketch appears after the principle below).
- All production code is written by two people sitting together.
- Tested function is delivered frequently, in the two- to four-week range.

In other words, Crystal Clear illustrates and XP magnifies the core principle of light methods:

Intermediate work products can be reduced and project delivery enhanced, to the extent that team communications are improved and frequency of delivery increased.
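A minimal, hypothetical example of the "unit tests for all functions, always passing at 100%" practice, written as a JUnit 5 test in Java (the class under test and its fare rule are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// The class under test is trivial on purpose: the XP point is that every piece
// of production code has tests like these, and they must all stay green.
class Fare {
    static int totalCents(int baseCents, int zones) {
        return baseCents + 25 * zones;
    }
}

class FareTest {
    @Test
    void baseFareWithNoZones() {
        assertEquals(150, Fare.totalCents(150, 0));
    }

    @Test
    void eachZoneAddsTwentyFiveCents() {
        assertEquals(200, Fare.totalCents(150, 2));
    }
}
```

Such tests double as the "timed accomplishment" progress marker discussed earlier: a feature counts as done when its tests run and pass.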

XP and Crystal Clear are related to each other in the following ways:

- XP pursues discipline, but it is harder for a team to follow.
- Crystal Clear permits greater individuality and looser work habits in exchange for some loss in productivity.
- Crystal Clear may be easier for a team to adopt, but XP produces better results if the team can follow it.
- A team can start with Crystal Clear and move itself to XP. A team that falls off XP can back up to Crystal Clear.

Although there are differences between Crystal Clear and XP, the fundamental values are consistent - simplicity, communications, and minimal formality. Editor's note: For more information on the Crystal Clear methodology, see Alistair Cockburn's Web site, listed in the References and Resources section. For more information on Crystal Orange, see the book Surviving Object-Oriented Projects, also listed in the References.

Conclusions: Going to Extremes

Orr and Cockburn each describe their approaches and experience with lighter methodologies. But earlier, in describing Chrysler's C3 project, I alluded to the difficulty in extending the use of approaches like XP or even RAD. In every survey we have done of eAD subscribers, and every survey conducted of software organizations in general, respondents rate reducing delivery time as a critical initiative. But it is not just initial delivery that is critical. Although Amazon.com may have garnered an advantage by its early entry in the online bookstore market, it has maintained leadership by continuous adaptation to market conditions - which means continuous changes to software. Deliver quickly. Change quickly. Change often. These three driving forces, in addition to better software tools, compel us to rethink traditional software engineering practices - not abandon the practices, but rethink them. XP, for example, does not ask us to abandon good software engineering practices. It does, however, ask us to consider closely the absolute minimum set of practices that enable a small, co-located team to function effectively in today's software delivery environment.

Cockburn made the observation that implementation of XP (at least as Beck and Jeffries define it) requires three key environmental features: inexpensive interface changes, close communications, and automated regression testing. Rather than asking "How do I reduce the cost of change?" XP, in effect, postulates a low-change-cost environment and then says, "This is how we will work." For example, rather than experience the delays of a traditional relational database environment (and dealing with multiple outside groups), the C3 project used GemStone, an OO database. Some might argue that this approach is cheating, but that is the point. For example, Southwest Airlines created a powerhouse by reducing costs - using a single type of aircraft (Boeing 737s). If turbulence and change are the norm, then perhaps the right question may be: how do we create an environment in which the cost (and time) of change is minimized? Southwest got to expand without an inventory of "legacy" airplanes, so its answer might be different than American Airlines' answer, but the question remains an important one.

There are five key points to take away from this discussion of XP and light methods:

- For projects that must be managed in high-speed, high-change environments, we need to reexamine software development practices and the assumptions behind them.
- Practices such as refactoring, simplicity, and collaboration (pair programming, metaphor, collective ownership) prompt us to think in new ways.
- We need to rethink both how to reduce the cost of change in our existing environments and how to create new environments that minimize the cost of change.
- In times of high change, the ability to refactor code, data, and whole applications becomes a critical skill.
- Matching methods to the project, relying on people first and documentation later, and minimizing formality are all related.

Editor's Musings

Extreme rules! In the middle of writing this issue, I received the 20 December issue of BusinessWeek magazine, which contains the cover story "Xtreme Retailing," about "brick" stores fighting back against their "click" cousins. If we can have Extreme Retailing, why not Extreme Programming?

Refactoring, design patterns, comprehensive unit testing, pair programming - these are not the tools of hackers. These are the tools of developers who are exploring new ways to meet the difficult goals of rapid product delivery, low defect levels, and flexibility. Writing about quality, Beck says, "The only possible values are 'excellent' and 'insanely excellent,' depending on whether lives are at stake or not" and "run the tests until they pass (100% correct)." You might accuse XP practitioners of being delusional, but not of being poor-quality-oriented hackers. To traditional methodology proponents, reducing time-to-market is considered the enemy of quality. However, I've seen some very slow development efforts produce some very poor-quality software, just as I've seen speedy efforts produce poor-quality software. Although there is some obvious relationship between time and quality, I think it is a much more complicated relationship than we would like to think.

Traditional methodologies were developed to build software in environments characterized by low to moderate levels of change and reasonably predictable desired outcomes. However, the business world is no longer very predictable, and software requirements change at rates that swamp traditional methods. "The bureaucracy and inflexibility of organizations like the Software Engineering Institute and practices such as CMM are making them less and less relevant to today's software development issues," remarks Bob Charette, who originated the practices of lean development for software. As Beck points out in the introduction to his book, the individual practices of XP are drawn from well-known, well-tested, traditional practices. The principles driving the use of these practices, along with the integrative nature of using a specific minimal set of practices, make XP a novel solution to modern software development problems.

But I must end with a cautionary note. None of these new practices has much history. Their successes are anecdotal, rather than studied and measured. Nevertheless, I firmly believe that our turbulent e-business economy requires us to revisit how we develop and manage software delivery. While new, the approaches offer alternatives worth considering. In the coming year, we will no doubt see more in print on XP. Beck, Jeffries, Fowler, and Cunningham are working in various combinations with others to publish additional books on XP, so additional information on practices, management philosophy, and project examples will be available.

Finally, a note on how to continue the discussion of XP and other "extremes": as I announced in the previous issue, we have initiated an eAD discussion forum. If you are interested in joining the group, send us an e-mail at EAD@cutter.com, and we will add you to the discussion group and send logon information.

