Talk about "Extreme Programming"


Extreme Programming

Translated by Bigmac [AKA], March-Bird, Lucian, Yjf, Taopin, WL, Jazz, Han Wei, Nullgate, Simon [AKA]

As we have explored in several issues of eAD, the two most pressing issues in information technology today are:

How do we deliver functionality to business clients quickly?

How do we keep up with near-continuous change?

Change is changing. Not only does the pace of change continue to accelerate, but, as the September issue of eAD pointed out, organizations are having to deal with different types of change: disruptive change and punctuated equilibrium. Disruptive technologies, like personal computers in the early 1980s, impact an industry (in the case of PCs, several related industries), while a punctuated equilibrium - a massive intervention into an ecosystem or an economy - impacts a very large number of species, or companies. The Internet, which has become the backbone for e-commerce and e-business, has disrupted a wide range of industries - more a punctuated equilibrium than a disruption.

When whole business models are changing, when time-to-market becomes the mantra of companies, when flexibility and interconnectedness are demanded from even the most staid organization, it is then that we must examine every aspect of how business is managed, customers are delighted, and products are developed. The Extreme Programming movement has been a subset of the object-oriented (OO) programming community for several years, but has recently attracted more attention, especially with the recent release of Kent Beck's new book Extreme Programming Explained: Embrace Change. Don't be put off by the somewhat "in-your-face" moniker of Extreme Programming (XP to practitioners). Although Beck does not claim that practices such as pair programming and incremental planning originated with XP, there are some very interesting, and I think important, concepts articulated by XP. There's a lot of talk today about change, but XP has some pretty good ideas about how to actually do it. Hence the subtitle, Embrace Change.

There is a tendency, particularly by rigorous methodologists, to dismiss anything less ponderous than the Capability Maturity Model (CMM) or maybe the International Organization for Standardization's standards as hacking. The connotation: hacking promotes doing rather than thinking and therefore results in low quality. This is an easy way to dismiss practices that conflict with one's own assumptions about the world. Looked at another way, XP may be a potential piece of a puzzle I've been writing about over the past 18 months. Turbulent times give rise to new problems that, in turn, give rise to new practices - new practices that often fly in the face of conventional wisdom but survive because they are better adapted to the new reality. There are at least four practices I would place in this category:

XP - the focus of this issue of eAD

Lean Development - discussed in the November 1998 issue of eAD

Crystal Light Methods - mentioned in the November 1999 issue of eAD and further discussed in this issue

Adaptive Software Development - described in the August 1998 issue of eAD (then called Application Development Strategies - ADS)

Although there are differences in each of these practices, there are also similarities: they each describe variations from the conventional wisdom about how to approach software development. Whereas lean and adaptive development practices target strategic and project management, XP brings its differing world view to the realm of the developer and tester.

Much of XP is derived from good practices that have been around for a long time. "None of the ideas in XP are new. Most are as old as programming," Beck offers to readers in the preface to his book. I might differ with Beck in one respect: although the practices XP uses are not new, the conceptual foundation and how they are melded together greatly enhance these "older" practices. I think there are four critical ideas to take away from XP (in addition to a number of other good ideas):

The cost of change

Refactoring

Collaboration

Simplicity

But first, I discuss some XP basics: the dozen practices that define XP.

XP - The Basics

I must admit that one thing I like about XP's principal figures is their lack of pretension. XP proponents are careful to articulate where they think XP is appropriate and where it is not. While practitioners like Beck and Ron Jeffries may envision that XP has wider applicability, they are generally circumspect about their claims. For example, both are clear about XP's applicability to small (less than 10 people), co-located teams (with which they have direct experience); they do not try to convince people that the practices will work for teams of 200.

The Project

The most prominent XP project reported on to date is the Chrysler Comprehensive Compensation system (the C3 project) that was initiated in the mid-1990s and converted to an XP project in 1997. Jeffries, one of the "Three Extremoes" (with Beck and Ward Cunningham), and I spent several hours talking about the C3 project and other XP issues at the recent Miller Freeman Software Developer conference in Washington, DC, USA.

Originally, the C3 project was conceived as an OO programming project, specifically using Smalltalk. (Smalltalk is a high-level object-oriented programming language originally developed at Xerox.) Beck, a well-known Smalltalk expert, was called in to consult on Smalltalk performance optimization, and the project was transformed into a pilot of OO (XP) practices after the original project was deemed unreclaimable. Beck brought in Jeffries to assist on a more full-time basis, and Jeffries worked with the C3 team until spring 1999. The initial requirements were to handle the monthly payroll of some 10,000 salaried employees. The system consists of approximately 2,000 classes and 30,000 methods and was ready within a reasonable tolerance period of the planned schedule.

As we talked, I asked Jeffries how success on the C3 project translated into XP use on other Chrysler IT projects. His grin told me all I needed to know. I've been involved in enough rapid application development (RAD) projects for large IT organizations over the years to understand why success does not consistently translate into acceptance. There are always at least a hundred very good reasons why success at RAD, or XP, or lean development, or other out-of-the-box approaches does not spread more widely - but more on this question later.

Practices

One thing to keep in mind is that XP practices are intended for use with small, co-located teams. They therefore tend toward minimalism, at least as far as artifacts other than code and test cases are concerned. The presentation of XP's practices has both positive and negative aspects. At one level, they sound like rules - do this, don't do that. Beck explains that the practices are more like guidelines than rules, guidelines that are pliable depending on the situation. However, some, like the "40-hour week," can come off as a little preachy. Jeffries makes the point that the practices also interact, counterbalance, and reinforce each other, such that picking and choosing which to use and which to discard can be tricky.

The planning game. XP's planning approach mirrors that of most iterative RAD approaches to projects. Short, three-week cycles, frequent updates, splitting business and technical priorities, and assigning "stories" (a story defines a particular feature requirement and is displayed in a simple card format) all define XP's approach to planning.
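As an illustration only (the class and field names below are my own, not from Beck's book), a story can be thought of as a small record carrying roughly what fits on an index card: a short description, the customer's business priority, and the developers' effort estimate. A minimal Java sketch:

// Illustrative sketch of an XP "story" - the information typically written on a card.
public class StoryCard {
    private final String title;         // short feature name
    private final String description;   // what the customer wants, in the customer's words
    private final int priority;         // business priority, set by the customer
    private final double estimateDays;  // effort estimate, supplied by the developers

    public StoryCard(String title, String description, int priority, double estimateDays) {
        this.title = title;
        this.description = description;
        this.priority = priority;
        this.estimateDays = estimateDays;
    }

    public int getPriority() { return priority; }
    public double getEstimateDays() { return estimateDays; }
}

During the planning game, the customer orders such stories by business value while the developers supply the estimates for each short cycle.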

Small releases. "Every release should be as small as possible, containing the most valuable business requirements," states Beck. This mirrors two of Tom Gilb's principles of evolutionary delivery from his book Principles of Software Engineering Management: "All large projects are capable of being divided into many useful partial result steps," and "Evolutionary steps should be delivered on the principle of the juiciest one next." Small releases provide the sense of accomplishment that is often missing in long projects, as well as more frequent (and more relevant) feedback. However, a development team needs to also consider the difference between "release" and "releasable." The cost of each release - installation, training, conversions - needs to be factored into whether or not the product produced at the end of a cycle is actually released to the end user or is simply declared releasable.

Metaphor. XP's use of the terms "metaphor" and "story" takes a little wearing in to become comfortable. However, both terms help make the technology more understandable in human terms, especially to clients. At one level, metaphor and architecture are synonyms - they are both intended to provide a broad view of the project's goal. But architectures often get bogged down in symbols and connections. XP uses "metaphor" in an attempt to define an overall coherent theme to which both developers and business clients can relate. The metaphor describes the broad sweep of the project, while stories are used to describe individual features.

Simple design. Simple design has two parts. One, design for the functionality that has been defined, not for potential future functionality. Two, create the best design that can deliver that functionality. In other words, don't guess about the future: create the best (simple) design you can today. "If you believe that the future is uncertain, and you believe that you can cheaply change your mind, then putting in functionality on speculation is crazy," writes Beck. "Put in what you need when you need it."
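A minimal sketch of what "simple design" means in code (my own example, not from the book): if the only story specified so far is computing a salaried employee's monthly pay, the simplest design implements exactly that and nothing more.

// Simple design: only the functionality that has been defined, nothing speculative.
public class Payroll {
    // Today's requirement: monthly pay for a salaried employee.
    public double monthlyPay(double annualSalary) {
        return annualSalary / 12.0;
    }
    // No hooks yet for bonuses, currencies, or tax rules - those are added
    // (and the design refactored) only when a story actually requires them.
}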

In the early 1980s, I published an article in Datamation magazine titled "Synchronizing Data with Reality." The gist of the article was that data quality is a function of use, not capture and storage. Furthermore, I said that data that was not systematically used would rapidly go bad. Data quality is a function of systematic usage, not anticipatory design. Trying to anticipate what data we will need in the future only leads us to design for data that we will probably never use; even the data we did guess correctly on will not be correct anyway. XP's simple design approach mirrors the same concepts. As described later in this article, this does not mean that no anticipatory design ever happens; it does mean that the economics of anticipatory design change dramatically.

Refactoring. If I had to pick one thing that sets XP apart from other approaches, it would be refactoring - the ongoing redesign of software to improve its responsiveness to change. RAD approaches have often been associated with little or no design; XP should be thought of as continuous design. In times of rapid, constant change, much more attention needs to be focused on refactoring. See the sections "Refactoring" and "Data Refactoring," below.

Testing. XP is full of interesting twists that encourage one to think - for example, how about "test and then code"? I've worked with software companies and a few IT organizations in which programmer performance was measured on lines of code delivered and testing was measured on defects found - neither side was motivated to reduce the number of defects prior to testing. XP uses two types of testing: unit and functional. However, the practice for unit testing involves developing the test for the feature prior to writing the code, and further states that the tests should be automated. Once the code is written, it is immediately subjected to the test suite - instant feedback.
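As a small test-first illustration (assuming a JUnit 4 style test framework; the Payroll class is the invented example from the sketch above), the unit test is written before the production code, fails at first, and then passes once the simplest implementation is in place:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Written before Payroll.monthlyPay exists: the test states the intended behavior
// and becomes part of the automated suite that runs after every change.
public class PayrollTest {
    @Test
    public void monthlyPayIsOneTwelfthOfAnnualSalary() {
        Payroll payroll = new Payroll();
        assertEquals(10000.0, payroll.monthlyPay(120000.0), 0.001);
    }
}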

The most active discussion group on XP remains the Wiki exchange (XP is a piece of the overall discussion about patterns). One of the discussions centers around a lifecycle of Listen (requirements), Test, Code, Design: listen closely to customers while gathering their requirements; develop test cases; code the objects (using pair programming); design (or refactor) as more objects are added to the system. This seemingly convoluted lifecycle begins to make sense only in an environment in which change dominates.

Pair programming. One of the few software engineering practices that enjoys near-universal acceptance (at least in theory) and has been well measured is software inspections (also referred to as reviews or walkthroughs). At their best, inspections are collaborative interactions that speed learning as much as they uncover defects. One of the lesser-known statistics about inspections is that although they are very cost effective in uncovering defects, they are even more effective at preventing defects in the first place through the team's ongoing learning and incorporation of better programming practices.

One software company client I worked with cited an internal study that showed that the amount of time to isolate defects was 15 hours per defect with testing, 2-3 hours per defect using inspections, and 15 minutes per defect by finding the defect before it got to the inspection. The latter figure arises from the ongoing team learning engendered by regular inspections. Pair programming takes this to the next step - rather than the incremental learning using inspections, why not continuous learning using pair programming?

"Pair programming is a dialog between two people trying to simultaneously program and understand how to program better," writes Beck. Having two people sitting in front of the same terminal (one entering code or test cases, one reviewing and thinking) creates a continuous, dynamic interchange. Research conducted by Laurie Williams for her doctoral dissertation at the University of Utah confirms that pair programming's benefits are not just wishful thinking (see Resources and References).

Collective ownership. XP defines collective ownership as the practice that anyone on the project team can change any of the code at any time. For many programmers, and certainly for many managers, the prospect of communal code raises concerns, ranging from "I don't want those bozos changing my code" to "Who do I blame when problems arise?" Collective ownership provides another level to the collaboration begun by pair programming.

Pair programming encourages two people to work closely together: each drives the other a little harder to excel. Collective ownership encourages the entire team to work more closely together: each individual and each pair strives a little harder to produce high-quality designs, code, and test cases. Granted, all this forced "togetherness" may not work for every project team.

Continuous integration. Daily builds have become the norm in many software companies - mimicking the published material on the "Microsoft" process (see, for example, Michael A. Cusumano and Richard Selby's Microsoft Secrets). Whereas many companies set daily builds as a minimum, XP practitioners set the daily integration as the maximum - opting for frequent builds every couple of hours. XP's feedback cycles are quick: develop the test case, code, integrate (build), and test.

The perils of integration defects have been understood for many years, but we have not always had the tools and practices to put that knowledge to good use. XP not only reminds us of the potential for serious integration errors, but provides a revised perspective on practices and tools.
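To make the feedback loop concrete (a rough sketch only; the gating policy and class names are my own, reusing the earlier examples), an integration step can simply run the whole automated test suite and reject the build on any failure:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Minimal integration gate: after each build, run the automated tests and
// fail loudly, so the team gets feedback within hours rather than days.
public class IntegrationGate {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(PayrollTest.class /*, other test classes */);
        System.out.println(result.getRunCount() + " tests, " + result.getFailureCount() + " failures");
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}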

40-hour week. Some of XP's 12 practices are principles, while others, such as the 40-hour practice, sound more like rules. I agree with XP's sentiments here; I just don't think work hours define the issue. I would prefer a statement like, "Don't burn out the troops," rather than a 40-hour rule. There are situations in which working 40 hours is pure drudgery and others in which the team has to be pried away from a 60-hour work week.

Jeffries provided additional thoughts on overtime. "What we say is that overtime is defined as time in the office when you don't want to be there. And that you should work no more than one week of overtime. If you go beyond that, there's something wrong - and you're tiring out and probably doing worse than if you were on a normal schedule. I agree with you on the sentiment about the 60-hour work week. When we were young and eager, they were probably okay. It's the dragging weeks to watch for."

I don't think the number of hours makes much difference. What defines the difference is volunteered commitment. Do people want to come to work? Do they anticipate each day with great relish? People have to come to work, but they perform great feats by being committed to the project, and commitment only arises from a sense of purpose.

On-site customer. This practice corresponds to one of the oldest cries in software development - user involvement. XP, as with every other rapid development approach, calls for ongoing, on-site user involvement with the project team.

Coding standards. XP practices are supportive of each other. For example, if you do pair programming and let anyone modify the communal code, then coding standards would seem to be a necessity.

Values and Principles

On Saturday, 1 January 2000, the Wall Street Journal (you know, the "Monday through Friday" newspaper) published a special 58-page millennial edition. The introduction to the Industry & Economics section, titled "So Long, Supply and Demand: There's a new economy out there - and it looks nothing like the old one," was written by Tom Petzinger. "The bottom line: creativity is overtaking capital as the principal elixir of growth," Petzinger states.

Petzinger is not talking about a handful of creative geniuses, but the creativity of groups - from teams to departments to companies. Once we leave the realm of the single creative genius, creativity becomes a function of the environment and how people interact and collaborate to produce results. If your company's fundamental principles point to software development as a statistically repeatable, rigorous, engineering process, then XP is probably not for you. Although XP contains certain rigorous practices, its intent is to foster creativity and communication.

Environments are driven by values and principles. XP (or the other practices mentioned in this issue) may or may not work in your organization, but, ultimately, success will not depend on using 40-hour work weeks or pair programming - it will depend on whether or not the values and principles of XP align with those of your organization. Beck identifies four values, five fundamental principles, and ten secondary principles - but I'll mention five that should provide enough background.

Communication. So, what's new here? It depends on your perspective. XP focuses on building a person-to-person, mutual understanding of the problem environment through minimal formal documentation and maximum face-to-face interaction. "Problems with projects can invariably be traced back to somebody not talking to somebody else about something important," Beck says. XP's practices are designed to encourage interaction - developer to developer, developer to customer.

Simplicity. XP asks of each team member, "What is the simplest thing that could possibly work?" Make it simple today, and create an environment in which the cost of change tomorrow is low.

Feedback "Optimism is an occupational hazard of programming," says Beck Whether it's hourly builds or frequent functionality testing with customers, XP embraces change by constant feedback Although every approach to software development advocates feedback.. "Feedback is the treatment.". - even the much-maligned waterfall model -. the difference is that XP practitioners understand that feedback is more important than feedforward Whether it's fixing an object that failed a test case or refactoring a design that is resisting a change, high-change environments require a Much Different Understanding of Feedback. Feedback: "For programming, optimism is an adventure.", "and feedback is the corresponding solution." Whether it is using repeated construction or frequent user function testing, XP can constantly receive feedback. Although we will say and feedback each time, we will say and feedback - even a very harmful waterfall model - different is that XP practitioners believe that feedback is more important than feedback (feedForward). Whether it is modified to test failure or software rejected by users from new rework, the rapid change in development environment requires a better understanding of feedback.

Courage. Whether it's a CMM practice or an XP practice that defines your discipline, discipline requires courage. Many define courage as doing what's right, even when pressured to do something else. Developers often cite the pressure to ship a buggy product and the courage to resist. However, the deeper issues can involve legitimate differences of opinion over what is right. Often, people do not lack courage - they lack conviction, which puts us right back to other values. If a team's values are not aligned, the team won't agree on what's "right," and, without conviction, courage doesn't seem so important. It's hard to work up the energy to fight for something you don't believe in.

"Courage isn't just about having the discipline," says Jeffries. "It is also a resultant value. If you do the practices that are based on communication, simplicity, and feedback, you are given courage, the confidence to go ahead in a lightweight manner" - as opposed to being weighed down by more cumbersome, design-heavy practices.

Quality work. Okay, all of you out there, please raise your hand if you advocate poor-quality work. Whether you are a proponent of the Rational Unified Process, CMM, or XP, the real issues are "How do you define quality?" and "What actions do you think deliver high quality?" Defining quality as "no defects" provides one perspective on the question; Jerry Weinberg's definition, "Quality is value to some person," provides another. I get weary of methodologists who use the "hacker" label to ward off the intrusion of approaches like XP and lean development. It seems unproductive to return the favor. Let's concede that all these approaches are based on the fundamental principle that individuals want to do a good, high-quality job. What "quality" means and how to achieve it - now that's the gist of the real debate!

Managing XP

One area in which XP (at least as articulated in Beck's book) falls short is management, understandable for a practice oriented toward both small project teams and programming. As Beck puts it, "Perhaps the most important job for the coach is the acquisition of toys and food." (Coaching is one of the components of Beck's management strategy.)

With many programmers, their recommended management strategy seems to be: get out of the way. The underlying assumption? Getting out of the way will create a collaborative environment. Steeped in the tradition of task-based project management, this assumption seems valid. However, in my experience, creating and maintaining highly functional collaborative environments challenges management far beyond making up task lists and checking off their completion.

Figure 1 - Historical lifecycle change costs.

Figure 2 - Contemporary lifecycle change costs.

The Cost of Change

Early on in Beck's book, he challenges one of the oldest assumptions in software engineering. From the mid-1970s, structured methods and then more comprehensive methodologies were sold based on the "facts" shown in Figure 1. I should know; I developed, taught, sold, and installed several of these methodologies during the 1980s.

Beck asks us to consider that perhaps the economics of Figure 1, probably valid in the 1970s and 1980s, now look like Figure 2 - that is, the cost of maintenance, or ongoing change, flattens out rather than escalates. Actually, whether Figure 2 shows today's cost profile or not is irrelevant - we have to make it true! If Figure 1 remains true, then we are doomed because of today's pace of change.

The vertical axis in Figure 1 usually depicts the cost of finding defects late in the development cycle. However, this assumes that all changes are the results of a mistake - i.e., a defect. Viewed from this perspective, traditional methods have concentrated on "defect prevention" in early lifecycle stages. But in today's environment, we cannot prevent what we do not know about - changes arise from iteratively gaining knowledge about the application, not from a defective process. So, although our practices need to be geared toward preventing some defects, they must also be geared toward reducing the cost of continuous change. Actually, as Alistair Cockburn points out, the high cost of removing defects shown by Figure 1 provides an economic justification for practices like pair programming.

In this issue of eAD, I want to restrict the discussion to change at the project or application level - decisions about operating systems, development language, database, middleware, etc., are constraints outside the control of the development team. (For ideas on "architectural" flexibility, see the June and July 1999 issues of ADS.) Let's simplify even further and assume, for now, that the business and operational requirements are known.

Our design goal is to balance the rapid delivery of functionality while we also create a design that can be easily modified. Even within the goal of rapid delivery, there remains another balance: proceed too hurriedly and bugs creep in; try to anticipate every eventuality and time flies. However, let's again simplify our problem and assume we have reached a reasonable balance of design versus code and test time.

With all these simplifications, we are left with one question: how much anticipatory design work do we do? Current design produces the functionality we have already specified. Anticipatory design builds in extra facilities with the anticipation that future requirements will be faster to implement. Anticipatory design trades current time for future time, under the assumption that a little time now will save more time later. But under what conditions is that assumption true? Might it not be faster to redesign later, when we know exactly what the changes are, rather than guessing now?

This is where refactoring enters the equation. Refactoring, according to author Martin Fowler, is "the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure." XP proponents practice continuous, incremental refactoring as a way to incorporate change. If changes are continuous, then we'll never get an up-front design completed. Furthermore, as changes become more unpredictable - a great likelihood today - then much anticipatory design likely will be wasted.

Figure 3 - Balancing design and refactoring, pre-Internet.

Figure 4 - Balancing design and refactoring today.

I think the diagram in Figure 3 depicts the situation prior to the rapid-paced change of the Internet era. Since the rate of change (illustrated by the positioning of the balance point in the figure) was lower, more anticipatory designing versus refactoring may have been reasonable. As Figure 4 shows, however, as the rate of change increases, the viability of anticipatory design loses out to refactoring - a situation I think defines many systems today.

In the long run, the only way to test whether a design is flexible involves making changes and measuring how easy they are to implement. One of the biggest problems with the traditional up-front-design-then-maintain strategy has been that software systems exhibit tremendous entropy; they degrade over time as maintainers rush fixes, patches, and enhancements into production. The problem is worse today because of the accelerated pace of change, but current refactoring approaches are not the first to address the problem. Back in the "dark ages" (circa 1986), Dave Higgins wrote Data Structured Software Maintenance, a book that addressed the high cost of maintenance, due in large part to the cumulative effects of changes to systems over time. Although Higgins advocated a particular program-design approach (the Warnier/Orr approach), one of his primary themes was to stop the degradation of systems over time by systematically redesigning programs during maintenance activities.

Higgins's approach to program maintenance was first to develop a pattern (although the term pattern was not used then) for how the program "should be" designed, then to create a map from the "good" pattern to the "spaghetti" code. Programmers would then use the map to help understand the program and, further, to revise the program over time to look more like the pattern. Using Higgins's approach, program maintenance counteracted the natural tendency of applications to degrade over time. "The objective was not to rewrite the entire application," said Higgins in a recent conversation, "but to rewrite those portions for which enhancements had been requested."

Although this early approach was not widely practiced, the ideas are the same as those driving refactoring today. Two things, however, now drive increased levels of refactoring: one is better languages and tools, and the other is rapid change.

Another approach to high change arose in the early days of RAD: the idea of throwaway code. The idea was that things were changing so rapidly that we could just code applications very quickly, then throw them away and start over when the time for change arose. This never proved to be a viable long-term strategy.

Refactoring

Refactoring is closely related to factoring, or what is now referred to as using design patterns. Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, provides the foundational work on design patterns. Design Patterns serves modern-day OO programmers much as Larry Constantine and Ed Yourdon's structured design served a previous generation; it provides guidelines for program structures that are more effective than other program structures.

If Figure 4 shows the correct balance of designing versus refactoring for environments experiencing high rates of change, then the quality of initial design remains extremely important. Design patterns provide the means for improving the quality of initial designs by offering models that have proven effective in the past.

So, you might ask, why a separate refactoring book? Can't we just use the design patterns in redesign? Yes and no. As all developers (and their managers) understand, messing with existing code can be a ticklish proposition. The cliché "if it ain't broke, don't fix it" runs deep in development folklore. The program, as Fowler comments, "may not be broken, but it does hurt." Fear of breaking some part of the code base that's "working" actually hastens the degradation of that code base. However, Fowler is well aware of the concern: "Before I do the refactoring, I need to figure out how to do it safely.... I've written down the safe steps in the catalog." Fowler's book, Refactoring: Improving the Design of Existing Code, catalogs not only the before (poor code) and after (better code based on patterns), but also the steps required to migrate from one to the other. These migration steps reduce the chances of introducing errors during the change.
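As a rough illustration of that before-and-after style, here is a sketch of one of the simplest catalogued refactorings, Extract Method; the order example and names are invented for illustration, not quoted from the book.

// Illustrative sketch of Extract Method: calculation is pulled out of a method
// that mixed calculation with presentation. Behavior is unchanged.
import java.util.List;

class Order {
    private final List<Double> lines;
    Order(List<Double> lines) { this.lines = lines; }

    // Before: calculation and printing tangled together in one method.
    void printOwingBefore() {
        double outstanding = 0.0;
        for (double line : lines) {
            outstanding += line;
        }
        System.out.println("Amount owing: " + outstanding);
    }

    // After: the calculation lives in its own well-named method.
    // The safe steps are small: extract, compile, test, then switch the caller over.
    void printOwing() {
        System.out.println("Amount owing: " + outstanding());
    }

    private double outstanding() {
        double total = 0.0;
        for (double line : lines) {
            total += line;
        }
        return total;
    }
}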

Beck describes his "two-hat" approach to refactoring - namely, that adding new functionality and refactoring are two different activities. Refactoring, per se, does not change the observable behavior of the software; it enhances the internal structure. When new functionality needs to be added, the first step is often to refactor in order to simplify the addition of that functionality. The new functionality that is proposed, in fact, should provide the impetus to refactor. Refactoring might be thought of as incremental, as opposed to monumental, redesign. "Without refactoring, the design of the program will decay," Fowler writes. "Loss of structure has a cumulative effect." Historically, our approach to maintenance has been "quick and dirty," so even in those cases where the initial design was well done, it degraded over time.
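A small sketch of the two hats in code form, under an invented invoicing example: the first edit only restructures, the second adds the requested behavior on top of the cleaner structure.

// Illustrative only. Refactoring hat: subtotal() was extracted purely to make the
// coming change easy; observable behavior did not change at that point.
// Feature hat: only afterward is the new tax parameter introduced.
import java.util.List;

class Invoice {
    private final List<Double> items;
    Invoice(List<Double> items) { this.items = items; }

    // Refactoring hat: extracted calculation, behavior unchanged.
    private double subtotal() {
        return items.stream().mapToDouble(Double::doubleValue).sum();
    }

    // Feature hat: the new, requested behavior, added after the structure was simplified.
    double total(double taxRate) {
        return subtotal() * (1.0 + taxRate);
    }
}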

Figure 5 - Software entropy over time

Figure 5 shows the impact of neglected refactoring: at some point, the cost of enhancements becomes prohibitive because the software is so shaky. At this point, monumental redesign (or replacement) becomes the only option, and these are usually high-risk, or at least high-cost, projects. Figure 5 also shows that while in the 1980s software decay might have taken a decade, the rate of change today hastens the decay. For example, many client-server applications hurriedly built in the early 1990s are now more costly to maintain than mainframe legacy applications built in the 1980s.

Data Refactoring: Comments by Ken Orr

Editor's Note: As I mentioned above, one thing I like about XP and refactoring proponents is that they are clear about the boundary conditions for which they consider their ideas applicable. For example, Fowler has an entire chapter titled "Problems with Refactoring." Database refactoring tops Fowler's list. Fowler's target, as stated in the subtitle to his book, is to improve code. So, for data, I turn to someone who has been thinking about data refactoring for a long time (although not using that specific term). The following section on data refactoring was written by Ken Orr.

When Jim asked me to comment on refactoring, I had to ask him what refactoring meant. It seemed to me to come down to a couple of very simple ideas:

Do what you know how to do.

Do it quickly.

When a change occurs, go back to redesign.

Go to 1.

Over the years, Jim and I have worked together on a variety of systems methodologies, all of which were consistent with the refactoring philosophy. Back in the 1970s, we created a methodology built on data structures. The idea was that if you knew what people wanted, you could work backward and design a database that would give you just the data that you needed, and from there you could determine just what inputs you needed to update the database so that you could produce the output required.

Creating systems by working backward from outputs to database to inputs proved to be a very effective and efficient means of developing systems. This methodology was developed at about the same time that relational databases were coming into vogue, and we could show that our approach would always create a well-behaved, normalized database. More than that, however, was the idea that approaching systems this way created minimal systems. In fact, one of our customers actually used this methodology to rebuild a system that was already in place. The customer started with the outputs and worked backward to design a minimal database with minimal input requirements.
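A toy sketch of the working-backward idea, with invented requirements: the only stored data is what the required report needs, and the only input is whatever keeps that data current.

// Hypothetical example: the required output is a report of total sales per customer,
// so the minimal "database" holds only customer names and accumulated sale amounts.
import java.util.LinkedHashMap;
import java.util.Map;

public class MinimalSystemSketch {
    // Minimal stored data, derived backward from the output requirement.
    private final Map<String, Double> salesByCustomer = new LinkedHashMap<>();

    // Minimal input: exactly what is needed to keep the stored data up to date.
    void recordSale(String customer, double amount) {
        salesByCustomer.merge(customer, amount, Double::sum);
    }

    // The required output, produced directly from the minimal data.
    void printSalesReport() {
        salesByCustomer.forEach((customer, total) ->
                System.out.println(customer + ": " + total));
    }

    public static void main(String[] args) {
        MinimalSystemSketch system = new MinimalSystemSketch();
        system.recordSale("Acme", 120.0);
        system.recordSale("Globex", 80.0);
        system.recordSale("Acme", 30.0);
        system.printSalesReport();
    }
}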

The new system had only about one-third the data elements of the system it was replacing. This was a major breakthrough. These developers came to understand that creating minimal systems had enormous advantages: they were much smaller and therefore much faster to implement, and they were easier to adapt. Still, building minimal systems goes against the grain of many analysts and programmers, who pride themselves on thinking ahead and anticipating future needs, no matter how remote. I think this attitude stems from the difficulty that programmers have had with maintenance. Maintaining large systems has been so difficult and fraught with problems that many analysts and programmers would rather spend enormous effort at the front end of the systems development cycle, so they do not have to maintain the system ever again. But as history shows, this approach of guessing about the future never works out. No matter how clever we are in thinking ahead, some new, unanticipated requirement comes up to bite us. (How many people included Internet-based e-business as one of their top requirements in systems they were building 10 years ago?)

Ultimately, one of the reasons that maintenance is so difficult revolves around the problem of changing the database design. In most developers' eyes, once you design a database and start to program against it, it is almost impossible to change that database design. In a way, the database design is something like the foundation of the system: once you have poured concrete for the foundation, there is almost no way you can go back and change it. As it turns out, major changes to databases in large systems happen very infrequently, only when they are unavoidable. People simply do not think about redesigning a database as a normal part of systems maintenance, and, as a consequence, major changes are often unbelievably difficult.

Enter Data Refactoring

Jim and I had never been persuaded by the argument that the database design could never be changed once installed. We had the idea that if you wanted to have a minimal system, then it was necessary to take changes or new requirements to the system and repeat the basic system cycle over again, reintegrating these new requirements with the original requirements to create a new system. You could say that what we were doing was data refactoring, although we never called it that.

The advantages of this approach turned out to be significant. For one thing, there was no major difference between development of a new system and the maintenance or major modification of an existing one. This meant that training and project management could be simplified considerably. It also meant that our systems took less time to change, since we "built in" changes rather than "adding them on" to the existing system. Over a period of years, we built a methodology (Data Structured Systems Development, or Warnier/Orr) and trained thousands of systems analysts and programmers. The process that we developed was largely manual, although we thought that if we built a detailed-enough methodology, it should be possible to automate large pieces of that methodology in CASE tools.

Automating Data Refactoring

To make the story short, a group of systems developers in South America finally accomplished the automation of our data refactoring approach in the late 1980s. A company led by Breogán Gonda and Nicolás Jodal created a tool called GeneXus1 that accomplished what we had conceived in the 1970s. They created an approach in which you could enter data structures for input screens; with those data structures, GeneXus automatically designed a normalized database and generated the code to navigate, update, and report against that database.

But that was the easy part. They designed their tool in such a way that when requirements changed or users came up with something new or different, they could restate their requirements, rerun (recompile), and GeneXus would redesign the database, convert the previous database automatically to the new design, and then regenerate just those programs that were affected by the changes in the database design. They created a closed-loop refactoring cycle based on data requirements.
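This is not a description of GeneXus internals, which the article does not give; the toy sketch below only illustrates the closed-loop idea: the declared data structure is restated, diffed against the previous version, and the schema migration is generated rather than written by hand.

// Hypothetical sketch of one step in a closed-loop data refactoring cycle:
// diff the old and new column declarations for a table and emit the migration.
import java.util.LinkedHashMap;
import java.util.Map;

public class DataRefactoringLoop {
    // Column name -> SQL type for a single table declaration.
    static Map<String, String> declaredColumns(String... pairs) {
        Map<String, String> cols = new LinkedHashMap<>();
        for (int i = 0; i < pairs.length; i += 2) cols.put(pairs[i], pairs[i + 1]);
        return cols;
    }

    // Generate ALTER TABLE statements from the difference between two declarations.
    static void regenerate(String table, Map<String, String> oldCols, Map<String, String> newCols) {
        newCols.forEach((name, type) -> {
            if (!oldCols.containsKey(name))
                System.out.println("ALTER TABLE " + table + " ADD COLUMN " + name + " " + type + ";");
        });
        oldCols.keySet().forEach(name -> {
            if (!newCols.containsKey(name))
                System.out.println("ALTER TABLE " + table + " DROP COLUMN " + name + ";");
        });
    }

    public static void main(String[] args) {
        Map<String, String> v1 = declaredColumns("id", "INTEGER", "name", "VARCHAR(40)");
        Map<String, String> v2 = declaredColumns("id", "INTEGER", "name", "VARCHAR(40)", "email", "VARCHAR(80)");
        regenerate("customer", v1, v2);   // prints the ADD COLUMN statement for email
    }
}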

GeneXus showed us what was really possible using a refactoring framework. For the first time in my experience, developers were freed from having to worry about future requirements. It allowed them to define just what they knew and then rapidly build a system that did just what they had defined. Then, when (not if) the requirements changed, they could simply reenter those changes, recompile the system, and they had a new, completely integrated, minimal system that incorporated the new requirements.

What does all this mean?

Refactoring is becoming something of a buzzword. And like all buzzwords, there is some good news and some bad news. The good news is that, when implemented correctly, refactoring makes it possible for us to build very robust systems very rapidly. The bad news is that we have to rethink how we go about developing systems. Many of our most cherished project management and development strategies need to be rethought. We have to become very conscious of interactive, incremental design. We have to be much more willing to prototype our way to success and to use tools that will do complex parts of the systems development process (database design and code generation) for us.

In the 1980s, CASE was a technology that was somehow going to revolutionize programming. In the 1990s, objects and OO development were going to do the same. Neither of these technologies lived up to their early expectations. But today, tools like GeneXus really do many of the things that the system gurus of the 1980s anticipated. It is possible, currently, to take a set of requirements, automatically design a database from those requirements, generate an operational database from among the number of commercially available relational databases (Oracle, DB2, Informix, MS SQL Server, and Access), and generate code (prototype and production) that will navigate, update, and report against those databases in a variety of different languages (COBOL, RPG, C, C++, and Java).

This new approach to systems development allows us to spend much more time with users, exploring their requirements and giving them user interface choices that were never possible when we were building things at arm's length. But not everybody appreciates this new world. For one thing, it takes a great deal of the mystery out of the process. For another, it puts much more stress on rapid development.

When people tell you that building simple, minimal systems is out of date in this Internet age, tell them that the Internet is all about speed and service. Tell them that refactoring is not just the best way to build the kind of systems that we need for the 21st century; it is the only way.

Note 1: Gonda and Jodal created a company called ARTech to market the GeneXus product.


Crystal Light Methods: Comments by Alistair Cockburn

Editor's note: In the early 1990s, Alistair Cockburn was hired by the IBM Consulting Group to construct and document a methodology for OO development. IBM had no preferences as to what the answer might look like, just that it work. Cockburn's approach to the assignment was to interview as many project team members as possible, writing down whatever the teams said was important to their success (or failure). The results were surprising. The remainder of this section was written by Cockburn and is based on his "in-process" book on minimal methodology.

In the IBM study, team after successful team "apologized" for not following a formal process, for not using a high-tech CASE tool, for "merely" sitting close to each other and discussing as they went. Meanwhile, a number of failing teams puzzled over why they failed despite using a formal process - maybe they had not followed it well enough? I finally started encountering teams who asserted that they succeeded exactly because they did not get caught up in fancy processes and deliverables, but instead sat close together so they could talk easily and delivered tested software frequently.

These results have been consistent, from 1991 to 1999, from Hong Kong to the Americas, Norway, and South Africa, in COBOL, Smalltalk, Java, Visual Basic, Sapiens, and Synon. The shortest statement of the results is:

To the extent you can replace written documentation with face-to-face interactions, you can reduce reliance on written work products and improve the likelihood of delivering the system.

The more frequently you can deliver running, tested slices of the system, the more you can reduce reliance on written "promissory" notes and improve the likelihood of delivering the system.

People are communicating beings. Even introverted programmers do better with informal, face-to-face communication than with paper documents. From a cost and time perspective, writing takes longer and is less communicative than discussing at the whiteboard.

Written, reviewed requirements and design documents are "promises" for what will be built, serving as timed progress markers. There are times when creating them is good. However, a more accurate timed progress marker is running, tested code. It is more accurate because it is not a timed promise, it is a timed accomplishment.

Recently, a bank's IT group decided to take the above results at face value. They began a small project by simply putting three people into the same room and more or less leaving them alone. Surprisingly (to them), the team delivered the system in a fine, timely manner. The bank management team was a bit puzzled. Surely it can't be this simple?

It is not quite so simple. Another result of all those project interviews was that different projects have different needs. Terribly obvious, except (somehow) to methodologists. Sure, if your project only needs 3 to 6 people, just put them into a room together. But if you have 45 or 100 people, that will not work. If you have to pass Food and Drug Administration process audits, you can't get away with just this. If you are going to shoot me to Mars on a rocket, I'll ask you not to try it. We must remember factors such as team size and the demands on the project, such as:

As the number of people involved grows, so does the need to coordinate communications.

As the potential for damage increases, the need for public scrutiny increases, and the tolerance for personal stylistic variations decreases.

Some projects depend on time-to-market and can tolerate defects (Web browsers being an example); other projects aim for traceability or legal liability protection.

The result of collecting those factors is shown in Figure 6. The figure shows three factors that influence the selection of methodology: communications load (as given by staff size), system criticality, and project priorities.

Figure 6 - The family of Crystal methods

Locate the segment of the X axis for the staff size (typically just the development team). For a distributed development project, move right one box to account for the loss of face-to-face communications.

On the Y axis, identify the damage effect of the system: loss of comfort, loss of "discretionary" monies, loss of "essential" monies (e.g., going bankrupt), or loss of life.

The different planes behind the top layer reflect the different possible project priorities, whether it is time to market at all costs (the first layer), productivity and tolerance (the hidden second layer), or legal liability (the hidden third layer). The box in the grid indicates the class of projects (for example, C6) with similar communications load and safety needs and can be used to select a methodology.

The grid characterizes projects fairly objectively, useful for choosing a methodology. I have used it myself to change methodologies on a project as it shifted in size and complexity. There are, of course, many other factors, but these three determine methodology selection quite well.

Suppose it is time to choose a methodology for the project. To benefit from the project interviews mentioned earlier, create the lightest methodology you can even imagine working for the cell in the grid, one in which person-to-person communication is enhanced as much as possible, and running tested code is the basic timing marker. The result is a light, habitable (meaning rather pleasant, as opposed to oppressive), effective methodology. Assign this methodology to C6 on the grid.

Repeating this for all the boxes produces a family of lightweight methods, related by their reliance on people, communication, and frequent delivery of running code. I call this family the Crystal Light family of methodologies. The family is segmented into vertical stripes by color (not shown in figure): the methodology for 2-6 person projects is Crystal Clear, for 6-20 person projects is Crystal Yellow, for 20-40 person projects is Crystal Orange, then Red, Magenta, Blue, and so on.

Shifts in the vertical axis can be thought of as "hardening" of the methodology. A life-critical 2-6-person project would use "hardened" Crystal Clear, and so on. What surprises me is that the project interviews are showing rather little difference in the hardness requirement, up to life-critical projects.
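As a small sketch of reading the grid programmatically, the following uses only the staff-size bands named above; how bands beyond Orange are bounded, and the simple handling of the "move right one box" rule for distributed teams, are assumptions for illustration.

// Hypothetical helper: pick a Crystal color band from staff size, moving one band
// to the right when the team is distributed (lost face-to-face communication).
public class CrystalSelector {
    private static final String[] BANDS = {"Crystal Clear", "Crystal Yellow", "Crystal Orange", "Crystal Red or heavier"};
    private static final int[] UPPER = {6, 20, 40, Integer.MAX_VALUE};

    static String methodologyFor(int staffSize, boolean distributedTeam) {
        int band = 0;
        while (staffSize > UPPER[band]) band++;
        if (distributedTeam && band < BANDS.length - 1) band++;   // "move right one box"
        return BANDS[band];
    }

    public static void main(String[] args) {
        System.out.println(methodologyFor(4, false));   // Crystal Clear
        System.out.println(methodologyFor(25, false));  // Crystal Orange
        System.out.println(methodologyFor(15, true));   // distributed, so Crystal Orange
    }
}

Criticality (the Y axis) is not modeled here; in the article's terms it hardens the chosen methodology rather than changing the color band.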

Crystal Clear is documented in a forthcoming book, currently in draft form on the Web. Crystal Orange is outlined in the methodology chapter of Surviving Object-Oriented Projects (see Editor's note below).

Having worked with the Crystal Light methods for several years now, I have found a few more surprises.

