The Demand for Software Quality
A Conversation with Bertrand Meyer, Part I
by Bill Venners
October 27, 2003

Summary
Bertrand Meyer talks with Bill Venners about the increasing importance of software quality, the commercial forces on quality, and the challenges of complexity.
Bertrand Meyer is a software pioneer whose activities have spanned both the academic and business worlds. He is currently the Chair of Software Engineering at ETH, the Swiss Institute of Technology. He is the author of numerous papers and many books, including the classic Object-Oriented Software Construction (Prentice Hall, 1994, 2000). In 1985, he founded Interactive Software Engineering, Inc., now called Eiffel Software, Inc., a company which offers Eiffel-based software tools, training, and consulting.
On September 28, 2003, Bill Venners conducted a phone interview with Bertrand Meyer. In this interview, which will be published in multiple installments on Artima.com, Meyer gives insights into many software-related topics, including quality, complexity, design by contract, and test-driven development. In this initial installment, Meyer discusses the increasing importance of software quality, the commercial forces on quality, and the challenges of complexity.
The Importance of Software Quality
Bill Venners: In a 2001 interview with InformIT, you said, "The going has been so good that the software profession has been able to treat quality as one issue among many. Increasingly, it will become the dominant issue." Why?
Bertrand Meyer: As the use of computers pervades more and more of what society does, the effects of non-quality software just become unacceptable. Software is becoming more ambitious, and we rely on it more and more. Problems that could be dismissed quite easily before are now coming to the forefront.
There is a very revealing quote by Alan Perlis in his preface to the MIT book on Scheme, Abelson and Sussman's Structure and Interpretation of Computer Programs. He wrote:

"I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customer got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house." —Alan Perlis
That is typical of the kind of attitude that says, "Sure, we can do whatever we like. If there's a problem we'll fix it." But that's simply not true anymore. People depend on software far too fundamentally to accept this kind of attitude. In a way we had it even easier during the dot-com boom years, between 1996 and 2000, but this is not 1998 anymore. The kind of free ride that some people were getting in past years simply does not exist anymore.
The Harvard Business Review published an article in May 2003, "IT Doesn't Matter" by Nicholas Carr, that stated that IT has not delivered on its promises. It is a quite telling sign of how society at large is expecting much more seriousness and is holding us to our promises much more than used to be the case. Even though it may still seem like we do have a free ride, in fact that era is coming to a close. People are watching much more carefully what we're doing and whether they're getting returns on their investment, and at the core of those returns is quality.
Commercial Forces on Software Quality
Bill Venners: In your paper, "The Grand Challenge of Trusted Components," you write, "There is a quality incentive, but it only leads to the acceptability point: the stage at which remaining deficiencies do not endanger [the product's] usefulness to the market. Beyond that point, most managers consider that further quality-enhancing efforts yield a quickly diminishing return on investment." How do commercial pressures affect software quality?
Bertrand Meyer: Commercial pressures affect software quality partly positively and partly negatively. It is almost tempting to use as an analogy the Laffer Curve, which was popular for a while in so-called Reaganomics. I'm not an economist, and I hear that the theory has been by now discredited, so I really don't want to imply that the Laffer Curve is fundamentally true in economics. Nevertheless, the Laffer Curve is the idea that if you tax people at zero percent, the state suffers because it doesn't get any revenue. If you tax people at 100%, it's in the end no better, because if people are not making any money, they will stop working, and the state will also not get any revenue. It's a rather simplistic argument. Although it is pretty clear the Laffer Curve has an element of truth, I'm not sure how precise or accurate it is in economics. But as an analogy, it describes well the commercial pressures on software quality.
If you produce a software system that has terrible quality, you lose because no one will want to buy it. If on the other hand you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, then it's going to take so long to complete and it will be so expensive to produce that you'll be out of business anyway. Either you missed the market window, or you simply exhausted all your resources. So people in industry try to get to that magical middle ground where the product is good enough not to be rejected right away, such as during evaluation, but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete.
The Challenge of Complexity
Bill Venners: You said in your book, Object-Oriented Software Construction, "The single biggest enemy of reliability and perhaps of software quality in general is complexity." Could you talk a bit about that?
Bertrand Meyer: I think we build in software some of the most complex artifacts that have ever been envisioned by humankind, and in some cases they just overwhelm us. The only way we can build really big and satisfactory systems is to put a hold on complexity, to maintain a grasp on complexity. Something like Windows XP, which is 45 million lines of code or so, is really beyond any single person's ability to comprehend or even imagine. The only way to keep on top of things, the only way to have any hope for a modicum of reliability, is to get rid of unnecessary complexity and tame the remaining complexity through all possible means.
Taming complexity is really fundamental in the Eiffel approach. Eiffel is there really to help people build complex, difficult things. You can certainly build easy or moderately difficult systems using Eiffel better than using other approaches, but where Eiffel really starts to shine is when you have a problem that is more complex than you would like and you have to find some way of taming its complexity. This is where, for example, having some relatively strict rules of object modularity and information hiding is absolutely fundamental. The kinds of things that you find in just about every other approach to circumvent information hiding do not exist in Eiffel. Such strict rules sometimes irritate programmers at first, because they want to do things and they feel they can't, or they have to write a little more code to achieve the result. But the strictness is really a major guard against the catastrophes that start to come up when you're scaling up your design.
For example, in just about every recent object-oriented language, you have the ability, with some restrictions, of directly assigning to a field of an object: x.a = 1, where x is an object and a is a field. Everyone who has been exposed to the basics of modern methodology and object technology understands why this is wrong. And then almost everyone says, "Yes, but in many cases I don't care. I know exactly what I'm doing. They are my objects and my classes. I control all the accesses to them, so don't force me to encapsulate the modification of field a." And on the surface, people are correct. In the short term, on a small scale, it's true. Who cares?
But direct assignment is a typical kind of little problem that takes on a completely new dimension as you start having tens of thousands, hundreds of thousands, or millions of lines of code; thousands or tens of thousands of classes; many people working on the project; the project undergoing many changes, many revisions, and ports to different platforms. This kind of thing, direct assignment of object fields, completely messes up the architecture. So it's a small problem that becomes a huge one.
The problem is small in the sense that fixing it is very easy at the source. You just prohibit, as in Eiffel, any direct access to fields, and require that these things be encapsulated in simple procedures that perform the job, procedures which, of course, may then have contracts. So it's really a problem that is quite easy to kill in the bud. But if you don't kill it in the bud, then it grows to a proportion where it can kill you.
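The idea Meyer describes can be sketched outside Eiffel as well. Below is a minimal Python illustration (the Account class, its field, and the deposit procedure are invented for this example, not taken from the interview): the field is hidden behind a read-only property, and the only way to modify it is through a procedure that checks a simple precondition and postcondition, in the spirit of design by contract.

```python
class Account:
    """Illustrative class: the balance field is hidden, and modification
    must go through a procedure that enforces a simple contract."""

    def __init__(self) -> None:
        self._balance = 0  # by convention, not touched from outside

    @property
    def balance(self) -> int:
        """Read access is allowed; direct writes are not exposed."""
        return self._balance

    def deposit(self, amount: int) -> None:
        """Encapsulated modification with a precondition and postcondition."""
        assert amount > 0, "precondition: amount must be positive"
        old = self._balance
        self._balance += amount
        assert self._balance == old + amount, "postcondition violated"


a = Account()
a.deposit(100)
print(a.balance)  # → 100
```

Python only enforces this by convention (assertions and a leading underscore) rather than by language rule as Eiffel does, but the architectural point is the same: every change to the field passes through one contracted procedure.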
Another example is overloading: giving the same name to different operations within the same class. I know this is controversial. People have been brainwashed so much that overloading is a good thing that it is kind of dangerous to go back to the basics and say that it's not a good thing. Again, every recent language has overloading. Their libraries tend to make an orgy of overloading, giving the same name to dozens of different operations. This kind of apparent short-term convenience buys a lot of long-term complexity, because you have to find out in each particular case what exactly is the signature of every variant of an operation. The mechanisms of dynamic binding as they exist in object technology, and of course in Eiffel, are much more effective than overloading to provide the kind of flexibility that people really want in the end.
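The contrast between overloading and dynamic binding can be sketched in a few lines of Python (the Shape, Square, and Circle classes are invented for illustration): instead of several overloaded variants of one operation distinguished by signature, each class provides its own implementation under a single name and signature, and dynamic binding selects the right one at run time from the object's type.

```python
class Shape:
    def area(self) -> float:
        raise NotImplementedError


class Square(Shape):
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:  # dynamically bound: one name, one signature
        return self.side * self.side


class Circle(Shape):
    def __init__(self, radius: float) -> None:
        self.radius = radius

    def area(self) -> float:
        return 3.14159 * self.radius ** 2


shapes: list[Shape] = [Square(2.0), Circle(1.0)]
for s in shapes:
    # The call site stays the same; the variant is chosen by the object.
    print(s.area())  # prints 4.0, then 3.14159
```

The client never needs to know which variant applies; that is the flexibility Meyer argues dynamic binding delivers more safely than overloading.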
So these are examples of cases in which being a little more careful in the language design can make a large contribution to the goal of taming complexity. It's also sometimes why people have not believed our claims about Eiffel. The use of Eiffel is quite simple, but the examples that we publish are simple not necessarily because the underlying problems are simple, but because the solutions are. Eiffel is really a tool for removing artificial complexity and finding the essential simplicity that often lies below it. What we realize now is that sometimes people just do not believe it. They do not believe in simple solutions. They think we must be either hiding something or that the language and methods do not actually solve the real practical problems of software development, because they know there has to be more complexity there. As the horrible cliche goes, "If it looks too good to be true, it probably is," which is certainly the stupidest utterance ever proffered by humankind. This is the kind of cliche we hear from many people, and it's just wrong in the case of Eiffel.
If you have the right tools for approaching problems, then you can get rid of unnecessary complexity and find the essential simplicity behind it.
This is the key issue that anyone building a significant system is facing day in and day out: how to organize complexity, both by removing unnecessary, artificial, self-imposed complexity, and by organizing what remains of inevitable complexity. This is where the concepts of inheritance, contracts, genericity, object-oriented development in general, and Eiffel in particular, can play a role.
Bill Venners: It sounds to me that you're talking about two things: getting rid of unnecessary complexity and dealing with inherent complexity. I can see that tools, such as object-oriented techniques and languages, can help us deal with inherent complexity. But how can tools help us get rid of self-imposed complexity? What did you mean by "getting at the simplicity behind the complexity?"
Bertrand Meyer: Look at modern operating systems. People bash the complexity of Windows, for example, but I'm not sure the competition is that much better. There's no need for any bashing of any particular vendor, but it's clear that some of these systems are just too messy. A clean, fresh look at some of the issues would result in much better architecture. On the other hand, it's also true that an operating system, be it Windows XP, RedHat Linux, or Solaris, has to deal with Unicode, with providing a user interface in 100 different languages. Especially in the case of Windows, it has to deal with hundreds of thousands of different devices from lots of different manufacturers. This is not the kind of self-inflicted complexity that academics like to criticize. In the real world, we have to deal with requirements that are imposed on us from the outside. So there are two kinds of complexity: inherent complexity, which we have to find ways to deal with through organization, through information hiding, through modularization; and artificial complexity, which we should just get rid of by simplifying the problem.