Billy Hollis
Elysian Consulting
March 24, 2004
Summary: While Web applications have received the bulk of the interest over the last few years, improvements in the client mean that it's time to re-investigate client-side development. In this new column, .NET in the Real World, authored by Microsoft Regional Directors, Billy Hollis looks at smart clients and how you can use them to build applications today. (6 printed pages)
Once upon a time, dinosaurs ruled the earth. They were called mainframes. If your hair is not as gray as mine, you might not remember that world, so let me tell you a bit about how it worked. In that world, the big mainframe system did all the processing. It assembled a page of information and sent it out to a device called a terminal. The most common type of terminal was called a 3270.
These terminals had no processing power to speak of. All a user could do was fill in fields and navigate around the screen.
When the user was done with a page of data, he or she pressed a button on the terminal and sent the page's information back to the mainframe. Then the mainframe took in the page's information, processed it, and got ready to send out a new page.
Then PCs came along, and then they got hooked to networks, and finally somebody said, "Hey, we can do better than that mainframe stuff. We can use our PCs' processing power to make the user interface more intelligent and remember lots of things about how the user needs to work. That would help the users do their tasks, and be a lot more productive."
Thus was a new type of system born. The best of these systems did a great job of distributing the work among the users' PCs and the big central machine, which was rechristened a server. These client-server systems allowed the PCs and the large central systems to each do what they did best. Client-server displaced many of the mainframes, and people liked it much better. The users were productive and happy.

Then along came the Internet. At first it was just used to browse static, hyperlinked pages. But then some very bright young fellows said, "Hey, look what we can do! We can have an application run on this Internet. We'll have a large central Web server that will do all the processing. And it will send out pages to users, who will view the pages in a browser, which will let them navigate around the page and enter some data. Then the users will press a button and send the page's information back to the big system, which will process it and send back a new page. Won't that be great!"
The browser did not have much intelligence built into it. It was supposed to be a "thin client," meaning the PC was not supposed to do anything except run this browser. Thus, PCs with more processing power than the mainframes of twenty years earlier were reduced to being pretty analogs of 3270 terminals.
Why did this happen? Why did we take this giant leap backwards? As with so many questions of the "why is it done that way" variety, the answer comes down to cost. With browser-based systems, we could for the first time allow users all over the world to access our systems, and we could do it without installing anything on their machines.
The alternative of a "rich client" or "smart client" required us to install some special software out on the user's system. This used to be easy, back in the days of DOS. We just copied it out there and it ran. But then COM came along (about the same time as the browser, in fact) and we learned the term DLL hell. So we gave up on getting our software to run on the user's machine, and just used the browser, because we could not afford to do anything else. After all, something beats nothing. An inferior alternative we can afford beats a superior alternative we cannot afford.

But let's shift an underlying assumption. Let's assume deployment of software to a user's system becomes cheaper, perhaps very close to the zero cost of using a browser. What happens then?
Billy's First Law of Software Development states: Users outnumber programmers and tech support reps. Users are the reason our systems exist. If they learn they can get application software interfaces that are intelligent and help them do their jobs better, how long do you think they are going to put up with clunky interfaces? How long will they be happy with pages built with a protocol that was not even designed for application interfaces, but for browsing hyperlinked pages? I believe that the answer is "not very long."
Consider the possibilities. If a system has one thousand operators, and smart client software makes them just five percent more effective, then the cost savings can be huge. Assuming a loaded employee cost of $50,000, a five-percent productivity enhancement for one thousand operators is fifty full-time equivalents, which comes out to $2.5 million. And that does not even take into account lower training costs, lower error rates, less user frustration and stress, and other potential by-products of smart client software. (By the way, that five-percent assumption above is quite conservative. I've seen productivity improvements in the ten- to twenty-percent range, depending on the task.)

Your Choice
If this analysis is correct, then how does that affect you decision makers in the world of software development and information technology? I think it leads to an interesting choice: You can be reactive or proactive about adopting smart client systems, and you can pay the price or reap the rewards accordingly.
On one hand, you can lay back and keep doing things in 1990s fashion, depending heavily on browser-based software. Depending on what industry you are in and how fast your organization moves on new technology, that strategy might not get you in trouble. But in any fast-moving industry, in which technology can confer a competitive advantage, such a strategy carries significant, career-affecting risk. Someday, a person who really calls the shots on technology in your organization may come to you and say, "Hey, this browser-based stuff is junk. The users hate it, and it slows them down. I've seen other systems that have smart user interfaces that let them do their jobs better and faster, and save their companies tons of money. I want some of that. Get it for me, and we need it yesterday." Or, worse, the conversation might start with, "We've been reviewing the management of your department ..."

If this doesn't sound very attractive, you can adopt a proactive strategy. You can start by deciding whether that new application you need to write for 500 users should have an old-fashioned browser interface, or a hot, sexy, new intelligent user interface. You can be the hero that creates software that improves the productivity of hundreds or thousands of users, maybe saving millions of dollars. You can have your users go, "Wow! I didn't know you could have Internet applications this nice! You're a genius!"
Getting Started
If you like the "genius" choice, then you need to learn the ins and outs of smart client development. There are four major pieces of technology needed to make a smart client architecture work:
- A forms package to create the intelligent client software.
- Data transport technology to communicate data between the smart client and the central server, either on the local network or across the Internet.
- A means to deploy smart client software to the client machines, in many cases across the Internet.
- Security technology to protect the system from malicious access.

Right now, the Microsoft® .NET Framework is the clear choice of platform to attain all of these capabilities. It is not the only choice as a platform for distributed smart client systems, but it obviously has a large lead over alternatives such as Java. Let's look at the capabilities of .NET in each of the areas above to see why.
Forms on the Client
The .NET Framework includes one of the most advanced forms engines available. It's called Microsoft® Windows® Forms. It's completely object oriented, has a wide variety of visual controls (both from Microsoft and third parties) available for it, and the Microsoft® Visual Studio® development environment includes a great visual designer to quickly create Windows Forms interfaces.
Using the event-driven paradigm of Windows Forms, plus the ability to store as much state information locally as necessary, user interfaces can be much more responsive. Heads-down data entry tasks can be done with interfaces specially designed for that purpose. Sophisticated data validation can help users get the data right the first time, instead of needing to reload pages just to see if the data is right. (Yes, yes, I know. Browsers can do some of that too. But it takes so much programming to make them do it right that the effort is seldom worth it.)
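To make the contrast concrete, here is a minimal sketch of the kind of immediate, client-side validation Windows Forms makes routine. The form, the quantity field, and the 1-to-999 rule are hypothetical examples, not code from any real system.

using System;
using System.ComponentModel;
using System.Windows.Forms;

// A minimal sketch of immediate, client-side validation in Windows Forms.
// The form, the quantity field, and the 1-999 rule are hypothetical.
public class OrderEntryForm : Form
{
    private TextBox quantityBox = new TextBox();
    private ErrorProvider errors = new ErrorProvider();

    public OrderEntryForm()
    {
        Text = "Order Entry";
        quantityBox.Left = 10;
        quantityBox.Top = 10;
        // Validating fires as soon as the user leaves the field, so problems
        // are caught immediately instead of after a round trip to a server.
        quantityBox.Validating += new CancelEventHandler(QuantityValidating);
        Controls.Add(quantityBox);
    }

    private void QuantityValidating(object sender, CancelEventArgs e)
    {
        try
        {
            int quantity = int.Parse(quantityBox.Text);
            if (quantity < 1 || quantity > 999)
            {
                throw new ArgumentOutOfRangeException();
            }
            errors.SetError(quantityBox, "");
        }
        catch (Exception)
        {
            // Show an inline error icon next to the offending control and
            // keep focus there until the value is fixed.
            errors.SetError(quantityBox, "Quantity must be a whole number between 1 and 999.");
            e.Cancel = true;
        }
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new OrderEntryForm());
    }
}

Getting the same instant feedback in a browser of that era would take a fair amount of hand-written script on every page.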
Windows Forms interfaces can include tutorial windows, tooltips, translucent help forms, dynamic controls that adapt their behavior to the user, and much more. And they are still typically faster to develop than equivalent Web pages.

Microsoft is investing heavily in this area of technology. The next generation of UI technology has already been announced. The code name for the project is "Avalon," and it's part of the next release of Microsoft® Windows®, code-named "Longhorn," currently under development. Avalon adds even more capabilities for building rich, intelligent user interfaces.
To take advantage of these technologies, developers will need to learn more about user interface design principles. If they've only done browser-based programming, they may not realize how much there is to learn about UI design. They cannot make your systems include responsive, intelligent interfaces until they know how to write responsive, intelligent interfaces.
There is one limitation to note. Windows Forms is a part of the .NET Framework, and is therefore available only on Windows platforms. However, if your organization has standardized on the Microsoft® Windows® operating system for client workstations, as many have, this limitation should not be a concern.
Applications Need Data
The vast majority of corporate and commercial applications manipulate data, which is usually stored on some central server. For a smart client application to be viable, it must be practical to get the data from the server to the client, allow the client to make changes to the data, and then send the data back again.
Microsoft's newest data access technology, Microsoft® ADO.NET, is designed for just such a scenario. Unlike previous data access models, it was created for distributed use. XML-based containers of data can be created on the server and transported to the client. The client can use the container to work with the data without maintaining a connection to the server, making changes and additions, and then send the container back to the server.

There are two primary technologies that can be used to do the actual transport of data between client and server machines. One is Web services. The advantages of Web services include ease of implementation and configuration, and cross-compatibility with many kinds of servers. Web services can even talk to non-Microsoft servers, though you'll have to do a bit more work to create an appropriate data container.
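As a rough illustration of that pattern, here is a minimal ASMX-style Web service sketch that hands a disconnected ADO.NET DataSet to a smart client and accepts the client's changes back. The service name, connection string, and Customers table are hypothetical examples, not a prescribed design.

using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

// A minimal sketch (not production code) of a Web service that ships a
// disconnected DataSet to a smart client. The connection string, table,
// and method names are hypothetical.
public class CustomerService : WebService
{
    [WebMethod]
    public DataSet GetCustomers()
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers",
            "server=(local);database=Northwind;integrated security=SSPI");
        DataSet data = new DataSet("CustomerData");
        adapter.Fill(data, "Customers");   // no connection is held after this returns
        return data;                       // serialized as XML on the wire
    }

    [WebMethod]
    public int SaveCustomers(DataSet changes)
    {
        // The client sends back only its changes (DataSet.GetChanges on the
        // client); the adapter applies them against the database.
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers",
            "server=(local);database=Northwind;integrated security=SSPI");
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
        return adapter.Update(changes, "Customers");
    }
}

On the client, the proxy class generated by Visual Studio exposes GetCustomers as an ordinary method call that returns a DataSet the user can work with offline.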
If all the systems involved in your application can run Microsoft .NET, then another alternative is called .NET remoting. This has some performance and security advantages, but is more difficult to configure. Large-scale smart client systems for internal users often depend on remoting, while systems with users outside the organization are more likely to use Web services.
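For comparison, here is a bare-bones sketch of the server side of a .NET remoting setup. The port number, object URI, and CustomerService type are hypothetical stand-ins; a real system would usually move these settings into a configuration file.

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// A hypothetical remotable type; deriving from MarshalByRefObject lets the
// remoting infrastructure proxy calls to it across the network.
public class CustomerService : MarshalByRefObject
{
    public string GetCustomerName(int customerId)
    {
        // Look the customer up in the real data store here.
        return "Customer " + customerId;
    }
}

public class RemotingHost
{
    public static void Main()
    {
        // Listen for client calls on a TCP channel (port 8080 is arbitrary).
        ChannelServices.RegisterChannel(new TcpChannel(8080));

        // Publish CustomerService at tcp://servername:8080/CustomerService.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(CustomerService), "CustomerService",
            WellKnownObjectMode.Singleton);

        Console.WriteLine("Remoting host running. Press Enter to stop.");
        Console.ReadLine();
    }
}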
In either case, it's quite feasible for a smart client application to handle data much more intelligently than a browser-based system could hope to do. For example, if the smart client loses its link to the server, it can maintain a local copy of the data, let the user keep working, and synchronize with the server when the connection is available again.
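One simple way to picture that offline capability: a DataSet can be written to local XML when it arrives and read back later if the server cannot be reached. The cache path and the server call below are hypothetical placeholders.

using System;
using System.Data;
using System.IO;

// A minimal sketch of caching data locally when the server is unreachable.
// The cache path and the GetCustomersFromServer call are hypothetical.
public class OfflineCache
{
    private const string CachePath = @"C:\AppData\customers.xml";

    public static DataSet LoadCustomers()
    {
        try
        {
            // Normally, ask the server (Web service or remoting) for fresh data.
            DataSet data = GetCustomersFromServer();
            // Keep a local copy, including schema, for the next time we're offline.
            data.WriteXml(CachePath, XmlWriteMode.WriteSchema);
            return data;
        }
        catch (Exception)
        {
            // The link to the server is down; fall back to the cached copy.
            DataSet cached = new DataSet();
            if (File.Exists(CachePath))
            {
                cached.ReadXml(CachePath, XmlReadMode.ReadSchema);
            }
            return cached;
        }
    }

    private static DataSet GetCustomersFromServer()
    {
        // Placeholder for the real Web service or remoting call.
        throw new NotImplementedException();
    }
}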
Low Cost Deployment
Remember that we started this discussion by asking about the effect of making deployment of smart client systems as cheap or nearly as cheap as browser-based systems. Now, it's hard to get much cheaper in deployment costs than a browser-based application, which has essentially zero deployment cost for each additional user. But it's possible to reduce smart client deployment costs enough to be competitive. And remember, with the potential large savings of a smart client through productivity improvements, taking on some additional deployment cost can make good economic sense.

The key technology allowing cheap deployment is the copy-and-run capability of the .NET Framework. There's no need to perform complex registration of system components as in COM. Just copy the components onto the disk and the Framework does the rest. And with side-by-side execution of multiple DLL versions, DLL incompatibilities are banished.
A limited form of automatic Internet deployment is already included in the .NET Framework, and a more advanced version, code-named "ClickOnce," is slated for the next version. In the meantime, I've found it easy to create custom deployment systems that take advantage of copy-and-run to easily and cheaply produce self-updating applications.
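To give a feel for what such a custom updater might look like, here is a rough sketch, not the actual implementation used on any project: the client compares a version number published on a Web server with its local copy and, because no registration step is needed, simply copies any newer files into place. The update URL, version file, and assembly name are hypothetical.

using System;
using System.IO;
using System.Net;

// A rough sketch of a "copy-and-run" self-updater. The update URL, version
// file, and assembly name are hypothetical; a real system would add error
// handling and per-file version checks.
public class SimpleUpdater
{
    private const string UpdateUrl = "http://appserver/updates/";
    private const string MainAssembly = "OrderEntry.exe";

    public static void UpdateIfNewer(string installFolder)
    {
        WebClient web = new WebClient();

        // The server publishes the latest version number in a small text file.
        string serverVersion;
        using (StreamReader reader = new StreamReader(web.OpenRead(UpdateUrl + "version.txt")))
        {
            serverVersion = reader.ReadToEnd().Trim();
        }

        string localVersionFile = Path.Combine(installFolder, "version.txt");
        string localVersion = "";
        if (File.Exists(localVersionFile))
        {
            using (StreamReader reader = new StreamReader(localVersionFile))
            {
                localVersion = reader.ReadToEnd().Trim();
            }
        }

        if (serverVersion != localVersion)
        {
            // No registration step is required; just copy the new file into place.
            web.DownloadFile(UpdateUrl + MainAssembly, Path.Combine(installFolder, MainAssembly));
            using (StreamWriter writer = new StreamWriter(localVersionFile, false))
            {
                writer.Write(serverVersion);
            }
        }
    }
}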
Just as browsers must be installed on the client systems to allow browser-based applications to run, the .NET Framework must be installed on client machines for .NET-based smart clients to run. At present, this does mean taking extra responsibility, because it's not automatically installed for the operating systems (Windows XP, Windows 2000, and Windows 98/Me) used by most client machines. However, installation of the .NET Framework is free for these systems. Over time, as systems turn over, we'll see the .NET Framework become ubiquitous because Microsoft intends to include it with future operating system products. In fact, they've already started by including it in Microsoft® Windows Server™ 2003.

Making It All Secure
We've learned in the past few years just how malicious some people on the Internet can be, and we've responded by building more robust security into our systems. But it's clear that new security capabilities are needed for a distributed smart client system.
Fortunately, the design of the .NET Framework includes robust security principles. Besides eliminating vulnerabilities such as buffer overruns, there is a new form of security called code access security. It awards security privileges to pieces of code, based on information about the code, such as where it came from and who wrote it. This security sits on top of the normal user-based security, allowing another layer of protection.
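As a small illustration of the idea (the folder path and permission choice are just examples), a piece of code can demand the specific permission it needs, and the runtime throws an exception if the code's grant, computed from its origin and signature, does not cover it, regardless of how powerful the logged-on user is.

using System;
using System.Security;
using System.Security.Permissions;

// A small sketch of code access security in the .NET Framework. The folder
// path is hypothetical; the point is that the *code*, not just the user,
// must hold the permission before the operation is allowed.
public class ReportWriter
{
    // Declaratively demand permission to write only under this one folder.
    // If this assembly was granted less (for example, because it was run
    // from an Internet location), the runtime throws SecurityException.
    [FileIOPermission(SecurityAction.Demand, Write = @"C:\Reports\")]
    public void SaveReport(string text)
    {
        using (System.IO.StreamWriter writer = new System.IO.StreamWriter(@"C:\Reports\daily.txt"))
        {
            writer.Write(text);
        }
    }

    public static void Main()
    {
        try
        {
            new ReportWriter().SaveReport("example report");
            Console.WriteLine("Report saved.");
        }
        catch (SecurityException ex)
        {
            Console.WriteLine("This code was not granted enough permission: " + ex.Message);
        }
    }
}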
I would argue that a properly designed smart client system is more secure than a typical browser-based system, because the interface between the client and server systems can be more carefully controlled, and because the executable parts of the system can receive only the privileges they need. But it's true that designing such advanced security means learning some new techniques and concepts.

Your Action Plan
You don't have to rush into this smart client world. Any time in the next two years or so will probably be fine. You don't have to get there first, of course. The users aren't clued in yet, and deployment, while economically reasonable in cost, still requires you to take responsibility for getting the .NET Framework on the client machines.
But these are temporary conditions. There is no doubt in my mind that smart client applications will displace a lot of browser-based applications over the next few years. The reason I have no doubt is that I've already had five clients decide to do it, and they are all thrilled with the results.
That does not mean the shift was painless for them. It required developers to learn new distributed architectures and new technologies. In many cases, they had to get better at object-oriented development and user-interface design.