Summary: This article discusses application design for Microsoft .NET and the changes it requires: it reviews the architectural knowledge gained from building N-tier applications with Microsoft Windows DNA, examines how that knowledge applies when the Microsoft .NET Framework is used instead, and offers recommendations for architecting applications that use XML Web services.
Introduction
Today, N-tier applications have become the norm for building enterprise software. To most people, an N-tier application is one divided into multiple independent logical parts. The most common division is into three parts: presentation, business logic, and data, although other divisions are possible. N-tier applications originally arose to solve problems associated with traditional client/server applications, but with the arrival of the web era this architecture became the mainstream choice for new development projects.
Microsoft Windows DNA technology has been a very successful foundation for N-tier applications. The Microsoft .NET Framework also provides a solid platform for building them. Yet the changes .NET brings should lead architects to reconsider some of the lessons learned about designing N-tier applications in the Windows DNA world. More important, the fundamental support for XML Web services built into the .NET Framework lets developers build new kinds of applications that go beyond the traditional N-tier approach. Learning how best to architect .NET applications requires understanding what has changed in this new world, and how to take full advantage of those changes.
This article looks at these issues. It first reviews the core architectural lessons learned from building N-tier applications with Windows DNA. It then re-examines those lessons as they apply to applications built on the .NET Framework. The final section offers some suggestions for architecting applications that use XML Web services.
Windows DNA environment
Breaking an application into multiple logical parts is useful. Dividing a large piece of software into smaller parts makes it easier to build, reuse, and modify, and can help in adapting to different technologies or different business organizations. At the same time, there are trade-offs to consider: although modularity and reuse are valuable, they can produce applications that are less secure, less manageable, and slower than other approaches. This section reviews some fundamental architectural lessons drawn from the collective experience of building N-tier applications with Windows DNA technology.
Write business logic
Windows DNA applications typically implement their business logic using one or more of the following three options:
• ASP pages
• COM components, perhaps using services provided by COM+
• Stored procedures running in the DBMS
Generally speaking, writing much business logic in ASP pages is not a good idea. Because you must use a simple language such as Microsoft Visual Basic Scripting Edition (VBScript), one that is interpreted on every execution, performance suffers. Code in ASP pages is also hard to maintain, largely because business logic is typically mixed in with the presentation code that creates the user interface.
For these reasons, the recommendation for writing middle-tier business logic is to implement it as COM objects. This approach is a bit more complex than writing a pure ASP application, but a full-featured language can be used and the result is compiled executable code, so it is much faster. The business code is also cleanly separated from the presentation code contained in the ASP pages, which makes the application easier to maintain.
Moving from COM to COM+ changes the architecture further. As many Windows DNA architects have learned, however, you should not use the core services provided by COM+, such as transactions, just-in-time (JIT) activation, role-based security, and thread services, unless you really need them. Using COM+ services, or the similar services offered by other development platforms, when they are not required only makes an application slower and more complex. COM+ is worth using only in situations such as the following:
• The application needs distributed transactions that span different resource managers (for example, Microsoft SQL Server™ and Oracle).
• The application can make effective use of role-based security.
• COM+ thread services can improve the threading behavior of components written in Microsoft Visual Basic 6.0.
• JIT activation can improve performance; this rarely helps browser clients, however, because ASP pages are themselves JIT-activated.
• The configuration features of COM+ applications significantly simplify deployment.
The third way to write business logic is to implement some of it as stored procedures running in the database management system (DBMS). Although a major reason for using stored procedures is to isolate the details of the database schema from the business logic, simplifying code management, keeping code close to the data also optimizes performance. Applications that must remain independent of a particular DBMS (for example, those created by independent software vendors) usually avoid this approach, because it locks the application to one database system. Stored procedures can also be harder to write and debug than COM objects, and this approach reduces opportunities for reuse, since COM objects are generally easier to reuse than stored procedures. Most custom applications, however, remain tied to the DBMS they were originally written for, and the performance benefits of stored procedures are substantial. Accordingly, Windows DNA applications that must perform well typically use stored procedures for some or all of their business logic.
Build a client
Windows DNA supports both native Windows clients written in languages such as Visual Basic and browser clients. Browser clients are limited, especially if both Microsoft Internet Explorer and Netscape browsers must be supported. Applications therefore often provide both a browser client, which offers a limited interface but easy access over the Internet, and a native Windows client with a full-featured interface. Downloadable Microsoft ActiveX® controls can be used to build a more sophisticated browser interface, but only if the browser is guaranteed to be Internet Explorer and users are willing to trust the control's creator.
Manage state in browser applications
An ASP application can use several different mechanisms to maintain information on the server between client requests. One rule in Windows DNA, however, is absolute: if the application load-balances requests across two or more machines, the ASP Session object must not be used to store per-client state. The ASP Session object is locked to a single machine, and so cannot be used in a load-balanced application.
The ASP Session and Application objects have other limitations as well. Using either of them to store an ADO recordset greatly reduces scalability, because doing so limits the application's ability to exploit multiple threads. Storing recordsets in either of these objects is therefore not a good idea.
Distributed communication
In Windows DNA, choosing how components running on different machines communicate is easy: DCOM is the only choice. Architecturally, DCOM is a straightforward extension of COM, but in practice it carries several other implications. These include:
• Because DCOM looks much like COM to the developer, communicating with a remote COM object via DCOM is quite straightforward.
• DCOM can be a very secure protocol when correctly configured. Achieving that configuration is not easy, however, so this aspect of the protocol is hard to use. Still, DCOM can provide good distributed authentication, data integrity, and data confidentiality, especially in a Windows 2000 domain.
• Because DCOM requires opening arbitrary ports, it does not work well with firewalls. For applications that must communicate over the Internet, DCOM is therefore usually not an option.
Access stored data
The data access architectures that can be built with ADO fall into two broad groups: lightweight and heavyweight. A lightweight ADO client keeps its database connection open as briefly as possible and writes to the database through stored procedures. It retrieves data in one of three ways:
• By filling a recordset using a read-only, forward-only cursor;
• Through the output parameters of a stored procedure;
• By using data streams (in more recent versions of ADO).
A heavyweight client, by contrast, keeps its database connection open for a long time. Such applications rely on that open connection, and on the server-side cursors it makes possible, to:
• Let the recordset directly reflect changes made by other users or applications;
• Enable pessimistic locking;
• Minimize the amount of data copied to the ADO client, reducing network traffic. Unlike a lightweight client, a client using server-side cursors can leave query results in the database until the data is actually needed. This approach also retrieves less recordset metadata, keeping more of the work in the database.
Lightweight applications are the most scalable, because they conserve that scarcest of resources, the database connection. Heavyweight applications, in contrast, must hold database connections open for long periods, because server-side cursors require this. Doing so sharply limits scalability, making the heavyweight approach especially unsuitable for Internet server applications. Although heavyweight applications can be simpler to develop with ADO, they are usually not the best choice.
ADO is also not particularly well suited to hierarchical data such as XML documents. The ADO features for this kind of work are complex and not easy to understand, and ADO offers only limited support for accessing the XML features of SQL Server 2000. Windows DNA applications therefore tend to avoid using ADO for hierarchical data.
Pass data to the client
For every N-tier application, moving data efficiently from the middle tier to the client is a key concern. A Windows DNA application can pass an ADO disconnected recordset to a Windows client via DCOM, and this option also works for browser clients when the browser is guaranteed to be Internet Explorer. Sending data to an arbitrary browser is harder. One approach is to explicitly convert the data to XML, and then send the XML, along with any necessary script code, to the browser.
.NET environment
.NET supports traditional N-tier applications, Web services applications, and applications that combine elements of both. This section first looks at how .NET affects N-tier applications, and then examines the major architectural issues in building Web services applications.
Build N-tier applications with .NET
Some of the guidance in the preceding section applies equally to Windows DNA applications and to applications built on the .NET Framework. For example, COM+ services (known in the .NET Framework as Enterprise Services) should still be used only when one or more of the conditions listed earlier is met. Similarly, implementing business logic as stored procedures still yields better performance for many N-tier applications.
At the same time, the .NET Framework brings new technologies and new versions of existing ones, and these enhancements change what the optimal architecture for an N-tier application looks like. This section walks through the categories described earlier and shows how the .NET Framework changes the decisions an architect makes when creating an N-tier application.
Write business logic
Where Windows DNA offered three options for implementing N-tier business logic (ASP pages, COM components, and stored procedures), the .NET Framework provides only two: assemblies and stored procedures. For browser applications, assemblies can be created using Microsoft ASP.NET .aspx pages. Unlike with ASP, writing business logic this way in ASP.NET is often a reasonable choice.
One reason is ASP.NET's code-behind option. In a traditional ASP page, it is hard to mix business code and presentation code in a maintainable way; an .aspx page can use code-behind to keep the two kinds of code separate. Where a Windows DNA application might need both ASP pages and COM objects to remain maintainable, an application built on the .NET Framework can use ASP.NET alone. Also, business logic contained in an .aspx page can be written in any .NET-based language, not just the simple scripting languages supported by traditional ASP pages. And because ASP.NET pages are compiled rather than interpreted, ASP.NET applications can be very fast. An application that needed both ASP pages and COM objects to achieve acceptable performance under Windows DNA might achieve it with ASP.NET alone after moving to .NET. Finally, using the ASP.NET cache to reduce trips to the database for commonly used data can improve performance significantly.
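As a sketch of what the code-behind separation looks like (the page, class, and member names here are invented for illustration, not taken from any real sample), the markup file holds only presentation, while the business logic lives in a compiled class that the page inherits:

```csharp
// Order.aspx -- presentation only. The Inherits attribute points at the
// compiled code-behind class below (all names here are hypothetical):
//
//   <%@ Page Language="C#" Inherits="MyShop.OrderPage" %>
//   <html><body><form runat="server">
//     <asp:Label id="TotalLabel" runat="server" />
//   </form></body></html>

// Order.aspx.cs -- business logic in a full .NET language, compiled once,
// kept entirely separate from the markup above.
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace MyShop
{
    public class OrderPage : Page
    {
        protected Label TotalLabel;   // wired to the control declared in the markup

        private void Page_Load(object sender, EventArgs e)
        {
            TotalLabel.Text = ComputeTotal(3, 19.95m).ToString();
        }

        // Pure business logic: no HTML anywhere near it, and reusable
        // from other pages in the same assembly.
        internal static decimal ComputeTotal(int quantity, decimal unitPrice)
        {
            return quantity * unitPrice;
        }
    }
}
```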
It is worth pointing out, however, that code contained in an .aspx page, even with code-behind, is harder to reuse than code in a standard assembly. For example, accessing code in an .aspx page from a Windows Forms client is problematic.
Build a client
With the .NET Framework, a Windows client is needed less often; frequently a browser client alone will do. One reason is that ASP.NET Web controls make it easier to build and/or buy reusable browser graphical user interface (GUI) elements, and thus to build more capable browser clients. In addition, .NET-based components can be downloaded over the Internet to an Internet Explorer client and run with partial trust, rather than the all-or-nothing trust model that ActiveX controls require, which also helps in building better user interfaces.
Manage state in browser applications
The ASP Session object is of limited use because it is bound to a single machine; the .NET Framework removes this limitation. Unlike its ASP counterpart, the ASP.NET Session object can be shared across two or more machines, so it can be used to maintain state in a load-balanced Web server farm, making it far more useful. Furthermore, because the contents of the Session object can optionally be kept in a SQL Server database, this mechanism can serve applications that must maintain per-client state even across failures.
Another important change affecting the architecture of ASP.NET applications is that a DataSet can be stored in the Session and Application objects without the threading limitations imposed by ASP. In other words, the Windows DNA rule against storing recordsets in these objects does not apply to DataSets in the .NET Framework. This makes storing query results much simpler and more natural.
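A minimal sketch of the pattern this enables (the query helper and configuration values below are placeholders, not from any real application): per-user query results are cached in Session state, and a one-line configuration change moves that state into SQL Server so it survives load balancing and server failures.

```csharp
// In an .aspx page or its code-behind class. RunSearchQuery is an
// assumed helper that returns a filled DataSet.
DataSet results = (DataSet)Session["SearchResults"];
if (results == null)
{
    results = RunSearchQuery();
    Session["SearchResults"] = results;   // legal in ASP.NET, unlike ADO
                                          // recordsets in classic ASP
}

// In web.config, session state can be moved out of process into SQL
// Server (the connection string shown is a placeholder):
//
//   <sessionState mode="SQLServer"
//       sqlConnectionString="data source=SomeServer;user id=...;password=..." />
```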
Distributed communication
Compared to Windows DNA, the .NET Framework offers far more choices for communication between the distributed parts of an application, including:
• .NET Remoting, with its TCP channel and HTTP channel;
• ASP.NET support for XML Web services, which lets methods in an .asmx page be invoked via SOAP;
• DCOM, which is still required for communicating with remote COM objects.
More options mean more decisions for the architect, and more factors to weigh when choosing. Among the architectural issues to understand when building a distributed application on the .NET Framework are these:
• Communicating directly with a remote COM object requires DCOM; .NET Remoting cannot be used. Because DCOM is quite complex, this kind of communication should be avoided where possible. In some cases, though, managed code must work with an existing remote COM object, even though the COM interoperability this requires reduces performance.
• The .NET Remoting TCP channel provides no built-in security. Unlike DCOM, it offers no authentication, data integrity, or data confidentiality services. In exchange, the TCP channel is considerably easier to use than DCOM.
• Where DCOM does not work well with firewalls, the .NET Remoting HTTP channel is explicitly designed to communicate effectively across the Internet. And because it can use SSL, this option can provide a secure path for data. As a rule, the TCP channel is preferable for intranet communication, while the HTTP channel or ASP.NET's SOAP support is better suited to Internet communication.
• Both the .NET Remoting HTTP channel and ASP.NET's support for XML Web services implement SOAP. The two implementations are quite different, however, and each has its purpose. .NET Remoting focuses on preserving the exact semantics of the common language runtime, so it is the best choice when the remote system is also running the .NET Framework. ASP.NET focuses on providing absolutely standard XML Web services, so it is the best choice when the remote system may be running on any platform, .NET-based or not. ASP.NET is also faster than the .NET Remoting HTTP channel. The HTTP channel has its own advantages, though: it allows passing parameters by reference and true asynchronous callbacks, features that ASP.NET's SOAP support does not offer.
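To make the ASP.NET option concrete, a minimal .asmx page might look like the following sketch (the service and method are invented for illustration): ASP.NET exposes each method marked [WebMethod] as an XML Web service operation, generates WSDL describing it, and accepts standard SOAP calls from clients on any platform.

```csharp
// CatalogService.asmx -- a hypothetical ASP.NET XML Web service.
// The directive at the top of the file tells ASP.NET which class
// implements the service:
//
//   <%@ WebService Language="C#" Class="MyShop.CatalogService" %>

using System.Web.Services;

namespace MyShop
{
    public class CatalogService : WebService
    {
        // Exposed as a SOAP-callable operation, described in the
        // service's auto-generated WSDL.
        [WebMethod]
        public decimal GetPrice(string isbn)
        {
            // A real implementation would look the price up in a database.
            return 19.95m;
        }
    }
}
```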
Access stored data
Where ADO made it easy to build heavyweight clients whose scalability was poor, ADO.NET is different: it is better suited to building lightweight clients. An ADO.NET client reads data with a forward-only, read-only cursor; stateful server-side cursors are not supported, so the programming model encourages short connection lifetimes. A client that simply reads and processes data can use the ADO.NET DataReader object, which provides no caching of returned data. Alternatively, data can be read into a DataSet object, which acts as a cache for data returned from SQL queries and other sources. Unlike an ADO recordset, however, a DataSet cannot hold an open connection to the database.
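The two reading styles can be sketched as follows (the connection string, table, and column names are placeholders; both examples follow the lightweight open-late, close-early pattern):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ReadStyles
{
    // Placeholder connection string, for illustration only.
    const string ConnString = "server=SomeServer;database=shop;integrated security=SSPI";

    // 1. DataReader: a forward-only, read-only stream with no caching.
    //    The connection must stay open only while rows are being read.
    static void ReadWithDataReader()
    {
        using (SqlConnection conn = new SqlConnection(ConnString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT Title FROM Books", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }   // connection closed here
    }

    // 2. DataSet: an in-memory cache filled by a DataAdapter. Fill()
    //    opens and closes the connection itself, so nothing is held
    //    open while the cached data is used.
    static DataSet ReadWithDataSet()
    {
        DataSet ds = new DataSet();
        using (SqlConnection conn = new SqlConnection(ConnString))
        {
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Books", conn);
            adapter.Fill(ds, "Books");
        }
        return ds;   // fully usable with no open connection
    }
}
```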
As described earlier, the heavyweight ADO approach also addressed some other problems. Here is how they can be handled in ADO.NET:
• An ADO.NET client that stores data in a DataSet and needs to see changes made by other users or applications must include explicit code to check for those changes. That code will typically open a database connection for each check it performs.
• Although ADO.NET does not directly support pessimistic locking, a client can achieve the same effect by using an ADO.NET transaction or by doing the locking in a stored procedure.
• Unlike ADO with its server-side cursors, ADO.NET offers no way to leave part of a query's results in the database. Although ADO.NET does retrieve less metadata than ADO, applications should still be designed to move complete query results from the database to the ADO.NET client.
Another ADO.NET change affecting architectural choices is its support for working with hierarchical data, especially XML documents. Converting an ADO.NET DataSet to and from XML is straightforward, and ADO.NET can also access the XML features of SQL Server 2000 directly. As a result, hierarchical data that would have been forced into the relational model in a Windows DNA environment can now be worked with in its native form. For more information, see the Related Reading section.
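The DataSet/XML duality is easy to demonstrate without a database at all; the little catalog table below is invented for the example:

```csharp
using System;
using System.Data;
using System.IO;

class DataSetXmlDemo
{
    public static string RoundTrip()
    {
        // Build a small DataSet entirely in memory.
        DataSet ds = new DataSet("Catalog");
        DataTable books = ds.Tables.Add("Books");
        books.Columns.Add("ISBN", typeof(string));
        books.Columns.Add("Title", typeof(string));
        books.Rows.Add(new object[] { "0-7356-1578-3", "Understanding .NET" });

        // One call turns the whole DataSet into XML...
        string xml = ds.GetXml();

        // ...and the XML can be loaded back into a DataSet just as easily.
        DataSet copy = new DataSet();
        copy.ReadXml(new StringReader(xml));
        return (string)copy.Tables["Books"].Rows[0]["Title"];
    }

    static void Main()
    {
        Console.WriteLine(RoundTrip());
    }
}
```

Running the program prints the round-tripped title, showing that nothing was lost crossing the XML boundary.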
Pass data to the client
Transferring data efficiently to the client matters just as much for applications built on the .NET Framework as for those built with Windows DNA. One important change is that an ADO.NET DataSet can be automatically serialized into XML, which makes moving data between tiers simpler. While passing data as XML was also possible in Windows DNA, .NET makes it much more direct.
XML Web Service Architecture
In building distributed applications, SOAP, the Web Services Description Language (WSDL), and the other technologies of XML Web services can be applied in a variety of ways. For example:
• Connecting Web clients to an N-tier application using SOAP rather than plain HTTP. The client can then be any device capable of making SOAP calls, and it can offer its users more functionality because it can invoke methods on the remote server directly.
• Connecting an N-tier application built on the .NET Framework to another application built on a different platform, such as a Java application server.
• Connecting two large applications, such as an enterprise resource planning (ERP) system and another ERP system, or any other pair of applications.
As these examples show, XML Web services are not only for N-tier applications; their range of application is very broad.
However they are used, XML Web services raise a number of new architectural issues. Perhaps the most important difference between XML Web services and typical N-tier applications is that XML Web services are loosely coupled. Unfortunately, this term means different things to different people. In this article, it refers to communicating applications with the following characteristics:
• The applications are largely independent of one another, and are often controlled by different organizations.
• Communication between them is not perfectly reliable; there is no guarantee that every communicating application will be available at all times.
• Their interactions may be synchronous or asynchronous. A Web service client might block while waiting for the response to a request, or it might go do something else after sending a request and check for the response later.
These fundamental characteristics suggest some broad architectural principles for applications that use XML Web services. Although future work, such as Microsoft's Global XML Web Services Architecture (GXA) specifications, may address some of these issues, anyone creating an effective XML Web services application today must understand them. They include:
• Security can be more complicated. Planning ahead for end-to-end authentication and effective authorization is very important, and end-to-end data integrity and data confidentiality matter for some applications as well. Mapping between different security mechanisms may be required, although it is best avoided when possible. For more information, see the Related Reading section.
• Interoperability can be a problem. Because the specifications are relatively immature, SOAP implementations from different vendors do not always work well together. For more information, see the Related Reading section.
• Modifying an existing application so that it can be reached through an XML Web service can expose problems. Whenever programs never intended to work together are connected, issues of speed, scalability, and security tend to appear. Existing applications were often built to serve a modest number of clients, so floods of small requests can easily overwhelm them; reducing the number of requests while increasing the amount of data carried in each one may improve performance. Existing applications also frequently cannot handle the load they will now face, such as the load generated once they are exposed on the Internet. If possible, placing some kind of queuing mechanism in front of the application to hold requests until they can be serviced may help.
• Planning for failure is essential. In particular, requests that must be executed exactly once need special care. For example, a request might time out and trigger a retry, when in fact the original request was merely delayed for some reason. If executing the same Web service call twice would cause problems, a mechanism must be created to deal with this.
• End-to-end transactions that rely on distributed locking across organizational boundaries are not possible. Most organizations will not allow an "outside" application to lock their data, so two-phase-commit-style transactions cannot be used. Compensating transactions are the only real option for any necessary rollbacks.
• Because incoming data may cross application and organizational boundaries, each party in a Web service exchange may need to check the data it receives carefully. An application's creators may trust data generated by other parts of their own application, but they cannot extend the same trust to other applications. The information received might even contain malicious code, so careful checking is a must.
• SOAP and its XML-based wire format are verbose. Passing too much data in one call can swamp a low-bandwidth network; conversely, passing too little data per call can swamp the application that handles the requests. Finding the right balance is difficult, but important. For more information, see the Related Reading section.
Summary
Architecture matters. Choosing the correct architecture for an application, especially one distributed across multiple systems, is critical. If the wrong architecture is chosen, no developer, however talented, is likely to fix it during implementation. Wrong decisions lead to lower performance, weaker security, and fewer options when the application needs to be updated.
Windows DNA laid a solid foundation for N-tier applications, and Windows developers can carry most of what they learned in the DNA world into the new .NET environment. Understanding the changes described in this article, however, will help in creating faster, more secure, and more capable applications. For both N-tier applications and new Web services applications, .NET has a great deal to offer.
Related reading
Windows DNA environment
Architecture Decisions for Dynamic Web Applications: Performance, Scalability, and Reliability
Duwamish Online Application Architecture
.NET environment
Performance Comparison: Transaction Control
Performance Comparison: Data Access Techniques
Questions in the .NET Architectural Sample Applications
XML Web services architecture
Real SOAP Security
Designing Your Web Service for Maximum Interoperability
DIME: Sending Binary Data with Your SOAP Messages
ADO.NET Database Programming in .NET PetShop and Duwamish
Overview
ADO.NET gives us powerful database development capabilities, and its several built-in objects offer different options for database access. But along with choice comes confusion, especially for beginners: should I use a DataReader or a DataAdapter? If I only want to read a small amount of data, do I really have to Fill an entire DataSet? Why doesn't DataReader provide data-update methods the way Recordset did? What do I actually gain from a DataSet?
In this article, I will briefly analyze and compare the database programming models of .NET PetShop and Duwamish. If you have questions like those above, I believe that after reading this article you will be able to choose the database programming model best suited to your application's specific needs.
A quick introduction to .NET PetShop and Duwamish
Surely everyone has heard of the famous "pet shop war." Yes, one of the protagonists of this article is its winner, .NET PetShop, which left the J2EE-based Pet Store far behind, running many times faster with about a quarter of the code. Sun has complained that the contest was rigged, but in any case .NET PetShop is a classic .NET sample tutorial, and at the very least a shortcut for those of us catching up from J2EE :). It can be downloaded from: http://www.gotdotNet.com/team/compare
The .NET PetShop online pet store home page
Duwamish, on the other hand, is a simple-looking but far-from-simple complete .NET application sample: an online bookstore. As an official Microsoft sample it ships in both C# and VB.NET versions, and it comes with a wealth of detailed Chinese documentation; printed out, it is essential reading at home, on the road, even in the bathroom. What, you have never heard of it? Well, if you installed Visual Studio .NET it is lying quietly on your hard drive; if it was not installed, you can find and install it from the Enterprise Samples directory of VS.NET, for example: C:\Program Files\Microsoft Visual Studio .NET\Enterprise Samples\Duwamish 7.0 CS.
The Duwamish online bookstore home page
A brief look at the structures
Both stores are N-tier applications (and there is no doubt that an N-tier architecture should be your first choice for any .NET application, even if you are only building a hit counter). The difference is that PetShop uses the most common three-tier structure: a presentation tier, a middle tier, and a data tier. Duwamish uses a four-tier structure, separated into different projects: a presentation tier, a business facade tier, a business rules tier, and a data tier. As for the strengths and weaknesses of these two structures, and the reasons behind them, we will not go into detail, because that is not the focus of this article. We will concentrate on analyzing their database programming models.
2002-10-18 · Lu Yan (卢彦) · ASP.NET Chinese Professional Network
DUWAMISH Data Access Analysis
First, let's look at the Duwamish bookstore. It uses the DataAdapter plus DataSet storage model, but with a twist: it derives subclasses from DataSet to act as data carriers, and uses these custom DataSets to pass data between the tiers. Here is an example of one such custom DataSet:
public class BookData : DataSet
{
    public BookData()
    {
        //
        // Create the tables in the DataSet
        //
        BuildDataTables();
    }

    private void BuildDataTables()
    {
        //
        // Create the Books table
        //
        DataTable table = new DataTable(BOOKS_TABLE);
        DataColumnCollection columns = table.Columns;
        columns.Add(PKID_FIELD, typeof(System.Int32));
        columns.Add(TYPE_ID_FIELD, typeof(System.Int32));
        columns.Add(PUBLISHER_ID_FIELD, typeof(System.Int32));
        columns.Add(PUBLICATION_YEAR_FIELD, typeof(System.Int16));
        columns.Add(ISBN_FIELD, typeof(System.String));
        columns.Add(IMAGE_FILE_SPEC_FIELD, typeof(System.String));
        columns.Add(TITLE_FIELD, typeof(System.String));
        columns.Add(DESCRIPTION_FIELD, typeof(System.String));
        columns.Add(UNIT_PRICE_FIELD, typeof(System.Decimal));
        columns.Add(UNIT_COST_FIELD, typeof(System.Decimal));
        columns.Add(ITEM_TYPE_FIELD, typeof(System.String));
        columns.Add(PUBLISHER_NAME_FIELD, typeof(System.String));
        this.Tables.Add(table);
    }
    ......
}
As you can see, it has a BuildDataTables method, which is called in the constructor to bind the custom Books table to the DataSet, preserving the column mappings in the process. A nice idea; why didn't I think of it? :) Having covered the data structure, let's look at the data layer's code. In Duwamish the data layer contains five classes, including Books, Categories, Customers, and Orders, and each class is responsible only for access to its own data. Below is sample code from one of these classes:
private SqlDataAdapter dsCommand;

public BookData GetBookById(int bookId)
{
    return FillBookData("GetBookById", "@BookId", bookId.ToString());
}

private BookData FillBookData(String commandText, String paramName, String paramValue)
{
    if (dsCommand == null)
    {
        throw new System.ObjectDisposedException(GetType().FullName);
    }
    BookData data = new BookData();
    SqlCommand command = dsCommand.SelectCommand;
    command.CommandText = commandText;
    command.CommandType = CommandType.StoredProcedure; // use stored proc for perf
    SqlParameter param = new SqlParameter(paramName, SqlDbType.NVarChar, 255);
    param.Value = paramValue;
    command.Parameters.Add(param);
    dsCommand.Fill(data);
    return data;
}
This is data-layer code, and we can see that Duwamish uses a DataAdapter to fill the custom DataSet and then returns it. What puzzled me at first is that a concrete data access method such as GetBookById is visible right here in the data access layer, albeit above the more abstract FillBookData; with three layers stacked on top of this one, what is left for them to do? The answer is data validation: the upper layers mostly perform very strict checks on the legitimacy of the data (there is also some more complex business logic, but not much). Sample code follows:
public CustomerData GetCustomerByEmail(String emailAddress, String password)
{
    //
    // Check preconditions
    //
    ApplicationAssert.CheckCondition(emailAddress != String.Empty, "Email address is required",
        ApplicationAssert.LineNumber);
    ApplicationAssert.CheckCondition(password != String.Empty, "Password is required", ApplicationAssert.LineNumber);
    //
    // Get the customer DataSet
    //
    CustomerData dataSet;
    using (DataAccess.Customers customersDataAccess = new DataAccess.Customers())
    {
        dataSet = customersDataAccess.LoadCustomerByEmail(emailAddress);
    }
    //
    // Verify the customer's password
    //
    DataRowCollection rows = dataSet.Tables[CustomerData.CUSTOMERS_TABLE].Rows;
    if ((rows.Count == 1) && rows[0][CustomerData.PASSWORD_FIELD].Equals(password))
    {
        return dataSet;
    }
    else
    {
        return null;
    }
}
In this whole method, the only line that actually performs data access is:
dataSet = customersDataAccess.LoadCustomerByEmail(emailAddress);
which calls straight through to the data layer. Everything else is validity checking, which gives a sense of how seriously robustness must be taken in real enterprise development.