N-layer distributed application development based on .NET Framework


Keywords: distributed processing, DCOM/CORBA, Web Service, .NET Framework, N-layer model, client/server, data transfer, remote communication

Topic: Building maintainable, scalable sites; developing efficient, highly scalable applications; creating N-layer distributed applications; and achieving cross-platform integration across the Internet are tasks facing countless developers. Traditional development methods and technologies struggle with these tasks.

Many new technologies introduced by the .NET Framework offer a relatively simple way to accomplish these tasks. Among them, the SOAP-based Web Service has significant advantages over traditional DCOM/CORBA for distributed applications. Combined with ASP.NET page development for the Web and SQL Server data storage (or XML documents), developing N-layer applications under .NET is no longer difficult.

I. Distributed Processing Overview

Distributed processing means spreading application logic across two or more computers, or across physically or logically separate units. The concept is not new; it has long been used in large projects. The Internet, however, gives distributed processing a new character: it can bring hundreds of thousands of computers to bear on a single task, enabling distributed processing on a far larger scale and going beyond the traditional client/server model. Distributed processing has evolved over a long period; developers of different IT eras and vendors at every level have contributed a great deal of work, so the protocols that support it are abundant.

1. DCOM/CORBA

Before the .NET Framework, the main protocols for component-based distributed computing were CORBA (Common Object Request Broker Architecture), from the Object Management Group, and Microsoft's DCOM (Distributed Component Object Model). DCOM is connection-oriented: a DCOM client holds a connection to the DCOM server, and this connection model creates technical problems. For example, a client may hold a reference for a long time, issuing a call only when the user clicks a button; the server sits idle all that time waiting for the client's request. And when a client crashes while the server is handling its request, serious consequences can follow.
Moreover, on the Internet a DCOM or CORBA server may be used by thousands of clients. Since each client holds a connection to the server, server resources must be protected against clients that rarely use the server or no longer use it at all. DCOM has ways to deal with these issues, but they add considerable complexity, and this is one of the problems Web services try to solve.

2. Web Services

With the launch of the Microsoft .NET Framework, distributed processing gained a new technology: the Web service. A Web service can provide data to another application, not just to a browser, and can perform operations on behalf of external callers, allowing other clients to reach it using standard protocols (such as HTTP) over well-known ports and transport layers.

II. Web Services: Distributed Processing Technology under the .NET Framework

In the .NET Framework, a Web service is a unit of application logic that programs can access through standard Web protocols, independent of platform. The designers of the .NET Framework built Web services on open standards usable from any platform, giving them the potential to integrate technologies across platforms and across vendors. With Web services and the Web service architecture in place, users can draw on many technologies that already exist on the Internet. The key to the success of Web services is that they rest on open standards backed by major vendors such as Microsoft, IBM, and Sun.

1. Web Services as a Solution to the Difficulties Facing DCOM/CORBA

DCOM and CORBA work very well on closely managed LANs running uniform software.

However, they fall short when it comes to creating applications that span platforms, span the Internet, and scale with the Internet; they were not designed for those goals. Most companies face the reality that they run a variety of platforms from multiple vendors, and applications running on different platforms have difficulty cooperating, as do traditional distributed architectures used in cooperation with business partners. The problem with DCOM and CORBA is that the user must essentially rely on a single vendor. To use DCOM, both server and client must run Microsoft Windows; DCOM implementations exist on other platforms, but they have not been widely adopted. CORBA is offered by multiple vendors, but interoperability is achieved only in simple cases, and integration between DCOM and CORBA is barely practical. From a technical perspective, Web services attempt to solve the problems encountered with technologies such as CORBA and DCOM: getting through firewalls, protocol complexity, integration of heterogeneous platforms, and so on.

2. Applying Web Services in Distributed Processing

The Web service is an excellent distributed processing technology. The figure below shows the general shape of distributed processing under the .NET Framework. A client application can be a traditional Windows Forms application, a Web-based ASP.NET application, a cellular mobile application, and so on, or it can be another Web service. These client applications form the presentation layer of the N-layer model (the left column in the figure) and handle data display. The middle column is the intermediate layer, which handles business logic; the right column is the data layer, which handles data storage. With the continued standardization of the XML-based Simple Object Access Protocol (SOAP), Web services are becoming the standard way to interact with other servers and applications.
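To make the SOAP message concrete, here is a minimal sketch, in Python rather than .NET, of the kind of SOAP 1.1 envelope a client sends when calling a Web service method. The GetPrice method, the pricing namespace URI, and the parameter are illustrative assumptions, not part of the original text; only the SOAP envelope namespace is standard.

```python
# Sketch: build a SOAP 1.1 request envelope for a hypothetical
# web service method GetPrice(symbol). The service namespace and
# method name are invented for illustration.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # standard SOAP 1.1 namespace
SERVICE_NS = "http://example.org/pricing"               # hypothetical service namespace

def build_soap_request(method: str, params: dict) -> bytes:
    """Build a SOAP 1.1 request envelope for the given method call."""
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    call = ET.SubElement(body, f"{{{SERVICE_NS}}}{method}")
    for name, value in params.items():
        param = ET.SubElement(call, f"{{{SERVICE_NS}}}{name}")
        param.text = str(value)
    return ET.tostring(envelope, xml_declaration=True, encoding="utf-8")

request = build_soap_request("GetPrice", {"symbol": "MSFT"})
print(request.decode("utf-8"))
```

An envelope like this is what actually crosses the network (typically as the body of an HTTP POST) when the client "calls a method" on the Web service.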
3. Client Consumption of Web Services under the N-Layer Model

The figure shows a client consuming a Web service. The client can be a Web application, another Web service, a word processor such as Microsoft Word, and the like. The consumer calls a method on the Web service, say Method(). The call propagates down through the layers on the client, travels across the network as a SOAP message, and propagates up through the layers on the server to the Web service, which executes and returns a response (if any). Two-way notification, or publish/subscribe, between a Web service and its client is possible but must be implemented manually: the client can expose its own Web service and pass a reference to it in a call to the server; the server saves the reference and later calls the client back.

III. N-Layer Architecture Design Based on the .NET Framework

Object-oriented, modular component design requires that the various parts of an application be easy to modify. One good way to achieve this is to work in layers, separating the main functions of an application into different layers or levels. The .NET Framework provides rich support for creating maintainable, scalable layered patterns, making the N-layer model a full replacement for the traditional client/server model and a close fit with the Internet.

1. The Hierarchical Model

In essence, a layer represents one main area of an application's functionality. In general, we divide application functionality into three aspects, corresponding to the 3-layer architecture model: the data layer, the business layer, and the presentation layer.

Data layer: the components or services that make up the data store and interact with it. These components and services are functionally independent of the intermediate layer (though not necessarily physically independent; they may be on the same server).
Intermediate layer: one or more components or services that apply business rules, implement application logic, and carry out the data processing the application needs. As part of this work, the intermediate layer is responsible for processing data retrieved from the data store and for sending data back to be stored.

Presentation layer: obtains information from the intermediate layer and displays it to the user. This layer is also responsible for interacting with the user, collecting input, and returning it to the intermediate layer. In other words, the data layer retrieves raw data from the database; the business layer turns that data into meaningful information that satisfies the business rules; and the presentation layer turns that information into content meaningful to the user. This layered design is useful because each layer can be modified independently. We can modify the business layer so that it keeps accepting the same data from the data layer and keeps delivering data to the presentation layer, without worrying about side effects. We can likewise modify the presentation layer, changing the site's appearance without touching the business logic beneath it.

2. Common N-Layer Model Designs

As noted above, the layers of an N-layer application are not defined by the physical structure (hardware) on which the application runs. A layer is a logical function of the running application and marks a distinct stage in the tasks the application performs. N-layer design in this sense differs considerably from the classic client/server architecture.

1) Designing a simple 3-layer model

The simplest N-layer model is the 3-layer model. Here we have a server and a client separated by a network. The server contains the data store and the data-access components that make up the data layer, plus the components that make up the business logic of the intermediate layer. The client only needs to provide an interface to the application, acting as the presentation layer. In this simplest case we might have a relational database together with components or stored procedures for accessing the data.
We would then have an ASP.NET page that calls the components or stored procedures to extract information, processes and formats that information for the specific client, and sends it to the client over the network. All the client has to do is display the information, collect the user's input, and send it back to the intermediate layer.

2) Designing a 3-layer model closer to reality

The previous example, however, is far smaller than typical applications and is hard to find in the real world. Data storage usually lives on specially chosen hardware. It may be on a group of Windows servers running SQL Server, but it may equally be on a non-Windows platform, on Oracle, or on another database server. In this case the separation between the data layer and the intermediate layer is more obvious: there is a network between them. Also, the business logic is confined to the intermediate layer, which performs all data processing.

3) Designing an N-layer model

The scenario above clearly assumes two things: first, that the client is a low-end device (and therefore takes no part in the actual data processing the application requires); and second, that all business rules live in the intermediate layer. Real applications rarely match these assumptions. For example, we often want to enforce business rules somewhere other than the middle. It can be appropriate to apply some business logic at the early stage of data extraction, implementing it in the components that access the data store. This "package" of business logic can then live on the same server as the data, or even on a separate intermediate routing server. Furthermore, to take advantage of "fat clients" and reduce the latency caused by network load and round trips over the access path, we can place some business logic on the client.
The figure below shows this change, where business logic has been peeled away from the intermediate layer and placed on the data server or on the client. Notice that there is no intermediate storage and almost no intermediate data processing, so this arrangement is more efficient.

4) Designing a more realistic N-layer model

When we use one or more separate servers to hold the data store, the distribution of business logic becomes even more dispersed. The figure below shows three machines separated by two networks. The business logic is now split three ways: some runs on the data server, some runs on the intermediate-layer server, and some runs on the client.
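The division of work among the three layers described above can be sketched with ordinary functions. This is a conceptual illustration in Python rather than .NET, and every name in it (fetch_orders, order_summary, render_html) and the discount rule are hypothetical; the point is only that each layer can be modified, or relocated to another tier, without disturbing the others.

```python
# Conceptual 3-layer sketch (Python stand-in for the .NET layers).

# Data layer: talks to the data store; here an in-memory stand-in
# for a database table.
def fetch_orders(customer_id: int) -> list:
    rows = [
        {"customer_id": 1, "item": "widget", "qty": 3, "unit_price": 2.50},
        {"customer_id": 1, "item": "gadget", "qty": 1, "unit_price": 9.99},
        {"customer_id": 2, "item": "widget", "qty": 7, "unit_price": 2.50},
    ]
    return [r for r in rows if r["customer_id"] == customer_id]

# Business (intermediate) layer: applies business rules to raw data.
def order_summary(customer_id: int) -> dict:
    orders = fetch_orders(customer_id)
    total = sum(r["qty"] * r["unit_price"] for r in orders)
    if total > 10.00:          # illustrative business rule:
        total *= 0.90          # orders over 10.00 earn a 10% discount
    return {"customer_id": customer_id,
            "order_count": len(orders),
            "total": round(total, 2)}

# Presentation layer: formats the information for a particular client
# (HTML here; it could equally emit WML for a phone).
def render_html(summary: dict) -> str:
    return (f"<p>Customer {summary['customer_id']}: "
            f"{summary['order_count']} orders, total {summary['total']:.2f}</p>")

print(render_html(order_summary(1)))  # → <p>Customer 1: 2 orders, total 15.74</p>
```

Because render_html depends only on the summary dictionary, the presentation can change freely; because order_summary depends only on fetch_orders' output shape, the data store behind it can change freely as well.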

As we can see, the exact definition of each layer is not straightforward. The true meaning of the "intermediate layer" is the business logic itself, and the different elements of that business logic may end up, somewhat unpredictably, on different servers.

3. Inter-Layer (Remote) Transport Objects and Technologies under the .NET Framework

The .NET Framework implements many new technologies to support multi-layer distributed processing. It provides a wealth of libraries, objects, and methods that make data transfer between layers, whether physically or only logically separated, much simpler.

1) Objects supporting remote data transfer:

- the ADO.NET DataSet object
- the ADO.NET DataTable object
- the XmlDocument object
- the XmlDataDocument object

2) Classes and methods supporting remote data transfer:

- The serialization classes. Serialization is the process of converting an object in one process into a format that can be copied to another process. The remotely transferable objects listed above can serialize their entire contents for transmission through a channel, either directly over TCP/IP or via HTTP. They can be deserialized at the other end, so the client receives a complete copy of the original object.

- The System.Runtime.Remoting classes. The objects provided in the System.Runtime.Remoting namespace can be used to create proxies for objects in order to support remote access. In this case the object itself remains on the server, and the client receives only a reference to a proxy object. The proxy stands in for the original server-based object (this is how a DataReader is used remotely), as the figure below illustrates. To the client, the proxy exposes the same methods and properties as the original object; when the client interacts with the proxy, each call is automatically serialized and transmitted to the server over the channel (the network).
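The serialization path described above can be illustrated with Python's pickle module standing in for the .NET serialization classes. The DataSet-like dictionary is a hypothetical stand-in; the point is that the bytes that cross the channel reconstruct a complete, independent copy on the client, unlike the proxy approach, where the object stays on the server.

```python
# Sketch of serialization-based remote transfer (Python's pickle in
# place of the .NET Framework serialization classes).
import pickle

# A stand-in for a DataSet: a table name plus rows.
dataset = {"table": "Customers",
           "rows": [{"id": 1, "name": "Ada"},
                    {"id": 2, "name": "Grace"}]}

wire_bytes = pickle.dumps(dataset)      # serialize on the server side
client_copy = pickle.loads(wire_bytes)  # deserialize on the client side

# The client holds a full copy, not a reference to the server's object:
# changing the copy leaves the server's object untouched.
client_copy["rows"].append({"id": 3, "name": "Edsger"})
print(len(dataset["rows"]), len(client_copy["rows"]))  # → 2 3
```

With the proxy technique the situation is reversed: nothing but a reference crosses the channel, and every method call travels back to the server.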
Any response or result is then transferred back to the client through the same channel. Both remoting techniques allow clients to use objects created on the server. We can serialize a DataSet object or an XML document, and we can equally serialize other objects such as a Hashtable or an array.

4. Data Processing and Object Selection in the N-Layer Model

The first thing to consider is what you intend to do with the data extracted from the data store. The answer influences the choice of objects more than anything else, and to some extent even determines how well the task can be performed.

1) Data used only for display

If the data only needs to be displayed to an end user in a fixed format, remote data transfer is generally unnecessary. We do not need to send all of the data across the wire to the client; we can send only the final display information, in any format the client accepts. In this case the forward-only "reader" objects are the ideal and fastest technique. Combined with server controls that perform server-side data binding, they give us an efficient way to display data.

2) Data requiring remote transfer

If we need to transfer data remotely, however, there is a problem: the fast, efficient "reader" objects can only be remoted by reference. When a DataReader is "transferred" to a client, it actually remains on the server, even though the client application can use it. In that case we are not really transferring the data remotely; we are using a remoted object. This situation arises in many cases. To implement genuine remote transfer, the data must be held in an object that can store the data and let code read it as needed, repeatedly, without extra round trips to the data store. In ADO.NET that object is the DataSet (or the DataTable). With XML there are several objects to choose from: both the XmlDocument and the XmlDataDocument can be remoted, and both can hold their content and be passed between the layers of an application.

IV. N-Layer Distributed Data Processing Architecture Models

To understand further how functionality is divided among the layers, we need to decide how the data will be displayed and whether it must be updated and returned to the server.

1. Performing All Processing on the Server

The most common way to display data on the client is to perform all data processing on one or more servers. The data layer and the intermediate layer are confined to the servers, and the client provides only the interface. For a Web browser the usual format is HTML; for a cellular telephone or similar device it may be WML; and so on. In the figure below, a stored procedure or SQL statement extracts the required data, which is then processed with ASP.NET or exposed through a Web service. Alternatively, the data can be extracted from the data store as an XML fragment, then processed and supplied to the client. If the data is stored in an XML document, or stored in a format that the data layer exposes as XML, we have further options; the figure below shows how XML data can be extracted and processed for delivery to the client. Here too the extraction uses a "reader" object, and different techniques can then process the data and provide it to the client.

2. Extending the Intermediate Layer

Although data extraction and processing often happen in a single object, such as an ASP.NET page, taking full advantage of component-based design usually calls for a finer-grained architecture. Business rules should be applied to the data before it is displayed or transferred to the client, whether to achieve distributed processing or simply to provide reusability and make the application easier to maintain. For example, several pages might access a data store and extract a set of customer records. By building this process into a component that supplies the data to the ASP.NET pages or other consumers, we introduce a dedicated data-extraction layer. If we later need to change the data store or the data structure in some way, or change the rules for accessing it, we can replace the component with a new version. As long as the component's interface is unchanged, every application that uses it sees the same output and continues to run as before, while the internal methods that extract and process data from the data store can be modified as needed. The figure below shows this architecture. Naturally, the process can use multiple components. If the data extraction is quite complex, or the same data is used in several places, it can make sense to decompose the data processing further, into more component layers. For example, one component might return all of the required rows (in key order), and other components might consume them in a different order, or use only some of the columns, acting as data sources in their own right.

3. Moving Data Processing to the Client

To move data processing onto the client, we can use client-side script (JavaScript, VBScript, or WMLScript), client components written in Java or in a platform-specific language, or full clients written in tools such as Visual Basic 6.0, C++, or Delphi.
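The component-replacement idea from the discussion of the extended intermediate layer can be sketched as follows. This is a Python illustration, not .NET code, and both "components" and all their names are hypothetical stand-ins for data-access components: as long as the interface is stable, callers cannot tell which implementation (or which data store) is behind it.

```python
# Sketch: swapping a data-access component behind a stable interface.
from typing import Protocol

class CustomerSource(Protocol):
    """The stable interface that callers depend on."""
    def customers(self) -> list: ...

class SqlCustomerSource:
    """Original component: imagine this queries SQL Server."""
    def customers(self) -> list:
        return ["Ada", "Grace"]

class XmlCustomerSource:
    """Replacement component: imagine this reads an XML document."""
    def customers(self) -> list:
        return ["Ada", "Grace"]

def presentation(source: CustomerSource) -> str:
    # The caller depends only on the interface, never on the
    # implementation, so the component can be replaced freely.
    return ", ".join(source.customers())

print(presentation(SqlCustomerSource()))  # → Ada, Grace
print(presentation(XmlCustomerSource()))  # → Ada, Grace
```

Replacing SqlCustomerSource with XmlCustomerSource changes the data store without touching the presentation code, which is exactly the maintainability benefit the extended intermediate layer is meant to provide.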

