.NET Data Access Architecture Guide (transferred from MSDN)


.NET Data Access Architecture Guide

Release Date: 6/17/2004

Update Date: 6/17/2004

Related Links

Patterns & Practices INDEX

Application Architecture for .NET: Designing Applications and Services

Alex Mackman, Chris Brooks, Steve Busby, Ed Jezierski, Jason Hogg, Roberta Leibovitz (Modeled Computation) and Colin Campbell (Modeled Computation)

Microsoft Corporation

Abstract: This article provides guidelines for implementing an ADO.NET-based data access layer in a multi-tier .NET application. It focuses on a range of common data access tasks and scenarios, and offers guidance to help you choose the most appropriate approaches and techniques.

Introduction

If you are designing the data access layer of a .NET-based application, you should use Microsoft® ADO.NET as your data access model. ADO.NET is feature rich and supports the data access requirements of loosely coupled, multi-tier Web applications and Web services. Like other rich functionality, ADO.NET often provides several ways to solve a particular problem.

The .NET Data Access Architecture Guide helps you choose the most appropriate data access approach. It does this by describing a wide range of common data access scenarios, providing performance tips, and prescribing best practices. This guide also answers frequently asked questions, such as: Where is the best place to store database connection strings? How should connection pooling be implemented? How should transactions be handled? How can paging be implemented so that users can scroll through a large number of records?

This guide focuses on using ADO.NET to access Microsoft SQL Server™ 2000 through the SQL Server .NET data provider, one of the two data providers shipped with ADO.NET. Where appropriate, this guide highlights the differences you should be aware of when you use the OLE DB .NET data provider to access other OLE DB-aware data sources.

For a concrete implementation of a data access component developed using the guidelines and best practices discussed here, see the Data Access Application Block. The Data Access Application Block includes the implementation's source code, which you can use directly in your .NET-based applications.

The .NET Data Access Architecture Guide is divided into the following sections:


Introducing ADO.NET
Managing Database Connections
Error Handling
Performance
Connecting Through Firewalls
Handling BLOBs
Performing Database Updates with DataSets
Using Strongly Typed DataSet Objects
Working with Null Data Fields
Transactions
Data Paging
Appendix

Who should read this document

This document provides guidance for application architects and enterprise developers who build .NET-based applications. Read this document if you are responsible for designing and developing the data layer of a multi-tier .NET application.

Prerequisite knowledge

To use this guide to build .NET-based applications, you should have experience developing data access code with ActiveX® Data Objects (ADO) and/or OLE DB, as well as experience with SQL Server. You should know how to develop managed code for the .NET platform, and you should understand the fundamental changes introduced by the ADO.NET data access model. For more information about .NET development, see http://msdn.microsoft.com/net.

What's new

This document has been updated with sections on performing database updates with DataSets, using typed DataSets, and working with null data fields.

As noted in the body text, parts of this guide apply specifically to the Microsoft Visual Studio® .NET 2003 development system and version 1.1 of the .NET Framework SDK.

Download the .NET Data Access Architecture Guide

Click here to download the .NET Data Access Architecture Guide from the Microsoft.com Download Center.

Introduction to ADO.NET

ADO.NET is the data access model for .NET-based applications. It can be used to access relational database systems such as SQL Server 2000 and Oracle, as well as other data sources for which there is an OLE DB or ODBC provider. To a certain extent, ADO.NET represents the latest evolution of ADO technology. However, ADO.NET introduces some major changes and innovations aimed at the loosely coupled, inherently disconnected nature of Web applications. For a comparison of ADO and ADO.NET, see the MSDN article "ADO.NET for the ADO Programmer" at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/adonetprogmsdn.asp.

One of the major changes that ADO.NET introduces is the replacement of the ADO Recordset object with a combination of the DataTable, DataSet, DataAdapter, and DataReader objects. A DataTable represents a collection of rows from a single table, and in this respect is similar to the Recordset. A DataSet represents a collection of DataTable objects, together with the relationships and constraints that bind those tables together. In effect, the DataSet is an in-memory relational structure with built-in XML support.

One of the key characteristics of the DataSet is that it has no knowledge of the underlying data source that might have been used to populate it. It is a disconnected, standalone entity used to represent a set of data, and it can be passed between components through the different layers of a multi-tier application. It can also be serialized into an XML data stream, which makes it ideally suited for transferring data between heterogeneous platforms. ADO.NET uses the DataAdapter object to channel data between the DataSet and the underlying data source. The DataAdapter object also provides enhancements to the batch update features previously associated with the ADO Recordset.

Figure 1 shows a complete DataSet object model.

Figure 1. DataSet object model

.NET data provider

ADO.NET relies on the services of .NET data providers. These providers give access to the underlying data source, and they comprise four main objects (Connection, Command, DataReader, and DataAdapter).

Currently, ADO.NET ships with two categories of providers: bridge providers and native providers. Bridge providers (such as those supplied for OLE DB and ODBC) let you use databases through their earlier data access technologies. Native providers (such as the SQL Server and Oracle providers) typically offer better performance, due in part to the fact that there is one less layer of abstraction.

SQL Server .NET data provider. This is a provider for Microsoft SQL Server version 7.0 and later databases. It is optimized for accessing SQL Server, and it communicates with SQL Server by using SQL Server's native data transfer protocol.

Use this provider whenever you connect to SQL Server 7.0 or SQL Server 2000.

Oracle .NET data provider. The .NET Framework Data Provider for Oracle enables data access to Oracle data sources through Oracle client connectivity software. This data provider supports Oracle client software version 8.1.7 and later.

OLE DB .NET data provider. This is a managed provider for OLE DB data sources. It is slightly less efficient than the SQL Server .NET data provider, because it calls through the OLE DB layer when communicating with the database. Note that this provider does not support MSDASQL, the OLE DB provider for Open Database Connectivity (ODBC). For ODBC data sources, use the ODBC .NET data provider (described later) instead. For a list of OLE DB providers that are compatible with ADO.NET, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpconadonetproviders.asp.

Other .NET data providers currently in beta testing include:

ODBC .NET data provider. The .NET Framework Data Provider for ODBC uses the native ODBC Driver Manager (DM) to support data access by means of COM interoperability.

A managed provider for retrieving XML from SQL Server 2000. The XML for SQL Server Web Release 2 (currently in beta) includes a managed provider specifically for retrieving XML from SQL Server 2000. For more information about this update, see http://msdn.microsoft.com/library/default.asp?url=/nhp/default.asp?contentid=28001300.

For a detailed overview of the different data providers, see ".NET Framework Data Providers" in the .NET Framework Developer's Guide at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconadonetproviders.asp.

Namespace organization

The types (classes, structures, enumerations, and so on) associated with each .NET data provider are located in their own namespaces:

System.Data.SqlClient. Contains the SQL Server .NET data provider types.

System.Data.OracleClient. Contains the Oracle .NET data provider types.

System.Data.OleDb. Contains the OLE DB .NET data provider types.

System.Data.Odbc. Contains the ODBC .NET data provider types.

System.Data. Contains provider-independent types, such as DataSet and DataTable.

Within its associated namespace, each provider supplies an implementation of the Connection, Command, DataReader, and DataAdapter objects. The SqlClient implementations are prefixed with "Sql" and the OleDb implementations are prefixed with "OleDb". For example, the SqlClient implementation of the Connection object is SqlConnection, and the OleDb equivalent is OleDbConnection. Similarly, the two incarnations of the DataAdapter object are SqlDataAdapter and OleDbDataAdapter, respectively.

In this guide, the examples are drawn from the SQL Server object model. Although not described in detail here, the Oracle, OLE DB, and ODBC providers offer similar functionality.

Writing generic code

If you are likely to target different data sources and need to move your code from one data source to another, consider programming against the IDbConnection, IDbCommand, IDataReader, and IDbDataAdapter interfaces located in the System.Data namespace. All implementations of the Connection, Command, DataReader, and DataAdapter objects must support these interfaces.
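As a minimal sketch of this interface-based approach, the following fragment chooses a provider at runtime but performs all data access through the generic System.Data interfaces. The connection strings and the Products query are illustrative assumptions, not values prescribed by this guide.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;

public class GenericDataAccess
{
    // Return a provider-specific connection through the generic interface.
    public static IDbConnection CreateConnection(bool useSqlClient, string connStr)
    {
        if (useSqlClient)
            return new SqlConnection(connStr);
        else
            return new OleDbConnection(connStr);
    }

    // All subsequent code is provider independent.
    public static void ListProducts(IDbConnection conn)
    {
        conn.Open();
        try
        {
            IDbCommand cmd = conn.CreateCommand();
            cmd.CommandText = "SELECT ProductName FROM Products";
            IDataReader reader = cmd.ExecuteReader();
            while (reader.Read())
                Console.WriteLine(reader.GetString(0));
            reader.Close();
        }
        finally
        {
            conn.Close(); // Return the connection to the pool
        }
    }
}
```

Switching data sources then requires only a different connection string and provider flag; the body of ListProducts is unchanged.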

For more information on implementation .NET data provider, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpconimplementingnetDataProvider.asp.

Note, too, that if your application accesses multiple databases through a single object model, you can use the OLE DB or ODBC bridge providers. In this case, weigh the flexibility of database independence against the performance of the application and the degree to which you need database-specific functionality.

Figure 2 shows the data access stack and illustrates how ADO.NET relates to other data access technologies, including ADO and OLE DB. It also shows the two managed providers and the principal objects within the ADO.NET model.

Figure 2. Data access stack

For more information about the evolution of ADO into ADO.NET, see the article "Introducing ADO+: Data Access Services for the Microsoft .NET Framework" in the November 2000 issue of MSDN Magazine, at http://msdn.microsoft.com/msdnmag/issues/1100/adoplus/default.aspx.

Stored procedures versus direct SQL

Most of the code fragments shown in this document use the SqlCommand object to call stored procedures to perform database operations. In some cases, you will not see a SqlCommand object, because the stored procedure name is passed directly to a SqlDataAdapter object. Internally, this still results in the creation of a SqlCommand object.

You should use stored procedures instead of embedded SQL statements, for the following reasons:

Stored procedures can usually improve performance, because the database can optimize the data access plan used by a stored procedure and cache the plan for subsequent reuse.

Security on each stored procedure can be set individually in the database. A client can be granted permission to execute a stored procedure without having any access to the underlying tables.

Stored procedures can simplify maintenance, because it is usually easier to modify a stored procedure than to change a hard-coded SQL statement within a deployed component.

Stored procedures add an extra level of abstraction over the underlying database schema. The client of the stored procedure is isolated from the implementation details of the procedure and from the underlying schema.

Stored procedures can reduce network traffic, because SQL statements can be executed in batches rather than sent from the client in multiple requests.
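To illustrate the pattern used throughout this guide, the following sketch calls a stored procedure through a SqlCommand. The procedure name (GetProductsByCategory), its parameter, and the connection string are hypothetical assumptions introduced for this example only.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class ProductData
{
    public static void ShowProducts(string connStr)
    {
        SqlConnection conn = new SqlConnection(connStr);
        // Call a stored procedure rather than passing inline SQL.
        SqlCommand cmd = new SqlCommand("GetProductsByCategory", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@CategoryID", SqlDbType.Int).Value = 1;

        conn.Open();
        try
        {
            SqlDataReader reader = cmd.ExecuteReader();
            while (reader.Read())
                Console.WriteLine(reader["ProductName"]);
            reader.Close();
        }
        finally
        {
            conn.Close(); // Always return the connection to the pool
        }
    }
}
```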

SQL Server Books Online strongly recommends that you do not create any stored procedures using "sp_" as a name prefix, because such names are reserved for system stored procedures. SQL Server always resolves a stored procedure whose name begins with sp_ in the following order:

1. Look for the stored procedure in the master database.

2. Look for the stored procedure based on any qualifiers provided (database name or owner).

3. Look for the stored procedure using dbo as the owner, if an owner is not specified.

Properties versus constructor arguments

You can set specific property values of ADO.NET objects either through constructor arguments or by setting the properties directly. For example, the following code fragments are functionally equivalent.

// Use constructor arguments to configure the command object
SqlCommand cmd = new SqlCommand("SELECT * FROM Products", conn);

// The above line is functionally equivalent to the following
// three lines, which set properties explicitly
SqlCommand cmd = new SqlCommand();
cmd.Connection = conn;
cmd.CommandText = "SELECT * FROM Products";

From a performance perspective, the difference between the two approaches is insignificant, because setting and getting properties on .NET objects is much more efficient than performing the equivalent operations on COM objects.

Which approach you choose is a matter of personal preference and coding style. The explicit setting of properties does, however, make code easier to understand (particularly if you are unfamiliar with the ADO.NET object model) and easier to debug.

Note Veteran Microsoft Visual Basic® developers were historically warned against creating objects with the "Dim x As New …" construct. In the COM world, this code could short-circuit the COM object creation process, leading to subtle and not-so-subtle bugs. In the .NET world, however, this is no longer an issue.


Managing database connections

Database connections represent a critical, expensive, and limited resource, particularly in a multi-tier Web application. It is imperative that you manage your connections correctly, because your approach can significantly affect the overall scalability of your application. Also think carefully about where to store connection strings; you need a configurable and secure location.

When managing database connections and connection strings, strive to:

Help realize application scalability by multiplexing a pool of database connections across multiple clients.

Adopt configurable, high-performance connection pool strategies.

Use Windows authentication when accessing SQL Server.

Avoid impersonation in the middle tier.

Safely store the connection string.

Open database connections as late as possible and close them as early as possible.

This section discusses connection pooling and helps you choose an appropriate pooling strategy. It also considers how to manage, store, and administer database connection strings. Finally, it presents two coding patterns that you can use to help ensure that connections are closed reliably and returned to the connection pool.

Using connection pooling

With database connection pooling, an application can reuse existing connections from a pool instead of repeatedly establishing new connections to the database. This technique can significantly increase the scalability of an application, because a limited number of database connections can serve a much larger number of clients. It also improves performance, because the significant time required to establish a new connection is avoided.

Data access technologies such as ODBC and OLE DB provide forms of connection pooling, which are configurable to varying degrees. Both approaches are largely transparent to the database client application. OLE DB connection pooling is often referred to as session or resource pooling.

For a general discussion of pooling within Microsoft Data Access Components (MDAC), see "Pooling in the Microsoft Data Access Components" at http://msdn.microsoft.com/library/en-us/dnmdac/html/pooling2.asp.

The ADO.NET data providers supply transparent connection pooling, the exact mechanics of which differ for each provider. This section discusses connection pooling for the:

SQL Server .NET data provider

Oracle .NET data provider

OLE DB .NET data provider

ODBC .NET data provider

SQL Server .NET data provider pool mechanism

If you are using the SQL Server .NET data provider, use the connection pooling support that the provider offers. It is a transaction-aware and highly efficient mechanism implemented internally by the provider, entirely within managed code. Pools are created on a per-application-domain basis, and they are not destroyed until the application domain is unloaded.

You can use this form of connection pooling transparently, but you should be aware of how pools are managed and of the various configuration options that you can use to fine-tune them.

In many cases, the default connection pooling settings of the SQL Server .NET data provider may be sufficient for your application. During the development and testing of your .NET-based application, it is recommended that you simulate projected traffic patterns to determine whether you need to modify the connection pool size.

Developers building scalable, high-performance applications should minimize the amount of time they hold connections, keeping a connection open only while retrieving or updating data. When a connection is closed, it is returned to the connection pool, where it remains available for reuse. In this case, the underlying connection to the database is not severed; however, if connection pooling is disabled, the actual connection to the database is closed.

Developers should be careful not to rely on the garbage collector to release connections, because a connection is not necessarily closed when its reference goes out of scope. This is a common cause of connection leaks, which result in connection exceptions when new connections are requested.
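One way to guarantee that a connection is closed and returned to the pool even when an exception is thrown is to wrap it in a C# using statement (or an equivalent try/finally block). The following sketch uses the guide's Northwind connection string; the Customers query is an illustrative assumption.

```csharp
using System;
using System.Data.SqlClient;

public class ConnectionPattern
{
    public static int CountCustomers()
    {
        // 'using' calls Dispose (and therefore Close) even if an exception
        // occurs, so the connection is always returned to the pool.
        using (SqlConnection conn = new SqlConnection(
            "server=(local);Integrated Security=SSPI;database=Northwind"))
        {
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```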

Configuring the SQL Server .NET data provider connection pool

You configure the connection pool through a set of name-value pairs supplied in the connection string. For example, you can configure whether or not pooling is enabled (it is enabled by default), the maximum and minimum pool sizes, and the length of time that a queued request to open a connection can block. The following example connection string configures the maximum and minimum pool sizes.

"Server=(local); Integrated Security=SSPI; Database=Northwind;
 Max Pool Size=75; Min Pool Size=5"

When a connection is first opened, a pool is created and connections are added to it until the configured minimum is reached. Subsequently, connections can be added to the pool as needed, up to the configured maximum. When the maximum is reached, new requests to open a connection are queued for a configurable duration.

Choosing pool sizes

Establishing a maximum threshold is very important for large-scale systems that manage the concurrent requests of many thousands of clients. You need to monitor connection pool and application performance to determine the optimum pool sizes for your system. The optimum size also depends on the hardware on which SQL Server runs.

During deployment, you may want to reduce the default maximum pool size (currently 100) to help find connection leaks.

If you establish a minimum pool size, you incur a small performance overhead while the pool is initially populated to that level, although the first few clients that connect will benefit. Note that the process of creating new connections is serialized, which means that your server will not be flooded with simultaneous requests while a pool is being initially populated.

For more information about monitoring connection pooling, see the Monitoring connection pooling section in this document. For a complete list of connection pooling connection string keywords, see "Connection Pooling for the SQL Server .NET Data Provider" in the .NET Framework Developer's Guide at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconconnectionpoolingforsqlservernetdataprovider.asp.

More information

When using the SQL Server .NET data provider connection pool, please pay attention to the following aspects:

Connections are pooled through an exact-match algorithm on the connection string. The pooling mechanism is even sensitive to spaces between name-value pairs. For example, the following two connection strings result in two separate pools, because the second contains an extra space character.

SqlConnection conn = new SqlConnection(
  "Integrated Security=SSPI;Database=Northwind");
conn.Open(); // Pool A is created

SqlConnection conn2 = new SqlConnection(
  "Integrated Security=SSPI ; Database=Northwind");
conn2.Open(); // Pool B is created (extra spaces in string)

The connection pool is divided into multiple transaction-specific pools and one pool for connections that are not currently enlisted in a transaction. For threads associated with a particular transaction context, a connection from the appropriate pool (containing connections enlisted with that transaction) is returned. This makes working with enlisted connections a transparent process.

The pooling mechanism of the OLE DB .NET data provider

The OLE DB .NET data provider pools connections by using the underlying services of OLE DB resource pooling. You have several options for configuring resource pooling:

You can use the connection string to configure, enable or disable the resource pool.

You can use the registry.

You can configure resource pooling programmatically.

To avoid deployment issues associated with the registry, do not use the registry to configure OLE DB resource pooling.

For more information about OLE DB resource pooling, see "Resource Pooling" in Chapter 19, "OLE DB Services," at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/oledb/htm/olprcore_chapter19.asp.

Managing connection pooling with pooled objects

As a Windows DNA developer, you were encouraged to disable OLE DB resource pooling and/or ODBC connection pooling and to use COM+ object pooling as a technique for pooling database connections, for two main reasons:

Pool sizes and thresholds can be explicitly configured (in the COM+ catalog).

Performance is improved. The pooled-object approach has been measured to perform about 50 percent better than the nonpooled alternative.

However, because the SQL Server .NET data provider uses pooling internally, you no longer need to develop your own object pooling mechanism (when using this provider). You can thereby avoid the complexities associated with manual transaction enlistment.

If you are using the OLE DB .NET data provider, you may want to consider COM+ object pooling to take advantage of its superior configurability and improved performance. If you develop a pooled object for this purpose, you must disable OLE DB resource pooling and automatic transaction enlistment (for example, by including "OLE DB Services=-4" in the connection string), and you must handle transaction enlistment within your pooled object.

Monitoring connection pooling
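As a sketch, disabling the OLE DB services from the connection string might look like the following; the data source and catalog names are illustrative assumptions.

```csharp
using System.Data.OleDb;

public class PooledObjectConnection
{
    public static OleDbConnection Create()
    {
        // "OLE DB Services=-4" disables OLE DB resource pooling and
        // automatic transaction enlistment, leaving pooling and enlistment
        // to the COM+ pooled object that hosts this connection.
        return new OleDbConnection(
            "Provider=SQLOLEDB; Data Source=(local); " +
            "Initial Catalog=Northwind; Integrated Security=SSPI; " +
            "OLE DB Services=-4");
    }
}
```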

To monitor your application's use of connection pooling, you can use the SQL Server Profiler tool that ships with SQL Server, or the Performance Monitor tool that ships with the Microsoft Windows® 2000 operating system.

To monitor connection pooling with SQL Server Profiler

1. Click Start, point to Programs, point to Microsoft SQL Server, and then click Profiler to start SQL Server Profiler.

2. On the File menu, point to New, and then click Trace.

3. Supply connection details, and then click OK.

4. In the Trace Properties dialog box, click the Events tab.

5. In the Selected event classes list, make sure the Audit Login and Audit Logout events appear beneath Security Audit. To make the trace clearer, remove all other events from the list.

6. Click Run to start the trace. You will see Audit Login events when connections are established and Audit Logout events when connections are closed.

To monitor connection pooling with Performance Monitor

1. Click Start, point to Programs, point to Administrative Tools, and then click Performance to start Performance Monitor.

2. Right-click the graph background, and then click Add Counters.

3. In the Performance object drop-down list, click SQL Server: General Statistics.

4. In the list that appears, click User Connections.

5. Click Add, and then click Close.

Managing security

Although database connection pooling improves the overall scalability of your application, it means you can no longer manage security at the database level. This is because, to support connection pooling, the connection strings must be identical. If you need to track database operations on a per-user basis, consider adding a parameter through which you can pass the user identity, and manually log user actions in the database. You need to add this parameter to each operation.
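The per-user auditing approach might look like the following sketch, which assumes a hypothetical stored procedure named UpdateOrder that accepts a @UserID parameter for logging; none of these names come from the guide itself.

```csharp
using System.Data;
using System.Data.SqlClient;

public class AuditedDataAccess
{
    public static void UpdateOrder(string connStr, int orderId, string userId)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlCommand cmd = new SqlCommand("UpdateOrder", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@OrderID", SqlDbType.Int).Value = orderId;
            // Pass the caller's identity so the procedure can log the
            // operation; the pooled connection itself runs as a shared account.
            cmd.Parameters.Add("@UserID", SqlDbType.NVarChar, 50).Value = userId;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```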

Using Windows Authentication

You should use Windows authentication when connecting to SQL Server, because it provides a number of benefits:

1. Security is easier to manage, because you work with a single (Windows) security model rather than the separate SQL Server security model.

2. You avoid embedding user names and passwords in connection strings.

3. User names and passwords are not passed over the network in clear text.

4. Logon security is improved through password expiration periods, minimum lengths, and account lockout after multiple invalid logon requests.

More information

When you use Windows authentication to access SQL Server, use the following guidelines:

Consider the performance tradeoff. Performance tests have shown that it takes longer to open a pooled database connection with Windows authentication than with SQL Server authentication. The .NET Framework version 1.1 has reduced the margin by which SQL Server authentication outperforms Windows authentication, but SQL Server authentication is still faster. However, although the overhead of Windows authentication remains comparatively high, it is negligible in comparison to the time taken to execute a command or stored procedure. As a result, in most cases the security benefits of Windows authentication outweigh the slight drop in performance. Before making a decision, assess the performance requirements of your application.

Avoid impersonation in the middle tier. Windows authentication requires a Windows account for database access. Although it might seem natural to use impersonation in the middle tier, avoid doing so, because it defeats connection pooling and severely harms application scalability.

To address this problem, consider impersonating a limited number of Windows accounts (rather than the authenticated users), with each account representing a particular role.

For example, you can use the following methods:

1. Create two Windows accounts: one for read operations and one for write operations. (Alternatively, you may want separate accounts to mirror application-specific roles. For example, you might use one account for Internet users and another for internal operators and/or administrators.)

2. Map each account to a SQL Server database role, and establish the necessary database permissions for each role.

3. Use application logic in your data access layer to determine which Windows account to impersonate before you perform a database operation.

Note Each account must be a domain account, with Internet Information Services (IIS) and SQL Server in the same domain or in trusted domains. Alternatively, you can create matching accounts (with the same name and password) on each computer.

Use TCP/IP as your network library. SQL Server 7.0 and later support Windows authentication for all network libraries. Use TCP/IP to gain configuration, performance, and scalability benefits. For more information about using TCP/IP, see the Connecting Through Firewalls section in this document.

For general guidance on developing secure ASP.NET and Web applications, see the following Microsoft patterns & practices guides:

Volume I, Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication (http://www.microsoft.com/practices)

Volume II, Improving Web Application Security: Threats and Countermeasures (to be posted at http://www.microsoft.com/practices)

Storing connection strings

To store database connection strings, you have a number of options with varying degrees of flexibility and security. Although hard-coding a connection string in source code offers the best performance, file system caching ensures that the performance degradation associated with storing the string externally in the file system is negligible. The extra flexibility provided by an external connection string, which supports administrator configuration, is preferred in almost all cases.

When you are choosing an approach for connection string storage, the two most important considerations are security and ease of configuration, followed closely by performance.

You can choose among the following locations for storing database connection strings:

In the application configuration file; for example, the web.config of the ASP.NET web application

In a Universal Data Link (UDL) file (supported only by the OLE DB .NET data provider)

In the Windows Registry

In the custom file

In the COM+ catalog, by using object construction strings (for serviced components only)

When you use Windows authentication to access SQL Server, you avoid storing user names and passwords in connection strings. If your security requirements demand more stringent measures, consider storing the connection strings in encrypted format.

For ASP.NET Web applications, storing the connection strings in encrypted format within the Web.config file represents a secure and configurable solution.

Note You can set the Persist Security Info named value to false in the connection string to prevent security-sensitive details, such as the password, from being returned after the connection has been opened.
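A minimal sketch of this keyword in use follows; the SQL login shown is a hypothetical example (Windows authentication remains the recommended approach).

```csharp
using System.Data.SqlClient;

public class PersistSecurityInfoExample
{
    public static SqlConnection Create()
    {
        // With Persist Security Info=false, the password is not returned
        // as part of the ConnectionString property once the connection
        // has been opened.
        return new SqlConnection(
            "Server=(local); Database=Northwind; " +
            "User ID=WebAppAccount; Password=secret; " +
            "Persist Security Info=false");
    }
}
```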

The following subsections discuss how to store connection strings in each location and describe the relative advantages and disadvantages of each approach. This will help you make an informed choice based on your specific application scenario.

Note The Configuration Management Application Block addresses various configuration settings, from database connection strings to complex hierarchical data. For more information, see http://msdn.microsoft.com/practices.

Using an XML application configuration file

You can use the <appSettings> element to store a database connection string in the custom settings section of an application configuration file. This element supports arbitrary key-value pairs, as illustrated in the following fragment:

<configuration>
  <appSettings>
    <add key="DBConnStr"
         value="server=(local);Integrated Security=SSPI;database=Northwind" />
  </appSettings>
</configuration>

Note The <appSettings> element appears under the <configuration> element and not beneath <system.web>.

Advantages

Ease of deployment. The connection string is deployed along with the configuration file through regular .NET xcopy deployment.

Ease of programmatic access. The AppSettings property of the ConfigurationSettings class makes reading the configured database connection string at runtime easy.

Support for dynamic update (ASP.NET only). If an administrator updates the connection string in the Web.config file, the change is picked up the next time the string is accessed, which for a stateless component might be the next time a client uses the component to make a data access request.

Disadvantages

Security. Although the ASP.NET Internet Server Application Programming Interface (ISAPI) dynamic link library (DLL) prevents clients from directly requesting files with the .config extension, you may still want to avoid storing these details in plain text on a front-end Web server and to restrict access with NTFS permissions. For additional security, store the connection string in the configuration file in an encrypted format.

More information

You can retrieve custom application settings by using the static AppSettings property of the System.Configuration.ConfigurationSettings class. The following code fragment illustrates this, assuming the custom key named DBConnStr used in the earlier example:

using System.Configuration;

private string GetDatabaseConnectionString()
{
    return ConfigurationSettings.AppSettings["DBConnStr"];
}

For more information on configuring the .NET Framework application, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpconconfiguringnetFrameworkApplications.asp.

Use UDL files

The OLE DB .NET data provider supports Universal Data Link (UDL) file names in its connection string. You can pass the connection string to the constructor of the OleDbConnection object, or you can set it by using the object's ConnectionString property.

Note: The SQL Server .NET data provider does not support UDL files in its connection string. This approach is therefore available only when you use the OLE DB .NET data provider.

To reference a UDL file from the connection string for the OLE DB provider, use "File Name=name.udl".
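As a brief sketch (the file name pubs.udl is illustrative, as is the assumption that it sits in the application directory), the OLE DB .NET data provider can be pointed at a UDL file like this:

```csharp
using System.Data.OleDb;

class UdlConnectionExample
{
    static void Main()
    {
        // The connection string simply references the UDL file; the
        // provider reads and parses the file each time a connection
        // is opened.
        using (OleDbConnection conn = new OleDbConnection("File Name=pubs.udl"))
        {
            conn.Open();
            // ... use the connection ...
        }
    }
}
```

Because the file is parsed on every open, prefer this option only where UDL files are already part of your administration model.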

Advantages

Standard approach. You may already be managing data access by using UDL files.

Disadvantages

Performance. The connection string, including the UDL file, must be read and parsed each time a connection is opened.

Security. UDL files are stored as plain text. You can secure these files by using NTFS file permissions, but doing so raises the same issues as with .config files.

SqlClient does not support UDL files. This approach is not supported by the SQL Server .NET data provider, which is used to access SQL Server 7.0 and later.

More information

To support administration, make sure that administrators have read/write access to the UDL file and that the identity used to run the application has read access. For ASP.NET Web applications, the worker process runs by default using the System account, although you can override this by using the <processModel> element of the machine-wide configuration file (Machine.config). You can also enable impersonation through the <identity> element of the Web.config file (optionally with a specified account).

For Web applications, make sure you do not place the UDL file in a virtual directory; otherwise, the file could be downloaded over the Web.

For more information about these and other security-related ASP.NET features, see "Authentication in ASP.NET: .NET Security Guidance" at http://msdn.microsoft.com/library/en-us/dnbda/html/authaspdotnet.asp.

Use the Windows registry

You can also store the connection string under a custom key in the Windows registry, although this is discouraged because of deployment issues.

Advantages

Security. Access to selected registry keys can be managed with access control lists (ACLs). For an even higher level of security, consider encrypting the data.

Easy programmatic access. The .NET Framework provides classes for reading strings from the registry.
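For illustration only, the following sketch reads a connection string from a hypothetical custom key; both the key path Software\MyApp and the value name DBConnStr are illustrative, not part of any standard location:

```csharp
using Microsoft.Win32;

class RegistryConnectionString
{
    static string GetConnectionString()
    {
        // Open a hypothetical custom key under HKEY_LOCAL_MACHINE.
        RegistryKey key = Registry.LocalMachine.OpenSubKey(@"Software\MyApp");
        try
        {
            // Read the string value that holds the connection string.
            return (string)key.GetValue("DBConnStr");
        }
        finally
        {
            key.Close();
        }
    }
}
```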

Disadvantages

Deployment. The relevant registry settings must be deployed along with your application, which offsets the advantages of XCOPY deployment to some extent.

Use a custom file

You can use a custom file to store the connection string. However, this technique offers no advantages and is not recommended.

Advantages

None.

Disadvantages

Additional coding. This approach requires additional coding effort and forces you to handle concurrency issues explicitly.

Deployment. The file must be copied along with the other ASP.NET application files. Avoid placing it in the ASP.NET application directory or its subdirectories, to prevent it from being downloaded over the Web.

Use construction arguments and the COM+ catalog

You can store the database connection string in the COM+ catalog and have it passed automatically to your object as an object construction string. COM+ calls the object's Construct method immediately after instantiating the object, supplying the configured construction string.

Note: This approach works only for serviced components. Consider it only if your components use other services, such as distributed transaction support or object pooling.

Advantages

Administration. An administrator can easily configure the connection string by using the Component Services MMC snap-in.

Disadvantages

Security. The COM+ catalog is considered an insecure storage area (although you can restrict access by using COM+ roles), so you should not store connection strings in it in plain text.

Deployment. Entries in the COM+ catalog must be deployed along with your .NET-based application. If you are already using other enterprise services, such as distributed transactions or object pooling, storing the database connection string in the catalog adds no extra deployment overhead, because the catalog must be deployed to support those other services anyway.

Components must be serviced. You can use construction strings only with serviced components. Do not derive your component's class from ServicedComponent (which makes the component serviced) merely to enable construction strings.

Important: Securing the connection string is critical. With SQL authentication, the connection string contains a user name and password. If an attacker exploits a source code vulnerability on the Web server and gains access to the configuration store, the database is open to attack. To prevent this, encrypt the connection string. For a discussion of the different techniques you can use to encrypt plain text connection strings, see "Improving Web Application Security: Threats and Countermeasures" (to be published at http://www.microsoft.com/practices).

More information

For more information about how to configure a .NET class for object construction, see How to Enable Object Construction for a .NET Class in the appendix.

For more information on developing service components, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpconwritingservicedComponents.asp.

For general guidance on developing secure ASP.NET and Web applications, see the following Microsoft patterns & practices guides:

Volume I, "Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication" (http://www.microsoft.com/practices)

Volume II, "Improving Web Application Security: Threats and Countermeasures" (to be published at http://www.microsoft.com/practices)

Connection usage patterns

Regardless of which .NET data provider you use, always follow these principles:

Open the database connection as late as possible.

Keep the connection open for as short a time as possible.

Close the connection as early as possible. The connection is not returned to the pool until it is closed through its Close or Dispose method. Close the connection even if you detect that it has entered the broken state; this ensures that it is returned to the pool and marked invalid. The object pooler periodically scans the pool, looking for objects that have been marked invalid.

To make sure the connection is closed before a method returns, consider one of the approaches illustrated in the following two code examples. The first uses a finally block. The second uses a C# using statement, which guarantees that an object's Dispose method is called.

The following code ensures that the connection is closed within a finally block. Note that this approach works in both Visual Basic .NET and C#, because Visual Basic .NET supports structured exception handling.

public void DoSomeWork()
{
    SqlConnection conn = new SqlConnection(connectionString);
    SqlCommand cmd = new SqlCommand("CommandProc", conn);
    cmd.CommandType = CommandType.StoredProcedure;

    try
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    catch (Exception e)
    {
        // Handle and log error
    }
    finally
    {
        conn.Close();
    }
}

The following code shows the alternative approach, which uses a C# using statement. Note that Visual Basic .NET provides no using statement or equivalent functionality.

public void DoSomeWork()
{
    // using guarantees that Dispose is called on conn,
    // which will close the connection.
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("CommandProc", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

You can also apply this approach to other objects that must be closed before any other task can be performed with the current connection, such as a SqlDataReader or an OleDbDataReader.
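For example, the same using pattern can wrap both the connection and the reader (a sketch only; the command text is illustrative):

```csharp
using (SqlConnection conn = new SqlConnection(connectionString))
{
    SqlCommand cmd = new SqlCommand("SELECT ProductName FROM Products", conn);
    conn.Open();
    // Disposing the reader releases it, and the outer using
    // statement then guarantees that the connection is closed.
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process reader["ProductName"]
        }
    }
}
```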


Error handling

ADO.NET errors are generated and handled through the structured exception handling support that is native to the .NET Framework. As a result, you handle errors within your data access code in the same way you handle them elsewhere in your application, using standard .NET exception handling syntax and techniques.

This section explains how to develop robust data access code and how to handle data access errors. It also provides specific exception handling guidance related to the SQL Server .NET data provider.

.NET exceptions

The .NET data providers translate database-specific error conditions into standard exception types, which you handle in your data access code. The database-specific error details are made available to you through properties of the relevant exception object.

All .NET exception types ultimately derive from the Exception class in the System namespace. Each .NET data provider throws its own provider-specific exception type. For example, the SQL Server .NET data provider throws a SqlException object whenever SQL Server returns an error condition. Similarly, the OLE DB .NET data provider throws exceptions of type OleDbException, which contain the details exposed by the underlying OLE DB provider. Figure 3 shows the .NET data provider exception hierarchy. Note that the OleDbException class derives from ExternalException, the base class for all COM interop exceptions; its ErrorCode property stores the COM HRESULT generated by OLE DB.

Figure 3. .NET data provider exception hierarchy

Catching and handling .NET exceptions

To handle data access exception conditions, place your data access code within a try block and trap any exceptions that occur by using catch blocks with the appropriate filters. For example, when writing data access code that uses the SQL Server .NET data provider, catch exceptions of type SqlException, as the following code shows:

try
{
    // Data access code
}
catch (SqlException sqlex) // more specific
{
}
catch (Exception ex) // less specific
{
}

If you supply more than one catch statement with differing filter criteria, remember to order them from most specific type to least specific. That way, the most specific catch block is executed for any given exception type.

The SqlException class exposes properties that contain details of the exception condition. These include:

The Message property, which contains text describing the error.

The Number property, which contains the error number that uniquely identifies the type of error.

The State property, which contains additional information about the invocation state of the error. This is usually used to indicate a particular occurrence of a specific error condition. For example, if a single stored procedure can generate the same error on more than one line, the state should identify the specific occurrence of the error.

The Errors collection, which contains detailed error information about the errors that SQL Server generated. The Errors collection always contains at least one object of type SqlError.

The following code fragment illustrates how to handle a SQL Server error condition by using the SQL Server .NET data provider:

using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

// Method exposed by a Data Access Layer (DAL) component
public string GetProductName(int productID)
{
    SqlConnection conn = null;
    // Enclose all data access code within a try block
    try
    {
        conn = new SqlConnection(
            "server=(local);Integrated Security=SSPI;database=northwind");
        conn.Open();
        SqlCommand cmd = new SqlCommand("LookupProductName", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@ProductID", productID);
        SqlParameter paramPN =
            cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40);
        paramPN.Direction = ParameterDirection.Output;
        cmd.ExecuteNonQuery();
        // The finally code is executed before the method returns
        return paramPN.Value.ToString();
    }
    catch (SqlException sqlex)
    {
        // Handle data access exception condition
        // Log specific exception details
        LogException(sqlex);
        // Wrap the current exception in a more relevant
        // outer exception and re-throw the new exception
        throw new DALException(
            "Unknown ProductID: " + productID.ToString(), sqlex);
    }
    catch (Exception ex)
    {
        // Handle generic exception condition
        throw ex;
    }
    finally
    {
        if (conn != null) conn.Close(); // Ensures connection is closed
    }
}

// Helper routine that logs SqlException details to the
// Application event log
private void LogException(SqlException sqlex)
{
    EventLog el = new EventLog();
    el.Source = "CustomAppLog";
    string strMessage;
    strMessage = "Exception Number: " + sqlex.Number +
                 " (" + sqlex.Message + ") has occurred";
    el.WriteEntry(strMessage);
    foreach (SqlError sqle in sqlex.Errors)
    {
        strMessage = "Message: " + sqle.Message +
                     " Number: " + sqle.Number +
                     " Procedure: " + sqle.Procedure +
                     " Server: " + sqle.Server +
                     " Source: " + sqle.Source +
                     " State: " + sqle.State +
                     " Severity: " + sqle.Class +
                     " LineNumber: " + sqle.LineNumber;
        el.WriteEntry(strMessage);
    }
}

Within the SqlException catch block, the code first logs the exception details by using the LogException helper function. This function uses a foreach statement to enumerate the provider-specific details within the Errors collection and records the error details in the error log. The code in the catch block then wraps the original exception in an exception of type DALException, which is more meaningful to the callers of the GetProductName method, and uses the throw keyword to propagate the new exception back to the caller.

More information

For a complete list of the members of the SqlException class, see http://msdn.microsoft.com/library/en-us/cpref/html/frlrfsystemDataSqlClientsqlexceptionMemberstopic.asp.

For detailed guidance about logging, wrapping, and propagating .NET exceptions by using various approaches, including custom exceptions, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/exceptdotnet.asp.

Generating errors from stored procedures

Transact-SQL (T-SQL) provides the RAISERROR function (note the spelling), which you can use to generate custom errors and return them to the client. For ADO.NET clients, the SQL Server .NET data provider intercepts these database errors and translates them into SqlError objects.

The simplest way to use the RAISERROR function is to include the message text as the first parameter and then specify severity and state parameters, as shown in the following code fragment.

RAISERROR ('Unknown Product ID: %s', 16, 1, @ProductID)

In this example, a substitution parameter is used to include the current product ID in the error message. The second parameter is the message severity and the third is the message state.

More information

To avoid hard-coding message text, you can add your own messages to the sysmessages table by using the sp_addmessage system stored procedure or SQL Server Enterprise Manager. You can then reference a message by passing its ID to the RAISERROR function. The IDs of the messages you define must be greater than 50,000, as shown in the following code fragment.

RAISERROR (50001, 16, 1, @ProductID)

For more information about the RAISERROR function, look up RAISERROR in the SQL Server Books Online index.

Using appropriate severity levels

Choose error severity levels carefully and be aware of the impact of each level. Severity levels range from 0 to 25 and are used to indicate the type of problem that SQL Server 2000 has encountered. In client code, you can obtain an error's severity by examining the Class property of the SqlError objects within the Errors collection of the SqlException class. Table 1 indicates the impact and meaning of the various severity levels.

Table 1. Error severity levels: impact and meaning

Severity level | Connection closed | Generates SqlException | Meaning
10 and below   | No  | No  | Informational messages; do not necessarily represent error conditions.
11 to 16       | No  | Yes | Errors that the user can correct, for example by retrying the operation with amended input data.
17 to 19       | No  | Yes | Resource or system errors.
20 to 25       | Yes | Yes | Fatal system errors (including hardware errors). The client's connection is terminated.
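The ranges in Table 1 can be expressed as a small helper that classifies the value of a SqlError's Class property (a sketch; the category strings are illustrative, not part of ADO.NET):

```csharp
class SeverityHelper
{
    // Maps a SQL Server severity level (the Class property of a
    // SqlError) to the broad categories described in Table 1.
    public static string Classify(int severity)
    {
        if (severity <= 10) return "informational";
        if (severity <= 16) return "user-correctable error";
        if (severity <= 19) return "resource or system error";
        return "fatal error; connection terminated";
    }
}
```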

Controlling automatic transactions

The SQL Server .NET data provider throws a SqlException for any error with a severity greater than 10. When a component running in an automatic (COM+) transaction detects a SqlException, the component must make sure that it votes to abort the transaction. This might not happen automatically, depending on whether the method is marked with the AutoComplete attribute. For more information about handling SqlExceptions in the context of automatic transactions, see the relevant section later in this document.

Retrieving informational messages

Severity levels of 10 and below are used for informational messages and do not cause a SqlException to be thrown.

To retrieve informational messages:

Create an event handler and subscribe to the InfoMessage event exposed by the SqlConnection object. The event's delegate is shown in the following code fragment.

public delegate void SqlInfoMessageEventHandler(object sender,
                                                SqlInfoMessageEventArgs e);

Message data is available through the SqlInfoMessageEventArgs object passed to your event handler. This object exposes an Errors property that contains a set of SqlError objects, one for each informational message. The following code fragment illustrates how to register an event handler that logs informational messages.

public string GetProductName(int productID)
{
    SqlConnection conn = null;
    try
    {
        conn = new SqlConnection(
            "server=(local);Integrated Security=SSPI;database=northwind");
        // Register a message event handler
        conn.InfoMessage += new SqlInfoMessageEventHandler(MessageEventHandler);
        conn.Open();
        // Set up command object and execute it
        . . .
    }
    catch (SqlException sqlex)
    {
        // Log and handle exception
        . . .
    }
    finally
    {
        if (conn != null) conn.Close();
    }
}

// Message event handler
void MessageEventHandler(object sender, SqlInfoMessageEventArgs e)
{
    foreach (SqlError sqle in e.Errors)
    {
        // Log SqlError properties
        . . .
    }
}


Performance

This section describes some common data access scenarios and, for each one, gives detailed information about how to obtain the best performance and scalability from your ADO.NET data access code. Where appropriate, performance, functionality, and development effort are compared. The section considers the following functional scenarios:

Retrieving a result set and iterating through the retrieved rows.

Retrieving a single row with a specified primary key.

Retrieving a single item from a specified row.

Checking whether a row with a particular primary key exists. This is a variant of the single-item lookup scenario, in which a simple Boolean return value is sufficient.

Retrieving multiple rows

In this scenario, you want to retrieve a tabular set of data and iterate through the retrieved rows to perform an operation. For example, you might retrieve a set of data, work with it in disconnected fashion, and pass it to a client application as an XML document (perhaps through a Web service). Alternatively, you might display the data as an HTML table. To help determine the most appropriate data access approach, decide whether you require the added flexibility of a (disconnected) DataSet object or the raw performance of a SqlDataReader object, which is ideally suited to the data display requirements of business-to-consumer (B2C) Web applications. Figure 4 shows these two basic scenarios.

Note: The SqlDataAdapter used to populate a DataSet internally uses a SqlDataReader to access the data.

Figure 4. Multiple-row data access

Options

You have the following options when retrieving multiple rows from a data source:

Use a SqlDataAdapter object to generate a DataSet or DataTable.

Use a SqlDataReader to provide a read-only, forward-only data stream.

Use an XmlReader to provide a read-only, forward-only stream of XML data.

The choice between SqlDataReader and DataSet/DataTable is fundamentally one of performance versus functionality. The SqlDataReader offers optimum performance; the DataSet provides additional functionality and flexibility.

Data binding

All three of these objects can act as data sources for data-bound controls, although the DataSet and DataTable can act as data sources for a wider variety of controls than the SqlDataReader. This is because the DataSet and DataTable implement IListSource (yielding an IList), whereas the SqlDataReader implements IEnumerable. A number of WinForms controls that support data binding require a data source that implements IList.

This difference stems from the scenarios each object type was designed for. The DataSet is a rich, disconnected structure suited to both Web and desktop (WinForms) applications, while the data reader is optimized for Web applications that require fast, forward-only data access.

Check the data source requirements for the particular control type you want to bind to.

Passing data between application layers

The DataSet provides a relational view of the data that can optionally be manipulated as XML, and it allows a disconnected, cached copy of the data to be passed between application layers and components. The SqlDataReader, however, offers optimum performance because it avoids the performance and memory overhead associated with creating a DataSet. Keep in mind that creating a DataSet object entails creating multiple sub-objects (DataTable, DataRow, and DataColumn objects), together with the collection objects that act as containers for these sub-objects.

Using a DataSet

Use a DataSet populated by a SqlDataAdapter object when:

You require a disconnected, memory-resident cache of data so that you can pass it to other components or layers within your application.

You require an in-memory relational view of the data for XML or non-XML operations.

You are working with data retrieved from multiple data sources (such as multiple databases, tables, or files).

You want to update some or all of the retrieved rows by using the batch update facilities of the SqlDataAdapter.

You want to perform data binding against a control that requires an IList data source.

Note: For more information, see "Designing Data Tier Components and Passing Data Through Tiers" on MSDN at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/boagag.asp.

More information

If you use a SqlDataAdapter to generate a DataSet or DataTable, note the following:

You do not need to explicitly open or close the database connection. The Fill method of the SqlDataAdapter opens the database connection and closes it before returning. If the connection is already open, Fill leaves it open.

If you require the connection for other purposes, consider opening it before calling the Fill method. You can thus avoid unnecessary open/close operations and improve performance.

Although you can repeatedly use the same SqlCommand object to execute the same command multiple times, do not reuse the same SqlCommand object to execute different commands.

For a code example that shows how to use a SqlDataAdapter to populate a DataSet or DataTable, see How to Use SqlDataAdapter to Retrieve Multiple Rows in the appendix.
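As a minimal sketch (the query and table name are illustrative), filling a DataSet looks like this:

```csharp
using System.Data;
using System.Data.SqlClient;

class FillExample
{
    static DataSet GetProducts(string connectionString)
    {
        // Fill opens the connection itself and closes it before
        // returning, because the connection is not already open.
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT ProductID, ProductName FROM Products", connectionString);
        DataSet ds = new DataSet();
        da.Fill(ds, "Products");
        return ds;
    }
}
```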

Using a SqlDataReader

Use a SqlDataReader, obtained by calling the ExecuteReader method of a SqlCommand object, when:

You are dealing with large volumes of data, too much to maintain in a single cache.

You want to reduce the memory footprint of your application.

You want to avoid the object creation overhead associated with a DataSet.

You want to perform data binding with a control that supports an IEnumerable data source.

You wish to streamline and optimize your data access.

You are reading rows containing binary large object (BLOB) columns. You can use the SqlDataReader to pull BLOB data from the database in manageable chunks, instead of pulling all of it at once. For more details about handling BLOB data, see the Handling BLOBs section in this document.

More information

If you use a SqlDataReader, note the following:

The underlying connection to the database remains open and cannot be used for any other purpose while the data reader is active. Call Close on the SqlDataReader as soon as possible.

There can be only one data reader per connection.

You can close the connection explicitly when you finish with the data reader, or you can tie the lifetime of the connection to the SqlDataReader object by passing the CommandBehavior.CloseConnection enumerated value to the ExecuteReader method. This indicates that the connection should be closed when the SqlDataReader is closed.

When accessing data with the reader, use the typed accessor methods (such as GetInt32 and GetString) if you know the column's underlying data type, because they reduce the amount of type conversion required when reading the column data.

To avoid pulling unnecessary data from the server to the client, if you want to close the reader and discard any remaining results, call the Cancel method of the command object before calling Close on the reader. Cancel ensures that the results are discarded on the server and are not pulled unnecessarily to the client. Calling Close on the data reader without first calling Cancel causes the reader to pull the remaining results to empty the data stream.

If you want to obtain output or return values from a stored procedure and you are using the ExecuteReader method of the SqlCommand object, you must call Close on the reader before the output and return values become available.

For a code example that shows how to use a SqlDataReader, see How to Use SqlDataReader to Retrieve Multiple Rows in the appendix.
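The points above can be combined into a short sketch (the query and column layout are illustrative) that uses CommandBehavior.CloseConnection together with the typed accessors:

```csharp
using System.Data;
using System.Data.SqlClient;

class ReaderExample
{
    static void ListProducts(string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        SqlCommand cmd = new SqlCommand(
            "SELECT ProductID, ProductName FROM Products", conn);
        conn.Open();
        // CloseConnection ties the connection's lifetime to the
        // reader: closing the reader also closes the connection.
        SqlDataReader reader =
            cmd.ExecuteReader(CommandBehavior.CloseConnection);
        try
        {
            while (reader.Read())
            {
                // Typed accessors avoid unnecessary type conversions.
                int id = reader.GetInt32(0);
                string name = reader.GetString(1);
            }
        }
        finally
        {
            reader.Close(); // also closes conn
        }
    }
}
```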

Using an XmlReader

Use an XmlReader, obtained by calling the ExecuteXmlReader method of a SqlCommand object, when:

You want to retrieve the data as XML, but you do not want to incur the performance overhead of creating a DataSet and you do not require a disconnected cache of the data.

You want to exploit the functionality of the SQL Server 2000 FOR XML clause, which lets you retrieve XML fragments (that is, XML documents without a root element) from the database in a flexible manner. For example, this approach lets you specify precise element names, choose between element-centric and attribute-centric schemas, control whether a schema is returned with the XML data, and so on.

More information

If you use an XmlReader, note the following:

The connection must remain open while you read data from the XmlReader. The ExecuteXmlReader method of the SqlCommand object does not currently support the CommandBehavior.CloseConnection enumerated value, so you must explicitly close the connection when you finish with the reader.

For a code example that shows how to use an XmlReader, see How to Use XmlReader to Retrieve Multiple Rows in the appendix.
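A minimal sketch (the FOR XML query is illustrative) showing the explicit connection close required with ExecuteXmlReader:

```csharp
using System.Data.SqlClient;
using System.Xml;

class XmlReaderExample
{
    static void ReadProductsXml(string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        SqlCommand cmd = new SqlCommand(
            "SELECT ProductID, ProductName FROM Products FOR XML AUTO", conn);
        conn.Open();
        XmlReader xr = cmd.ExecuteXmlReader();
        try
        {
            while (xr.Read())
            {
                // process the XML fragment
            }
        }
        finally
        {
            xr.Close();
            conn.Close(); // CloseConnection is not supported here,
                          // so close the connection explicitly
        }
    }
}
```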

Retrieving a single row

In this scenario, you want to retrieve a single row of data containing specified columns from a data source. For example, you have a customer ID and want to look up the related customer details, or you have a product ID and want to retrieve the product information.

Options

If you want to perform data binding with a single row retrieved from the data source, you can use a SqlDataAdapter to populate a DataSet or DataTable in the same way as in the multiple-row scenario described earlier. However, unless you specifically require DataSet/DataTable functionality, avoid creating these objects.

If you need to retrieve a single row, use one of the following options:

Use stored procedure output parameters.

Use a SqlDataReader object.

Both options avoid the unnecessary overhead of creating a result set on the server and a DataSet on the client. The relative performance of each approach depends on the level of stress and on whether database connection pooling is enabled. With connection pooling enabled, performance tests show that the stored procedure approach outperforms the SqlDataReader approach under high-stress conditions (more than 200 simultaneous connections).

Using stored procedure output parameters

Use stored procedure output parameters when you want to retrieve a single row from a multi-tier Web application in which connection pooling is enabled.

More information

For a code example that shows how to use stored procedure output parameters, see How to Use Stored Procedure Output Parameters to Retrieve a Single Row in the appendix.
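As a sketch (the stored procedure name GetProductDetails and its parameter names and types are illustrative), the output-parameter approach looks like this:

```csharp
using System.Data;
using System.Data.SqlClient;

class OutputParameterExample
{
    static string GetProductName(string connectionString, int productID)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand("GetProductDetails", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@ProductID", SqlDbType.Int).Value = productID;
            SqlParameter name =
                cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40);
            name.Direction = ParameterDirection.Output;
            conn.Open();
            // No result set is created on the server; the single
            // value comes back in the output parameter.
            cmd.ExecuteNonQuery();
            return (string)name.Value;
        }
    }
}
```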

Using a SqlDataReader

Use a SqlDataReader when:

You require metadata in addition to the data values. You can use the GetSchemaTable method of the data reader to obtain column metadata.

You are not using connection pooling. With pooling disabled, the SqlDataReader is a good choice under all stress conditions; performance tests have shown it to outperform the stored procedure approach by around 20 percent at 200 simultaneous browser connections.

More information

If you use a SqlDataReader, note the following:

If you know the query returns only a single row, use the CommandBehavior.SingleRow enumerated value when calling the ExecuteReader method of the SqlCommand object. Some providers, such as the OLE DB .NET data provider, use this hint to optimize performance; for example, that provider performs binding by using the IRow interface (if it is available) rather than the more expensive IRowset. This hint has no effect with the SQL Server .NET data provider.

If your SQL Server command contains output parameters or return values, they are not available until the DataReader is closed.

When using a SqlDataReader object, always retrieve output values through the typed accessor methods of the SqlDataReader object, such as GetString and GetDecimal. This avoids unnecessary type conversions.

The .NET Framework 1.1 includes an additional DataReader property called HasRows, which lets you determine whether the reader has returned any rows before you read from it.

For a code example that shows how to use a SqlDataReader object to retrieve a single row, see How to Use SqlDataReader to Retrieve a Single Row in the appendix.
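A sketch (the query is illustrative) that combines the SingleRow hint with a typed accessor:

```csharp
using System.Data;
using System.Data.SqlClient;

class SingleRowExample
{
    static string GetProductName(string connectionString, int productID)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT ProductName FROM Products WHERE ProductID = @ProductID",
                conn);
            cmd.Parameters.Add("@ProductID", SqlDbType.Int).Value = productID;
            conn.Open();
            // SingleRow tells the provider that at most one row is expected.
            using (SqlDataReader reader =
                       cmd.ExecuteReader(CommandBehavior.SingleRow))
            {
                if (reader.Read())
                    return reader.GetString(0);
                return null;
            }
        }
    }
}
```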

Retrieving a single item

In this scenario, you want to retrieve a single item of data. For example, you might want to look up a single product name (given its product ID) or a single customer credit rating (given a customer name). In such scenarios, you usually do not want to incur the overhead of a DataSet, or even a DataTable, just to retrieve a single item.

You might also simply want to check whether a particular row exists in the database. For example, when a new user registers on a Web site, you need to check whether the chosen user name already exists. This is a special case of the single-item lookup in which a simple Boolean return value is sufficient.

Options

When you retrieve a single data item from the data source, consider the following options:

Use the ExecuteScalar method of the SqlCommand object with a stored procedure.

Use a stored procedure output or return parameter.

Use the SqlDataReader object.

The ExecuteScalar method returns the data item directly, because it is designed for queries that return only a single value. It requires less code than either the stored procedure output parameter approach or the SqlDataReader approach.

From a performance standpoint, you should use a stored procedure output or return parameter, because tests indicate that the stored procedure approach delivers consistent performance under both low and high stress conditions, from fewer than 100 simultaneous browser connections up to 200.
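As a concrete sketch of the simplest option, the following shows ExecuteScalar used both to fetch a single value and to answer the "does this row exist" question from the registration scenario above. The table, column, and parameter names are hypothetical.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ExecuteScalarExample
{
    // Retrieve a single product name, given its ID.
    static string GetProductName(string connectionString, int productId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT ProductName FROM Products WHERE ProductID = @ProductID",
                conn);
            cmd.Parameters.Add("@ProductID", SqlDbType.Int).Value = productId;
            conn.Open();
            // ExecuteScalar returns the first column of the first row,
            // or null if the query returns no rows.
            object result = cmd.ExecuteScalar();
            return (result == null) ? null : (string)result;
        }
    }

    // Check whether a user name is already taken (Boolean result).
    static bool UserNameExists(string connectionString, string userName)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Users WHERE UserName = @UserName", conn);
            cmd.Parameters.Add("@UserName", SqlDbType.NVarChar, 50).Value = userName;
            conn.Open();
            // COUNT(*) always returns one row, so the cast is safe.
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}
```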

More information

When retrieving a single item, note the following:

If the query normally returns multiple columns and/or rows, executing it through ExecuteScalar returns only the first column of the first row.

For a code example that shows how to use ExecuteScalar, see How to Use ExecuteScalar to Retrieve a Single Item in the appendix.

For a code example that shows how to use a stored procedure output or return parameter to retrieve a single item, see How to Use a Stored Procedure Output or Return Parameter to Retrieve a Single Item in the appendix.

For a code example that shows how to use the SqlDataReader object to retrieve a single item, see How to Use SqlDataReader to Retrieve a Single Item in the appendix.

Back to top

Connection via a firewall

You will often need to configure your Internet application to connect to SQL Server through a firewall. For example, a key architectural component of many Web applications and their firewalls is the perimeter network (also known as the DMZ, or demilitarized zone), which is used to isolate front-end Web servers from the internal network. Connecting to SQL Server through a firewall requires specific configuration of the firewall, client, and server. SQL Server provides the Client Network Utility and Server Network Utility programs to help with this configuration.

Select the network library

Connecting through a firewall is simplified by using the SQL Server TCP/IP network library. This is the default for SQL Server 2000 installations. If you are using an earlier version of SQL Server, make sure that TCP/IP is configured as the default network library on both the client and the server, by using the Client Network Utility and Server Network Utility.

In addition to the configuration benefits, using the TCP/IP library also means that you:

Benefit from improved performance when handling large amounts of data.

Avoid security issues associated with named pipes.

You must configure both the client and server computers to use TCP/IP. Because most firewalls restrict the set of ports over which they allow traffic, you must also give careful thought to the port numbers that SQL Server uses.

Server

The default instance of SQL Server listens on port 1433. UDP port 1434 is also used, so that SQL clients can locate other SQL servers on the network. Named instances of SQL Server 2000, however, dynamically allocate a port number when they start. Network administrators do not want to open a range of port numbers on the firewall; therefore, when using a named instance of SQL Server behind a firewall, use the Server Network Utility to configure the instance to listen on a specific port number. The administrator can then configure the firewall to allow traffic to the specific IP address and port number on which the server instance is listening.

Note The source port used by the client network library is dynamically allocated in the range 1024-5000. This is standard practice for TCP/IP client applications, but it means that your firewall must allow traffic from any port within this range. For more information about the ports that SQL Server uses, see Microsoft Knowledge Base article 287932, "INF: TCP Ports Needed for Communication to SQL Server Through a Firewall".

Dynamic discovery of named instances

If you change the default port on which SQL Server listens, configure the client to connect to that port. For more information, see the Configure the client section of this document.

If the port number of the default instance of SQL Server 2000 is changed but the client is not updated, connections will fail. When there are multiple SQL Server instances, the latest version of the MDAC data access stack (2.6) uses dynamic discovery, negotiating over User Datagram Protocol (UDP) port 1434 to locate the named instances. Although this may work in a development environment, it is unlikely to work in a production environment, because firewalls typically block the UDP negotiation traffic.

To avoid this problem, always configure the client to connect to the configured target port number.

Configure the client

You should configure the client to connect to SQL Server using the TCP/IP network library, and ensure that the client library uses the correct destination port number.

Using the TCP/IP network library

You can configure the client by using the SQL Server Client Network Utility. In some installations, this utility may not be present on the client computer (for example, on your Web server). In this case, you can do one of the following:

Specify the network library by supplying the "Network Library=dbmssocn" name-value pair in the connection string. The string "dbmssocn" identifies the TCP/IP (sockets) library.

Note When you use the SQL Server .NET data provider, the network library setting defaults to "dbmssocn".

Modify the registry on the client computer to make TCP/IP the default library. For more information about configuring the SQL Server network library, see "HOWTO: Change SQL Server Default Network Library" (Q250550).

Specifying the port

If your SQL Server instance is configured to listen on a port other than the default 1433, you can specify the port number to connect to in one of the following ways:

Use the client network utility.

Specify the port number along with the "Server" or "Data Source" name-value pair supplied to the connection string. Use a string in the following format:

"Data Source=servername,portnumber"

Note The server name may be an IP address or a Domain Name System (DNS) name. For best performance, use the IP address to avoid the DNS lookup.
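Putting these client settings together, a connection string that forces the TCP/IP library and a non-default port might look like the following sketch; the IP address, port number, database name, and security settings are placeholders.

```csharp
using System.Data.SqlClient;

class FirewallConnectionExample
{
    static SqlConnection Create()
    {
        // "Network Library=dbmssocn" forces the TCP/IP (sockets) library;
        // "10.1.2.3,2433" is a hypothetical IP address and listening port.
        string connStr =
            "Network Library=dbmssocn;" +
            "Data Source=10.1.2.3,2433;" +
            "Initial Catalog=Northwind;" +
            "Integrated Security=SSPI;";
        return new SqlConnection(connStr);
    }
}
```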

Distributed transaction

If you have developed serviced components that use COM+ distributed transactions and the Microsoft Distributed Transaction Coordinator (DTC), you may also need to configure your firewall to allow DTC traffic to flow between separate DTC instances, and between the DTC and resource managers such as SQL Server.

For more information about opening ports for the DTC, see "INFO: Configuring Microsoft Distributed Transaction Coordinator (DTC) to Work Through a Firewall".

Back to top

Handling BLOBs

In addition to more traditional character and numeric data, many applications today must handle data formats such as graphics and sound, and even more complex formats such as video. There are many different types of graphic, sound, and video formats; from a storage standpoint, however, they can all be thought of as blocks of binary data, usually referred to as binary large objects, or BLOBs.

SQL Server provides the binary, varbinary, and image data types to store BLOBs. Despite the name, BLOB data can also refer to text-based data. For example, you might want to store an arbitrarily long notes field associated with a particular row. SQL Server provides the text and ntext data types for this purpose.

In general, use the varbinary data type for binary data of less than 8 KB, and the image data type for binary data that exceeds this size. Table 2 describes the main characteristics of each data type.

Table 2. Data type characteristics

binary: 1 to 8,000 bytes. Storage size is the specified length plus 4 bytes. Fixed-length binary data.
varbinary: 1 to 8,000 bytes. Storage size is the actual length of the supplied data plus 4 bytes. Variable-length binary data.
image: 0 to 2 GB. Large, variable-length binary data.
text: 0 to 2 GB. Character data.
ntext: 0 to 2 GB. Unicode character data.

Note Microsoft SQL Server 2000 can store small to medium-sized text, ntext, and image values in the data row. This feature is best suited to tables in which the data in the text, ntext, and image columns is usually read or written as a unit, and in which most statements that reference the table use the text, ntext, and image data. For more information, see the "Text in Row" topic in SQL Server Books Online.

Where is BLOB data stored?

SQL Server 7.0 and later versions offer improved performance for BLOB data stored in the database. One reason is that the database page size has increased to 8 KB. As a result, text or image data of less than 8 KB no longer needs to be stored in a separate binary tree page structure; it can be stored in the row itself. This means that reading and writing text, ntext, or image data can be as fast as reading and writing character and binary strings. When the data is larger than 8 KB, a pointer is maintained within the row, and the data itself is held in a binary tree structure of separate data pages, with an inevitable impact on performance.

For more information about storing text, ntext, and image data in the row, see the "Using Text and Image Data" topic in SQL Server Books Online.

A common alternative for handling BLOB data is to store the BLOBs in the file system and store a pointer (preferably a Uniform Resource Locator [URL] link) in a database column to reference the appropriate file. For versions earlier than SQL Server 7.0, storing BLOB data in the file system, outside the database, improved performance.

However, improved BLOB support in SQL Server 2000, together with ADO.NET support for reading and writing BLOB data, makes storing BLOB data in the database a feasible approach.

Advantages of storing BLOB data in the database

Storing BLOB data in the database offers a number of advantages:

It is easier to keep the BLOB data synchronized with the remaining items in the row.

BLOB data is backed up with the database. Having a single storage system also simplifies administration.

BLOB data can be accessed through XML support in SQL Server 2000, which can return the data in an XML stream.

You can perform SQL Server Full-Text Search (FTS) operations against columns that contain fixed-length or variable-length character (including Unicode) data. You can also perform FTS operations against formatted text-based data contained within image fields, such as Microsoft Word or Microsoft Excel documents.

Disadvantages of storing BLOB data in the database

Consider carefully which resources might be better stored in the file system rather than in the database. Images that are typically referenced through HTTP HREF links are a good example, because:

Retrieving an image from the database incurs greater overhead than using the file system.

Disk storage on a database SAN is typically more expensive than disk storage on a Web farm.

Note You can often eliminate the need to store images, movies, or even Microsoft Office documents in the database. The metadata can be indexed in the database and can include pointers to the resources stored in the file system.

Write BLOB data to the database

The following code shows how to use ADO.NET to write binary data obtained from a file to an image field in SQL Server.

public void StorePicture( string filename )
{
   // Read the file into a byte array
   // (declared outside the using block so it stays in scope)
   byte[] imageData;
   using ( FileStream fs = new FileStream( filename, FileMode.Open, FileAccess.Read ) )
   {
      imageData = new byte[fs.Length];
      fs.Read( imageData, 0, (int)fs.Length );
   }

   using ( SqlConnection conn = new SqlConnection( connectionString ) )
   {
      SqlCommand cmd = new SqlCommand( "StorePicture", conn );
      cmd.CommandType = CommandType.StoredProcedure;
      cmd.Parameters.Add( "@filename", filename );
      cmd.Parameters["@filename"].Direction = ParameterDirection.Input;
      cmd.Parameters.Add( "@blobdata", SqlDbType.Image );
      cmd.Parameters["@blobdata"].Direction = ParameterDirection.Input;
      // Store the byte array within the image field
      cmd.Parameters["@blobdata"].Value = imageData;
      conn.Open();
      cmd.ExecuteNonQuery();
   }
}

Read BLOB data from the database

When reading rows that contain BLOB data, create the SqlDataReader object by calling the ExecuteReader method with the CommandBehavior.SequentialAccess enumerated value. Without this value, the reader pulls data from the server to the client one row at a time; if the row contains a BLOB column, this can consume a large amount of memory. With the enumerated value, you have more precise control, because the BLOB data is pulled only when it is referenced (for example, through the GetBytes method, which lets you control the number of bytes read). This is illustrated in the following code fragment.

// Assume a previously established command and connection
// The command SELECTs the image column from the table
conn.Open();
using ( SqlDataReader reader = cmd.ExecuteReader( CommandBehavior.SequentialAccess ) )
{
   reader.Read();
   // Get the size of the image data by passing null as the byte array parameter
   long bytesize = reader.GetBytes( 0, 0, null, 0, 0 );
   // Allocate a byte array to hold the image data
   byte[] imageData = new byte[bytesize];
   long bytesread = 0;
   int curpos = 0;
   while ( bytesread < bytesize )
   {
      // chunkSize is an arbitrary, application-defined value
      bytesread += reader.GetBytes( 0, curpos, imageData, curpos, chunkSize );
      curpos += chunkSize;
   }
}
// The byte array 'imageData' now contains the BLOB from the database

Note When you use CommandBehavior.SequentialAccess, you must access the columns in strict order. For example, if the BLOB data is in the third column, and you also need the data from the first and second columns, you must read columns 1 and 2 before reading column 3.

Back to top

Execute database updates via data set

With the introduction of ADO.NET, the architecture for performing database updates has changed significantly. ADO.NET is designed to make it easier to develop multi-tier applications that scale to large databases and large numbers of clients. This has some important consequences, in particular:

ADO.NET applications typically separate application logic on the client from the business and data integrity logic on the middle tier and in the database tier. In practice, this means a typical application has more batch-like or transactional behavior, with fewer (but larger) interactions between the client application and the database.

ADO.NET applications have greater control over how updates are processed than ADO and its predecessors provided.

ADO.NET lets applications propagate changes through stored procedures in the back-end database, rather than manipulating rows in database tables directly. This is the recommended implementation strategy.

Update usage pattern

The process of updating data from a DataSet by using ADO.NET can be summarized as follows:

1. Create a DataAdapter object and fill a DataSet object with the results of a database query. The data is cached locally.

2. Make changes to the local DataSet object. These changes can include update, delete, and insert operations on one or more tables in the locally cached DataSet.

3. Initialize the DataAdapter's update-related properties. This step configures exactly how updates, deletes, and inserts are processed. Because there are several ways to handle this, recommended approaches are discussed in "Initialize DataAdapter for updates" below.

4. Call the DataAdapter Update method to submit the pending changes. Each changed record in the local cache is processed. (Records that have not changed are automatically ignored by the Update method.)

5. Handle exceptions thrown by the DataAdapter Update method. Exceptions are raised when a requested change cannot be made in the database.

(There is another way to perform updates: you can execute a SQL update query directly with the ExecuteNonQuery method. This is appropriate when you want to programmatically update specific rows without using a DataSet object.)
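The five steps above might look like the following sketch, which uses a hypothetical table name and an already-initialized connection string. A CommandBuilder supplies the update logic here only for brevity; as discussed below, providing your own update commands (ideally stored procedures) is the recommended approach.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DataSetUpdateExample
{
    static void UpdateCustomers(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            // 1. Fill the DataSet and cache the data locally.
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");

            // 2. Make changes to the local DataSet.
            ds.Tables["Customers"].Rows[0]["CompanyName"] = "New Name Ltd.";

            // 3. Initialize the adapter's update logic (CommandBuilder
            //    shown for brevity only).
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            try
            {
                // 4. Submit the pending changes; unchanged rows are skipped.
                adapter.Update(ds, "Customers");
            }
            catch (DBConcurrencyException ex)
            {
                // 5. Handle rows that could not be updated.
                Console.WriteLine("Update failed: {0}", ex.Message);
            }
        }
    }
}
```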

Initialize DataAdapter for updates

In ADO.NET, you must add your own code that performs the database updates to your DataAdapter object. There are three ways to do this:

You can provide your own update logic.

You can generate update logic using the Data Adapter Configuration Wizard.

You can generate update logic using the CommandBuilder object.

It is recommended that you provide your own update logic. To save time, you can use the Data Adapter Configuration Wizard, but if you do, try not to generate the update logic at run time. Avoid relying on CommandBuilder objects unless it is absolutely necessary, because they degrade performance and you have no control over the update logic they generate. In addition, a CommandBuilder cannot help you submit updates by using stored procedures.

For applications that generate data access logic dynamically, such as reporting or data extraction tools, CommandBuilder can be useful: it relieves those tools of the need to write their own code generation module.

Use stored procedures

By using stored procedures, database administrators can apply tighter security than with dynamic SQL, as well as more thorough data integrity checks. For example, a stored procedure can add an entry to an audit log in addition to performing the requested update. Because stored procedures execute inside the database, they can also provide the best performance. Finally, because stored procedures provide a layer of isolation between the database structure and the application, they are easier to maintain.

Because an ADO.NET application that uses stored procedures offers these benefits, and is no more difficult to implement than one that changes the database directly, this approach is recommended in almost every case. The exception is when you must work with multiple back-end databases, or with a desktop database (such as Microsoft Access) that does not support stored procedures. In those cases, use query-based updates.

Managing concurrency

The DataSet object is designed to encourage the use of optimistic concurrency for long-running activities, such as when you are working with data remotely or when users interact with the data. When submitting updates from a DataSet, there are four main approaches to managing optimistic concurrency:

Include only the primary key columns

Include all columns in the WHERE clause

Include unique key columns and the timestamp column

Include unique key columns and modified columns

Note that the last three approaches maintain data integrity; the first does not.

Include only the primary key columns

This option creates a "last writer wins" situation, in which the final update overrides all previous changes. CommandBuilder does not support this option, but the Data Adapter Configuration Wizard does: to use it, go to the Advanced Options tab and clear the Use Concurrency check box.

This approach is not a recommended implementation strategy, because it allows users to unintentionally overwrite other users' changes. Compromising the integrity of another user's update is never desirable. (This technique is suitable only for single-user databases.)

Include all columns in the WHERE clause

Using this option prevents you from overwriting changes made by other users in the period between your code fetching the row and your code submitting changes to it. This option is the default behavior of both the SQL code generated by the Data Adapter Configuration Wizard and the SqlCommandBuilder.

This approach is not a recommended implementation strategy, for the following reasons:

If columns are added to the table, the query must be modified.

Databases generally do not allow you to compare two BLOBs, because they are too large for such comparisons to be efficient. (Tools such as CommandBuilder and the Data Adapter Configuration Wizard should therefore not include BLOB columns in the WHERE clause.)

Comparing every column in the table against every column in the updated row incurs additional overhead.

Include unique key columns and the timestamp column

With this option, the database updates the timestamp column to a unique value after each update of the row. (You must provide a timestamp column in your table.) Currently, this option is not supported by either CommandBuilder or the Data Adapter Configuration Wizard.

Include unique key columns and modified columns

This option is usually not recommended, because errors can occur if your application logic depends on data fields that are out of date, even if it does not update those fields. For example, if user A changes an order quantity and user B changes the unit price, an incorrect order total (quantity multiplied by price) may be calculated.

Correctly updating null fields

When fields in the database contain no data values, it is often convenient to think of these empty fields as holding a special null value. However, this mental model can be the source of programming errors, because database standards require special handling for null values.

The core issue with null fields is that the ordinary SQL = operator always returns false if either or both of its operands are null. In a SQL query, the IS NULL operator is the only correct way to check for a null field.

If your application manages concurrency by specifying a WHERE clause, as described earlier, you must include an explicit IS NULL expression wherever a field might be null. For example, the following query always fails if OldLastName is null:

SET LastName = @NewLastName
WHERE StudentID = @StudentID AND LastName = @OldLastName

You should rewrite the query as follows:

SET LastName = @NewLastName
WHERE (StudentID = @StudentID) AND
      ((LastName = @OldLastName) OR
       (@OldLastName IS NULL AND LastName IS NULL))

A good way to learn how to write update logic of this kind is to study the output generated by the CommandBuilder tool.
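On the client side, a null-safe WHERE clause of this kind requires passing database nulls through parameters. The following sketch (with hypothetical table, column, and variable names, and an already-open connection) shows DBNull.Value being assigned when the original value was null:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class NullSafeUpdateExample
{
    // Assumes conn is an open SqlConnection.
    static void UpdateLastName(SqlConnection conn, int studentId,
                               string oldLastName, string newLastName)
    {
        SqlCommand cmd = new SqlCommand(
            "UPDATE Students SET LastName = @NewLastName " +
            "WHERE (StudentID = @StudentID) AND " +
            "((LastName = @OldLastName) OR " +
            " (@OldLastName IS NULL AND LastName IS NULL))", conn);

        cmd.Parameters.Add("@StudentID", SqlDbType.Int).Value = studentId;
        cmd.Parameters.Add("@NewLastName", SqlDbType.NVarChar, 40).Value =
            newLastName;
        // Pass DBNull.Value, not a C# null, when the original value was null.
        cmd.Parameters.Add("@OldLastName", SqlDbType.NVarChar, 40).Value =
            (oldLastName == null) ? (object)DBNull.Value : oldLastName;

        int rowsAffected = cmd.ExecuteNonQuery();
        if (rowsAffected == 0)
        {
            // Zero rows updated indicates a concurrency violation:
            // another user changed the row first.
            Console.WriteLine("Row was changed by another user.");
        }
    }
}
```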

More information

For a complete discussion of database updates, see chapters 11 and 12 of David Sceppa's Microsoft ADO.NET (Core Reference) (Microsoft Press, 2002).

Back to top

Use strongly typed DataSet objects

Strongly typed DataSet objects present database tables and columns as objects and properties. Access is by name, rather than by indexing into a collection. This means you can distinguish strongly typed from untyped DataSet objects by the way fields are accessed:

string n1 = myDataSet.Tables["Students"].Rows[0]["StudentName"]; // untyped
string n2 = myDataSet.Students[0].StudentName;                   // strongly typed

Strongly typed DataSet objects offer the following benefits:

The code that accesses the data is more readable and more concise.

The IntelliSense feature in the Visual Studio .NET Code Editor can automatically complete lines of code as you type.

The compiler can catch type mismatch errors in strongly typed DataSets. Detecting type errors at compile time is better than detecting them at run time.

When to use strongly typed DataSets

Strongly typed DataSets are useful because they make application development easier and less error prone. They are especially valuable on the clients of multi-tier applications, where the focus is on the graphical user interface and on data validation that requires access to many fields.

However, strongly typed DataSets can become a burden if the database structure changes, for example when field or table names are modified. In that case, the typed DataSet class must be regenerated and all dependent classes must be modified. Strongly typed and untyped approaches can be used in the same application; for example, some developers use strongly typed DataSets on the client and untyped DataSets on the server. The Merge method of a strongly typed DataSet can be used to import data from an untyped DataSet.

Generate a DataSet class

Both the .NET Framework SDK and Visual Studio .NET provide utilities that help you generate the necessary DataSet subclasses. The .NET Framework SDK approach involves using a command-line tool together with code that you write. The Visual Studio .NET approach, by contrast, relies on the Visual Studio .NET development environment and does not require you to open a command window.

However you generate the DataSet class, you must deploy the new class to every tier that uses this typed DataSet. (This situation is not common, but it is necessary if you pass the typed DataSet across multiple tiers by using remoting.)

Use the .NET Framework utility

The .NET Framework SDK contains a command-line utility called the XML Schema Definition tool, which helps you generate class files based on an XML schema (.xsd) file. You can use this utility, together with the WriteXmlSchema method of the DataSet object, to convert an untyped DataSet into a strongly typed one.

The following command illustrates how to generate a class file from an XML schema file. Open a command window and type the following:

c:\> xsd mynewclass.xsd /d

The first argument in this command is the path to the XML schema file. The second indicates that the class to be created should derive from the DataSet class. By default, the tool generates a Visual C# .NET class file, but it can also generate a Visual Basic .NET class file if you supply the appropriate option. To list the available options for this tool, type the following:

XSD /?

After creating the new class file, add it to your project. You can now create instances of the strongly typed DataSet class, as shown in this Visual C# .NET code fragment:

MyNewClass ds = new MyNewClass();

Use Visual Studio .NET

To generate a strongly typed DataSet in Visual Studio .NET, right-click the Windows Form, and then click Generate Dataset. This creates an .xsd file and a class file and adds both to your project. Before doing this, make sure one or more DataAdapters have been added to the Windows Form. Note that the class file is hidden; to view it, click the Show All Files button on the Solution Explorer window toolbar. The class file is associated with the .xsd file.

To add a relationship to a strongly typed DataSet, open the XML Schema Designer by double-clicking the schema file in the Solution Explorer window, and then right-click the table to which you want to add the constraint. On the shortcut menu, click Add New Relation.

Another way to generate a strongly typed DataSet in Visual Studio .NET is to right-click the project in Solution Explorer, select Add Files, and then select DataSet. A new .xsd file is created. You can then use Server Explorer to connect to the database and drag tables onto the .xsd file.

Back to top

Handling null data fields

The following points will help you work correctly with null field values in the .NET data architecture:

Always use the System.DBNull class to set a field value to null. Do not use the null value provided by C# or Visual Basic .NET. For example:

rowStudents["Nickname"] = DBNull.Value; // Correct!

Strongly typed DataSets contain two additional methods per DataRow for each column: one to check whether the column value is null, and one to set the column value to null. The following code fragment shows both methods:

if ( tds.rowStudent[0].IsPhoneNoNull() ) { ... }
tds.rowStudent[0].SetPhoneNoNull();

Always test for database null values by using the IsNull method of the DataRow class (or its strongly typed equivalent, shown in the previous bullet). This is the only supported way to test for database null values.

If a data field may contain a null value, be sure to test for it before using the value in a context that requires a non-null value. A typical example is a data field holding a possibly null integer value; note that the .NET runtime int data type cannot hold a null value. For example:

int i = (int)rowStudent["ZipCode"]; // throws an exception if null!

Use the nullValue annotation in the .xsd file of a strongly typed DataSet to configure how database null values are mapped. By default, an exception is thrown; for finer-grained control, however, you can configure the class to substitute a specified value (such as String.Empty) for the null.

Back to top

Transaction processing

Almost all business-oriented applications that update data sources require transaction support. Transactions are used to guarantee the integrity of the system state contained in one or more data sources, by providing the four basic guarantees summarized by the acronym ACID: atomicity, consistency, isolation, and durability.

For example, consider a Web-based retail application that processes purchase orders. Each order requires three distinct operations that involve three database updates:

The inventory level must be reduced by the quantity ordered.

The customer's credit must be debited by the purchase amount.

The new order must be added to the order database.

It is critical that these three distinct operations be performed as a unit and in an atomic fashion. They must all succeed or all fail; anything in between compromises data integrity. Transactions provide exactly this guarantee, among others.

For more information on the basic principles of transactions, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpcontransactionProcessingFundamentals.asp.

There are many approaches available for incorporating transaction management into your data access code. Each approach fits into one of two basic programming models:

Manual transactions. You write code that uses the transaction support features of either ADO.NET or Transact-SQL directly in your component code or in stored procedures, respectively.

Automatic (COM+) transactions. You add declarative attributes to your .NET classes to specify the transactional requirements of your objects at run time. This model allows you to easily configure multiple components to perform work within the same transaction.

Both techniques can be used to perform local transactions (that is, transactions performed against a single resource manager, such as SQL Server 2000) or distributed transactions (that is, transactions performed against multiple resource managers located on remote computers), although the automatic transaction model greatly simplifies distributed transaction processing.

You may be tempted to use automatic (COM+) transactions to benefit from the easier programming model. This benefit is particularly apparent in systems where many components perform database updates. In many scenarios, however, you should avoid the additional overhead and the performance hit that this transaction model brings.

This section provides guidance to help you choose the most appropriate model for your particular application scenario.

Choosing a transaction model

Before choosing a transaction model, you should consider whether you need transactions at all. Transactions are the single most expensive resource consumed by server applications, and unnecessary use reduces scalability. Follow these guidelines to govern the use of transactions:

Perform transactions only when an operation requires locks across a set of operations and must enforce the ACID rules.

Keep transactions as short as possible, to minimize the time that database locks are held.

Never put the client in control of a transaction's lifetime.

Do not use a transaction for an individual SQL statement; SQL Server automatically runs each statement as an individual transaction.

Automatic transactions vs. manual transactions

Although the programming model of automatic transaction processing is simplified to some extent, in particular when the database update is performed in multiple components, it is always a lot of faster manual execution of local transactions because they do not need to interact with Microsoft DTC. This is true even if you want to use an atomic transaction for a single local resource manager (such as SQL Server) (although performance loss) is reduced, the manual local transaction process can avoid any unnecessary process communication with DTC (IPC ).

Use manual transactions in the following cases:

You are performing a transaction against a single database.

Use automatic transactions in the following cases:

You require a single transaction to span multiple remote databases.

You require a single transaction to encompass multiple resource managers, for example, a database and a Windows 2000 Message Queuing (formerly known as MSMQ) resource manager.

Note Avoid mixing the two transaction models. Use one or the other.

In systems where performance is sufficient to meet the requirements, it is reasonable to choose automatic transactions (even against a single database) in order to simplify the programming model. Automatic transactions make it easy for multiple components to perform operations that are part of the same transaction.

Using manual transactions

With manual transactions, you write code that uses the transaction support features of ADO.NET or Transact-SQL directly in your component code or in stored procedures. In most cases, you should opt for controlling transactions in your stored procedures because this approach offers superior encapsulation and, from a performance perspective, is comparable to performing transactions with ADO.NET code.

Performing manual transactions with ADO.NET

ADO.NET supports a transaction object that you can use to begin a new transaction and then explicitly control whether it should be committed or rolled back. The transaction object is associated with a single database connection and is obtained by calling the BeginTransaction method of the connection object. Calling this method does not implicitly mean that subsequent commands are issued in the context of the transaction. You must explicitly associate each command with the transaction by setting the Transaction property of the command. You can associate multiple command objects with the transaction object, thereby grouping multiple operations against a single database into a single transaction. For an example of using ADO.NET manual transaction code, see How to code ADO.NET manual transactions in the appendix.

More information

The default isolation level for an ADO.NET manual transaction is Read Committed, which means that the database holds shared locks while data is being read, but the data can be changed before the end of the transaction. This can potentially result in nonrepeatable reads or phantom data. You can change the isolation level by setting the IsolationLevel property of the transaction object to one of the values defined by the IsolationLevel enumerated type.

You must give careful consideration to choosing an appropriate isolation level for your transactions. The choice is a trade-off between data consistency and performance. The highest isolation level (Serializable) offers absolute data consistency, but at the price of overall system throughput. Lower isolation levels make an application more scalable, but at the same time they increase the probability of errors resulting from data inconsistency. For systems that read data most of the time, a lower isolation level may be appropriate.

For valuable information about choosing an appropriate transaction isolation level, see the Microsoft Press book Inside SQL Server 2000 by Kalen Delaney.
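As a minimal sketch of the pattern just described (the connection string, Bank database, and Accounts table are illustrative assumptions, not part of this guide's sample schema), a manual ADO.NET transaction with an explicit isolation level looks like this:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class FundTransfer
{
    // Connection string is an assumption for illustration
    const string connectionString =
        "server=(local);Integrated Security=SSPI;database=Bank";

    public void Transfer(int fromAccount, int toAccount, decimal amount)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Begin the transaction; the isolation level can be set here
            SqlTransaction tx =
                conn.BeginTransaction(IsolationLevel.ReadCommitted);
            // Each command must be explicitly associated with the
            // transaction via its Transaction property
            SqlCommand cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            try
            {
                cmd.CommandText =
                    "UPDATE Accounts SET Balance = Balance - @amt " +
                    "WHERE AccountID = @from";
                cmd.Parameters.Add("@amt", SqlDbType.Money).Value = amount;
                cmd.Parameters.Add("@from", SqlDbType.Int).Value = fromAccount;
                cmd.ExecuteNonQuery();

                cmd.Parameters.Clear();
                cmd.CommandText =
                    "UPDATE Accounts SET Balance = Balance + @amt " +
                    "WHERE AccountID = @to";
                cmd.Parameters.Add("@amt", SqlDbType.Money).Value = amount;
                cmd.Parameters.Add("@to", SqlDbType.Int).Value = toAccount;
                cmd.ExecuteNonQuery();

                tx.Commit();   // Both updates succeed or fail together
            }
            catch
            {
                tx.Rollback(); // Undo all work performed in the transaction
                throw;
            }
        }
    }
}
```

The sketch requires a live SQL Server instance, so it is not runnable stand-alone; the essential points are the single connection, the explicit BeginTransaction call, and the Commit/Rollback pair.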

Performing manual transactions with stored procedures

You can also control manual transactions entirely from within a stored procedure by using Transact-SQL statements. For example, you can perform transactional operations by using a single stored procedure that employs Transact-SQL transaction statements such as BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION.

More information

If required, you can control the transaction isolation level from within a stored procedure by using the SET TRANSACTION ISOLATION LEVEL statement. READ COMMITTED is the SQL Server default. For more information about SQL Server isolation levels, see "Isolation Levels" in the "Accessing and Changing Relational Data" section of SQL Server Books Online.

For an example of a stored procedure that uses Transact-SQL transaction statements to perform a transactional update, see How to use Transact-SQL transactions in the appendix.
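As an illustrative sketch (the Accounts table and procedure name are hypothetical), a stored procedure that controls its own transaction and isolation level might look like this:

```sql
CREATE PROCEDURE TransferFunds
  @FromAccount int,
  @ToAccount int,
  @Amount money
AS
SET TRANSACTION ISOLATION LEVEL READ COMMITTED  -- the SQL Server default
BEGIN TRANSACTION
  UPDATE Accounts SET Balance = Balance - @Amount
    WHERE AccountID = @FromAccount
  IF @@ERROR <> 0
  BEGIN
    ROLLBACK TRANSACTION  -- undo the debit on error
    RETURN 1
  END
  UPDATE Accounts SET Balance = Balance + @Amount
    WHERE AccountID = @ToAccount
  IF @@ERROR <> 0
  BEGIN
    ROLLBACK TRANSACTION  -- undo both operations on error
    RETURN 1
  END
COMMIT TRANSACTION
GO
```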

Using automatic transactions

Automatic transactions simplify the programming model because they do not require you to explicitly begin a new transaction or to explicitly commit or abort it. However, the greatest advantage of automatic transactions is that they work in conjunction with the DTC, which allows a single transaction to span multiple distributed data sources. This advantage can be significant in large-scale distributed applications. Although it is possible to control distributed transactions manually by programming against the DTC directly, automatic transaction processing greatly simplifies the task and is designed for component-based systems. For example, it is easy to declaratively configure multiple components to perform work that makes up a single transaction.

Automatic transactions rely on the distributed transaction support features provided by COM+; as a result, only serviced components (that is, components derived from the ServicedComponent class) can use automatic transactions.

To configure a class for automatic transactions:

Derive the class from the ServicedComponent class located in the System.EnterpriseServices namespace.

Define the transaction requirements of the class by using the Transaction attribute. The value supplied from the TransactionOption enumerated type determines how the class is configured in the COM+ catalog. Other properties that can be established with this attribute include the transaction isolation level and time-out.

To avoid having to explicitly vote on the transaction outcome, you can annotate methods with the AutoComplete attribute. If such a method throws an exception, the transaction is automatically aborted. Note that you can still vote directly on the transaction outcome if required. For more information, see the section about determining transaction outcome later in this document.

More information

For more information about COM+ automatic transactions, search for "Automatic Transactions Through COM+" in the Platform SDK documentation.

For an example of a transactional .NET class, see How to code a transactional .NET class in the appendix.

Configuring transaction isolation levels

The transaction isolation level for COM+ version 1.0 (that is, COM+ running on Windows 2000) is Serializable. While this offers the highest degree of isolation, such protection comes at the cost of performance. The overall throughput of the system is reduced because the resource managers involved (typically databases) must hold both read and write locks for the duration of the transaction. During this time, all other transactions are blocked, which can have a significant impact on the application's ability to scale.

COM+ version 1.5, which ships with Microsoft Windows .NET, allows the transaction isolation level to be configured in the COM+ catalog on a per-component basis. The setting associated with the root component of a transaction determines the isolation level of the transaction. In addition, interior subcomponents that belong to the same transaction must not require an isolation level higher than the one defined by the root component. If they do, an error results when the subcomponent is instantiated.

For .NET managed classes, the Transaction attribute supports the public Isolation property. You can use this property to declare a specific isolation level, as shown in the following code.

[Transaction(TransactionOption.Supported, Isolation=TransactionIsolationLevel.ReadCommitted)]
public class Account : ServicedComponent
{
  . . .
}

More information

For more information about configurable transaction isolation levels and other Windows .NET COM+ enhancements, see the MSDN Magazine article "Windows XP: Make Your Components More Robust with COM+ 1.5 Innovations" at http://msdn.microsoft.com/msdnmag/issues/01/08/comXp/default.aspx.

Determining transaction outcome

The outcome of an automatic transaction is governed by the transaction abort flag and the consistency flags located within the contexts of all the transactional components in a single transaction stream. The transaction outcome is determined when the root component of the transaction stream is deactivated (and control returns to the caller). This is illustrated in Figure 5, which shows a classic bank fund transfer transaction.

Figure 5. Transaction stream and contexts

The outcome of the transaction is determined when the root object (in this example, the Transfer object) is deactivated and the client's method call returns. If the consistency flag in any context is set to false, or if the transaction abort flag is set to true, the underlying physical DTC transaction is aborted. You can control the transaction outcome from a .NET object in one of two ways:

You can annotate methods with the AutoComplete attribute and let .NET automatically cast your vote and govern the transaction outcome. With this attribute, if the method throws an exception, the consistency flag is automatically set to false (which ultimately causes the transaction to abort). If the method returns without throwing an exception, the consistency flag is set to true, which indicates that the component is happy for the transaction to commit. This is not guaranteed, however, because it depends on the votes of the other objects in the same transaction stream.

You can call the static SetComplete or SetAbort method of the ContextUtil class, which sets the consistency flag to true or false, respectively.

SQL Server errors with a severity greater than 10 cause the managed data provider to throw an exception of type SqlException. If your method catches and handles such an exception, make sure you either vote manually to abort the transaction or, for methods marked with [AutoComplete], make sure the exception is propagated to the caller.

[AutoComplete] methods

For methods marked with the AutoComplete attribute, do either of the following:

Propagate the SqlException back up the call stack.

Wrap the SqlException in an outer exception and propagate that to the caller. You might want to wrap the exception in a type that is more meaningful to the caller.

Failure to propagate the exception results in the object not voting to abort the transaction, despite the database error. This means that other successful operations performed by other objects sharing the same transaction stream might be committed.

The following code catches a SqlException and then propagates it directly to the caller. The transaction is eventually aborted because the object's consistency flag is automatically set to false when the object is deactivated.

[AutoComplete]
void SomeMethod()
{
  try
  {
    // Open the connection, and perform database operation
    . . .
  }
  catch (SqlException sqlex)
  {
    LogException(sqlex); // Log the exception details
    throw;               // Rethrow the exception, causing the consistency
                         // flag to be set to false
  }
  finally
  {
    // Close the database connection
    . . .
  }
}

Non-[AutoComplete] methods

For methods that are not marked with the AutoComplete attribute, you must do the following:

Call ContextUtil.SetAbort within the catch block to vote to abort the transaction. This sets the consistency flag to false.

If no exception occurs, call ContextUtil.SetComplete to vote to commit the transaction. This sets the consistency flag to true (its default state).

The following code illustrates this approach.

void SomeOtherMethod()
{
  try
  {
    // Open the connection, and perform database operation
    . . .
    ContextUtil.SetComplete(); // Manually vote to commit the transaction
  }
  catch (SqlException sqlex)
  {
    LogException(sqlex);    // Log the exception details
    ContextUtil.SetAbort(); // Manually vote to abort the transaction
    // Exception is handled at this point and is not propagated to the caller
  }
  finally
  {
    // Close the database connection
    . . .
  }
}

Note If you have multiple catch blocks, it is easier to call ContextUtil.SetAbort once at the start of the method and ContextUtil.SetComplete at the end of the try block. In this way, you do not need to repeat the call to ContextUtil.SetAbort in every catch block. The consistency flag set by these methods takes effect only when the method returns.

You should always propagate exceptions (or wrapped exceptions) back up the call stack, because this makes the calling code aware that the transaction will fail. This allows the calling code to make optimizations. For example, in the bank fund transfer scenario, the transfer component can decide not to perform the credit operation if the debit operation has already failed.

If you set the consistency flag to false and then return without throwing an exception, the calling code has no way of knowing that the transaction is bound to fail. Although you could return a Boolean value or set a Boolean output parameter, you should be consistent and throw an exception to indicate an error. This results in cleaner, more consistent code with a standard approach to error handling.


Data paging

Paging through data is a common requirement in distributed applications. For example, the user might be presented with a list of books, where displaying the complete list at once would be impractical. The user will want to perform familiar operations on the data, such as viewing the next or previous page of data, or jumping to the beginning or end of the list.

This section discusses the options for implementing this functionality, together with the effect of each option on scalability and performance.

The options

The options for paging through data include:

Using the Fill method of SqlDataAdapter to populate a DataSet with a range of results from a query

Using ADO through COM interoperability, and using a server-side cursor

Using a stored procedure to implement data paging manually

The best option for paging through data depends on the factors listed below:

Scalability requirements

Performance requirements

Network bandwidth

Database server memory and power

Middle-tier server memory and power

The number of rows returned by the query you want to page through

The size of the data pages

Performance tests show that the manual approach using a stored procedure offers the best performance across a wide range of stress levels. However, because the manual approach performs its work on the server, server stress levels can become a significant issue if a large proportion of your site's functionality relies on data paging. To make sure this approach suits your particular environment, you should test all of the options against your specific requirements.

The options are discussed below.

Using the Fill method of SqlDataAdapter

As discussed earlier, a DataSet can be populated with data from a database. One of the overloaded Fill methods (shown in the following code) accepts two integer index values as arguments.

public int Fill(
  DataSet dataSet,
  int startRecord,
  int maxRecords,
  string srcTable
);

The startRecord value indicates the zero-based index of the record to start with. The maxRecords value indicates the number of records, starting from startRecord, to copy into the new DataSet. The SqlDataAdapter internally uses a SqlDataReader to execute the query and return the results. The SqlDataAdapter reads the results and creates a DataSet from the data it reads from the SqlDataReader. It copies all of the results through startRecord and maxRecords into the newly generated DataSet and discards the results it does not need. This means that a lot of unnecessary data could potentially be pulled across the network to the data access client, which is the primary drawback of this approach.

For example, if you have 1,000 records and require records 900 through 950, the first 899 records are pulled across to the client and discarded. This overhead is probably small for modest result sets, but it could be significant when you page through larger data sets.
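A usage sketch of this overload (the connection string and DATRetrieveProducts stored procedure are illustrative assumptions) looks like this:

```csharp
using System.Data;
using System.Data.SqlClient;

public class ProductPager
{
    public DataSet GetProductsPage(int startRecord, int pageSize)
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);Integrated Security=SSPI;database=Northwind"))
        {
            SqlCommand cmd = new SqlCommand("DATRetrieveProducts", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataSet ds = new DataSet();
            // All rows up to startRecord + pageSize are still read from
            // the server; rows outside the range are simply discarded
            adapter.Fill(ds, startRecord, pageSize, "Products");
            return ds;
        }
    }
}
```

Note that this Fill overload opens and closes the connection itself, so no explicit Open call is required.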

Using ADO

Another option for paging is to use COM-based ADO. The primary motivation for this option is to gain access to server-side cursors, which are exposed through the ADO Recordset object. You can set the Recordset cursor location to adUseServer. If your OLE DB provider supports it (SQLOLEDB does), this results in the use of a server-side cursor. You can then use this cursor to navigate to the starting record without pulling all of the records across the network to the data access client code.

This approach has the following two key drawbacks:

In most cases, the records returned in the Recordset object must be translated into a DataSet for use in client-side managed code. Although OleDbDataAdapter does overload the Fill method to accept an ADO Recordset object as an argument and translate it into a DataSet, there is no facility for specifying the start and end records. The only realistic option is to move to the start record in the Recordset object, loop through each record in turn, and manually copy the data into a new, manually generated DataSet. It is quite possible, particularly when the overhead of COM Interop calls is taken into account, that this would negate the advantage of not pulling extra data across the network, especially for small DataSets.

You need to keep the connection and the server-side cursor open while you pull the required data from the server. Cursors are typically an expensive resource to open and maintain on a database server. Although this option might increase performance, it can also reduce scalability by tying up valuable server-side resources for extended periods.

Using a manual implementation

The final option discussed in this section is to manually implement the paging functionality for your application by using stored procedures. For tables that contain a unique key, the stored procedure is relatively easy to implement. For tables without a unique key (and you should not have many of those), the process is more complicated.

Paging against a table with a unique key

If the table contains a unique key, you can use the key in a WHERE clause to create a result set that starts at a specific row. Combined with a SET ROWCOUNT statement or the SQL Server TOP clause used to restrict the size of the result set, this approach provides an effective paging mechanism. The following stored procedure code illustrates the approach:

CREATE PROCEDURE GetProductsPaged
  @LastProductID int,
  @PageSize int
AS
  SET ROWCOUNT @PageSize
  SELECT *
  FROM Products
  WHERE [standard search criteria]
  AND ProductID > @LastProductID
  ORDER BY [criteria that leaves ProductID monotonically increasing]
GO

The caller of the stored procedure simply maintains the LastProductID value, incrementing or decrementing it by the selected page size between successive calls.
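A caller might drive such a procedure as sketched below (this assumes the GetProductsPaged procedure shown above; the method name is hypothetical, and the caller records the key of the last row of the current page to fetch the next one):

```csharp
using System.Data;
using System.Data.SqlClient;

public class PagedProductReader
{
    public DataTable GetNextPage(SqlConnection conn,
                                 int lastProductID, int pageSize)
    {
        SqlCommand cmd = new SqlCommand("GetProductsPaged", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@LastProductID", SqlDbType.Int).Value = lastProductID;
        cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;
        DataTable page = new DataTable("ProductsPage");
        new SqlDataAdapter(cmd).Fill(page);
        // The caller records the ProductID of the last row in this page
        // and passes it back in as @LastProductID for the next page
        return page;
    }
}
```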

Paging against a table without a unique key

If the table you want to page through does not have a unique key, consider adding one, for example, by using an identity column. This enables you to implement the paging solution discussed previously.
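For example (the table and column names here are hypothetical), an identity column can be added like this:

```sql
-- Adds a unique, monotonically increasing key to a table that lacks one
ALTER TABLE SampleTable
  ADD RowID int IDENTITY(1, 1) NOT NULL
```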

Even for a table that contains no unique key, you can still produce a workable paging solution, as long as you can generate uniqueness by combining two or more of the fields that are part of the result set.

For example, consider the following table:

Col1  Col2  Col3  Other columns
A     1     W     . . .
A     1     X     . . .
A     1     Y     . . .
A     1     Z     . . .
A     2     W     . . .
A     2     X     . . .
B     1     W     . . .
B     1     X     . . .

In this table, uniqueness can be generated by combining Col1, Col2, and Col3. You can therefore implement a paging mechanism by using the approach illustrated in the following stored procedure.

CREATE PROCEDURE RetrieveDataPaged
  @LastKey char(40),
  @PageSize int
AS
  SET ROWCOUNT @PageSize
  SELECT
    Col1, Col2, Col3, Col4, Col1 + Col2 + Col3 AS KeyField
  FROM SampleTable
  WHERE [standard search criteria]
  AND Col1 + Col2 + Col3 > @LastKey
  ORDER BY Col1 ASC, Col2 ASC, Col3 ASC
GO

The client keeps track of the last value of the KeyField column returned by the stored procedure and feeds it back into the stored procedure to control paging through the table.

Although the manual implementation increases the load on the database server, it avoids passing unnecessary data across the network. Performance tests show that this approach works well across a wide range of stress levels. However, depending on how much of the site's workload relates to data paging, performing manual paging on the server might affect the scalability of your application. You should run performance tests in your own environment to find the most appropriate approach for your specific application scenario.


Appendix

How to enable object construction for a .NET class

You can use Enterprise (COM+) Services to enable object construction for .NET managed classes.

To enable object construction for a .NET managed class:

1. Derive your class from the ServicedComponent class located in the System.EnterpriseServices namespace.

using System.EnterpriseServices;
public class DataAccessComponent : ServicedComponent

2. Decorate your class with the ConstructionEnabled attribute and optionally specify a default construction string. The default value is held in the COM+ catalog. Administrators can use the Component Services Microsoft Management Console (MMC) snap-in to maintain the value.

[ConstructionEnabled(Default="default DSN")]
public class DataAccessComponent : ServicedComponent

3. Provide an overridden implementation of the virtual Construct method. This method is called after the object's language-specific constructor. The construction string maintained in the COM+ catalog is supplied as the single argument to this method.

public override void Construct(string constructString)
{
  // Construct method is called next after constructor.
  // The configured DSN is supplied as the single argument
}

4. Sign the assembly with a strong name by using the AssemblyKeyFile or AssemblyKeyName attribute. Any assembly registered with COM+ Services must have a strong name. For more information about strong-named assemblies, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpconworkingwithstrongly-namedassemblies.asp.

[assembly: AssemblyKeyFile("DataServices.snk")]

5. To support dynamic (lazy) registration, use the assembly-level attributes ApplicationName and ApplicationActivation to specify, respectively, the name of the COM+ application that will hold the assembly's components and the application's activation type. For more information about assembly registration, see http://msdn.microsoft.com/library/en-us/cpguidnf/html/cpconregisteringservicedcomponents.asp.

// The ApplicationName attribute specifies the name of the
// COM+ application which will hold assembly components
[assembly: ApplicationName("DataServices")]
// The ApplicationActivation.ActivationOption attribute specifies
// where assembly components are loaded on activation
// Library: components run in the creator's process
// Server: components run in a system process, dllhost.exe
[assembly: ApplicationActivation(ActivationOption.Library)]

The following code fragment shows a serviced component named DataAccessComponent, which uses a COM+ construction string to obtain a database connection string.

using System;
using System.EnterpriseServices;
// The ApplicationName attribute specifies the name of the
// COM+ application which will hold assembly components
[assembly: ApplicationName("DataServices")]
// The ApplicationActivation.ActivationOption attribute specifies
// where assembly components are loaded on activation
// Library: components run in the creator's process
// Server: components run in a system process, dllhost.exe
[assembly: ApplicationActivation(ActivationOption.Library)]
// Sign the assembly. The snk key file is created using the
// sn.exe utility
[assembly: AssemblyKeyFile("DataServices.snk")]
[ConstructionEnabled(Default="Default DSN")]
public class DataAccessComponent : ServicedComponent
{
  private string connectionString;
  public DataAccessComponent()
  {
    // Constructor is called on instance creation
  }
  public override void Construct(string constructString)
  {
    // Construct method is called next after constructor.
    // The configured DSN is supplied as the single argument
    this.connectionString = constructString;
  }
}

How to use SqlDataAdapter to retrieve multiple rows

The following code illustrates how to use a SqlDataAdapter object to issue a command that generates a DataSet or DataTable. It retrieves a set of product categories from the SQL Server Northwind database.

using System.Data;
using System.Data.SqlClient;

public DataTable RetrieveRowsWithDataTable()
{
  using (SqlConnection conn = new SqlConnection(connectionString))
  {
    conn.Open();
    SqlCommand cmd = new SqlCommand("DATRetrieveProducts", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataTable dataTable = new DataTable("Products");
    adapter.Fill(dataTable);
    return dataTable;
  }
}

To use SqlDataAdapter to generate a DataSet or DataTable:

1. Create a SqlCommand object to invoke the stored procedure, and associate it with a SqlConnection object (shown) or connection string (not shown).

2. Create a new SqlDataAdapter object and associate it with the SqlCommand object.

3. Create a DataTable (or, alternatively, a DataSet) object. Use a constructor argument to name the DataTable.

4. Call the Fill method of the SqlDataAdapter object to populate either the DataSet or the DataTable with the retrieved rows.

How to use SqlDataReader to retrieve multiple rows

The following code fragment illustrates the SqlDataReader approach to retrieving multiple rows.

using System.IO;
using System.Data;
using System.Data.SqlClient;

public SqlDataReader RetrieveRowsWithDataReader()
{
  SqlConnection conn = new SqlConnection(
    "server=(local);Integrated Security=SSPI;database=Northwind");
  SqlCommand cmd = new SqlCommand("DATRetrieveProducts", conn);
  cmd.CommandType = CommandType.StoredProcedure;
  try
  {
    conn.Open();
    // Generate the reader. CommandBehavior.CloseConnection causes
    // the connection to be closed when the reader object is closed
    return (cmd.ExecuteReader(CommandBehavior.CloseConnection));
  }
  catch
  {
    conn.Close();
    throw;
  }
}

// Display the product list using the console
private void DisplayProducts()
{
  SqlDataReader reader = RetrieveRowsWithDataReader();
  try
  {
    while (reader.Read())
    {
      Console.WriteLine("{0} {1}",
                        reader.GetInt32(0).ToString(),
                        reader.GetString(1));
    }
  }
  finally
  {
    reader.Close(); // Also closes the connection due to the
                    // CommandBehavior enum used when generating the reader
  }
}

To retrieve rows with SqlDataReader:

1. Create a SqlCommand object used to execute the stored procedure, and associate it with a SqlConnection object.

2. Open the connection.

3. Generate a SqlDataReader object by calling the ExecuteReader method of the SqlCommand object.

4. To read the data from the stream, call the Read method of the SqlDataReader object to retrieve rows, and use the typed accessor methods (such as the GetInt32 and GetString methods) to obtain the column values.

5. When you finish with the reader, call its Close method.

How to use XmlReader to retrieve multiple rows

You can use a SqlCommand object to generate an XmlReader object, which provides stream-based access to XML data. The command (usually a stored procedure) must generate an XML-based result set, which for SQL Server 2000 usually consists of a SELECT statement with a valid FOR XML clause. The following code fragment illustrates this approach:

public void RetrieveAndDisplayRowsWithXmlReader()
{
  using (SqlConnection conn = new SqlConnection(connectionString))
  {
    SqlCommand cmd = new SqlCommand("DATRetrieveProductsXML", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();
    XmlTextReader xreader = (XmlTextReader)cmd.ExecuteXmlReader();
    try
    {
      while (xreader.Read())
      {
        if (xreader.Name == "Products")
        {
          string strOutput = xreader.GetAttribute("ProductID");
          strOutput += " ";
          strOutput += xreader.GetAttribute("ProductName");
          Console.WriteLine(strOutput);
        }
      }
    }
    finally
    {
      xreader.Close(); // XmlTextReader does not support IDisposable, so it
                       // cannot be used within a using statement
    }
  }
}

The preceding code uses the following stored procedure:

CREATE PROCEDURE DATRetrieveProductsXML
AS
SELECT * FROM Products
FOR XML AUTO
GO

To retrieve XML data with XmlReader:

1. Create a SqlCommand object to invoke a stored procedure that generates an XML result set (for example, by using the FOR XML clause on the SELECT statement). Associate the SqlCommand object with a connection.

2. Call the ExecuteXmlReader method of the SqlCommand object and assign the result to an XmlTextReader object. This is the fastest type of XmlReader object to use when you do not require any XML-based validation of the data.

3. Read the data by using the Read method of the XmlTextReader object.

How to use stored procedure output parameters to retrieve a single row

By means of named output parameters, you can call a stored procedure that returns the retrieved data items within a single row. The following code fragment uses a stored procedure to retrieve the product name and unit price of a specific product contained in the Products table of the Northwind database.

void GetProductDetails(int productID,
                       out string productName, out decimal unitPrice)
{
  using (SqlConnection conn = new SqlConnection(
    "server=(local);Integrated Security=SSPI;database=Northwind"))
  {
    // Set up the command object used to execute the stored proc
    SqlCommand cmd = new SqlCommand("DATGetProductDetailsSPOutput", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    // Establish stored proc parameters.
    //  @ProductID int INPUT
    //  @ProductName nvarchar(40) OUTPUT
    //  @UnitPrice money OUTPUT
    // Must explicitly set the direction of output parameters
    SqlParameter paramProdID =
      cmd.Parameters.Add("@ProductID", productID);
    paramProdID.Direction = ParameterDirection.Input;
    SqlParameter paramProdName =
      cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40);
    paramProdName.Direction = ParameterDirection.Output;
    SqlParameter paramUnitPrice =
      cmd.Parameters.Add("@UnitPrice", SqlDbType.Money);
    paramUnitPrice.Direction = ParameterDirection.Output;
    conn.Open();
    // Use ExecuteNonQuery to run the command.
    // Although no rows are returned, any mapped output parameters
    // (and potentially return values) are populated
    cmd.ExecuteNonQuery();
    // Return output parameters from stored proc
    productName = paramProdName.Value.ToString();
    unitPrice = (decimal)paramUnitPrice.Value;
  }
}

To retrieve a single row with stored procedure output parameters:

1. Create a SqlCommand object and associate it with a SqlConnection object.

2. Set up the stored procedure parameters by calling the Add method of the SqlCommand's Parameters collection. By default, parameters are assumed to be input parameters, so you must explicitly set the direction of any output parameters.

Note It is good practice to explicitly set the direction of all parameters, including input parameters.

3. Open the connection.

4. Call the ExecuteNonQuery method of the SqlCommand object. This populates the output parameters (and potentially a return value).

5. Retrieve the output parameters from the appropriate SqlParameter objects by using the Value property.

6. Close the connection.

The above code snippet calls the following stored procedures.

CREATE PROCEDURE DATGetProductDetailsSPOutput
@ProductID int,
@ProductName nvarchar(40) OUTPUT,
@UnitPrice money OUTPUT
AS
SELECT @ProductName = ProductName,
       @UnitPrice = UnitPrice
FROM Products
WHERE ProductID = @ProductID
GO

How to use SqlDataReader to retrieve a single row

You can use a SqlDataReader object to retrieve a single row, reading the required column values from the returned data stream, as the following code snippet illustrates.

void GetProductDetailsUsingReader(int productID,
                                  out string productName, out decimal unitPrice)
{
  // Ensure the out parameters are assigned even if no row is found
  productName = null;
  unitPrice = 0;
  using (SqlConnection conn = new SqlConnection(
         "server=(local);Integrated Security=SSPI;database=Northwind"))
  {
    // Set up the command object used to execute the stored procedure
    SqlCommand cmd = new SqlCommand("DATGetProductDetailsReader", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    // Establish stored procedure parameters:
    //   @ProductID int INPUT
    SqlParameter paramProdID = cmd.Parameters.Add("@ProductID", productID);
    paramProdID.Direction = ParameterDirection.Input;
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
      if (reader.Read()) // Advance to the one and only row
      {
        // Retrieve the values from the returned data stream
        productName = reader.GetString(0);
        unitPrice = reader.GetDecimal(1);
      }
    }
  }
}

To return a single row by using a SqlDataReader object

1. Create a SqlCommand object.
2. Open the connection.
3. Call the ExecuteReader method of the SqlCommand object.
4. Retrieve the output values by using the typed accessor methods of the SqlDataReader object (here, GetString and GetDecimal).

The above code snippet calls the following stored procedure.

CREATE PROCEDURE DATGetProductDetailsReader
@ProductID int
AS
SELECT ProductName, UnitPrice FROM Products
WHERE ProductID = @ProductID
GO

How to use ExecuteScalar to retrieve a single item

The ExecuteScalar method is designed for queries that return only a single value. If the query returns multiple columns and/or rows, ExecuteScalar returns only the first column of the first row.
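As a hedged illustration of that behavior (the inline query and method name below are sketches for this article, not samples from the guide), a query that selects several columns and rows still yields exactly one value through ExecuteScalar:

```csharp
// Illustrative sketch only: ExecuteScalar on a multi-column,
// multi-row query returns just the first column of the first row.
string GetFirstProductName()
{
    using (SqlConnection conn = new SqlConnection(
           "server=(local);Integrated Security=SSPI;database=Northwind"))
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT ProductName, UnitPrice FROM Products ORDER BY ProductID",
            conn);
        conn.Open();
        // ExecuteScalar returns null if the query produces no rows
        object result = cmd.ExecuteScalar();
        return (result == null) ? null : (string)result;
    }
}
```

Because ExecuteScalar returns System.Object, check for null before casting whenever the query might match no rows.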

The following code illustrates how to look up the product name that corresponds to a specific product ID:

void GetProductNameExecuteScalar(int productID, out string productName)
{
  using (SqlConnection conn = new SqlConnection(
         "server=(local);Integrated Security=SSPI;database=Northwind"))
  {
    SqlCommand cmd = new SqlCommand("LookupProductNameScalar", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ProductID", productID);
    conn.Open();
    productName = (string)cmd.ExecuteScalar();
  }
}

To retrieve a single item by using ExecuteScalar

1. Create a SqlCommand object to call the stored procedure.
2. Open the connection.
3. Call the ExecuteScalar method. Note that this method returns an object; it contains the value of the first column of the first row retrieved and must be cast to the appropriate type.
4. Close the connection.

The above code uses the following stored procedure:

CREATE PROCEDURE LookupProductNameScalar
@ProductID int
AS
SELECT TOP 1 ProductName
FROM Products
WHERE ProductID = @ProductID
GO

How to use stored procedure output or return parameters to retrieve a single item

You can look up a single value by using a stored procedure output or return parameter. The following code illustrates the use of an output parameter:

void GetProductNameUsingSPOutput(int productID, out string productName)
{
  using (SqlConnection conn = new SqlConnection(
         "server=(local);Integrated Security=SSPI;database=Northwind"))
  {
    SqlCommand cmd = new SqlCommand("LookupProductNameSPOutput", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter paramProdID = cmd.Parameters.Add("@ProductID", productID);
    paramProdID.Direction = ParameterDirection.Input;
    SqlParameter paramPN =
        cmd.Parameters.Add("@ProductName", SqlDbType.NVarChar, 40);
    paramPN.Direction = ParameterDirection.Output;
    conn.Open();
    cmd.ExecuteNonQuery();
    productName = paramPN.Value.ToString();
  }
}

To retrieve a single value by using a stored procedure output parameter

1. Create a SqlCommand object to call the stored procedure.
2. Set any input parameters and a single output parameter by adding SqlParameters to the SqlCommand object's Parameters collection.
3. Open the connection.
4. Call the ExecuteNonQuery method of the SqlCommand object.
5. Close the connection.
6. Retrieve the output value by using the Value property of the output SqlParameter.

The above code uses the following stored procedure.

CREATE PROCEDURE LookupProductNameSPOutput
@ProductID int,
@ProductName nvarchar(40) OUTPUT
AS
SELECT @ProductName = ProductName
FROM Products
WHERE ProductID = @ProductID
GO

The following code illustrates how to use a return value to indicate whether a specific row exists. From a coding perspective, this is similar to using stored procedure output parameters, except that the SqlParameter direction must be explicitly set to ParameterDirection.ReturnValue.

bool CheckProduct(int productID)
{
  using (SqlConnection conn = new SqlConnection(
         "server=(local);Integrated Security=SSPI;database=Northwind"))
  {
    SqlCommand cmd = new SqlCommand("CheckProductSP", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ProductID", productID);
    SqlParameter paramRet =
        cmd.Parameters.Add("@ProductExists", SqlDbType.Int);
    paramRet.Direction = ParameterDirection.ReturnValue;
    conn.Open();
    cmd.ExecuteNonQuery();
    return (int)paramRet.Value == 1;
  }
}

To check whether a specific row exists by using a stored procedure return value

1. Create a SqlCommand object to call the stored procedure.
2. Set an input parameter that contains the primary key value of the row to be accessed.
3. Set a single return value parameter: add a SqlParameter object to the SqlCommand object's Parameters collection, and set its direction to ParameterDirection.ReturnValue.
4. Open the connection.
5. Call the ExecuteNonQuery method of the SqlCommand object.
6. Close the connection.
7. Retrieve the return value by using the Value property of the return value SqlParameter.

The above code uses the following stored procedure.

CREATE PROCEDURE CheckProductSP
@ProductID int
AS
IF EXISTS (SELECT ProductID
           FROM Products
           WHERE ProductID = @ProductID)
    RETURN 1
ELSE
    RETURN 0
GO

How to use SqlDataReader to retrieve a single item

You can use a SqlDataReader object to obtain a single output value by calling the ExecuteReader method of the command object. This requires slightly more code, because the SqlDataReader Read method must be called, and the required value then retrieved through one of the reader's accessor methods. The following code illustrates the use of the SqlDataReader object.

bool CheckProductWithReader(int productID)
{
  using (SqlConnection conn = new SqlConnection(
         "server=(local);Integrated Security=SSPI;database=Northwind"))
  {
    SqlCommand cmd = new SqlCommand("CheckProductExistsWithCount", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ProductID", productID);
    cmd.Parameters["@ProductID"].Direction = ParameterDirection.Input;
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader(
               CommandBehavior.SingleResult))
    {
      if (reader.Read())
      {
        return (reader.GetInt32(0) > 0);
      }
      return false;
    }
  }
}

The above code uses the following stored procedure.

CREATE PROCEDURE CheckProductExistsWithCount
@ProductID int
AS
SELECT COUNT(*) FROM Products
WHERE ProductID = @ProductID
GO

How to write ADO.NET manual transaction code

The following code shows how to take advantage of the transaction support offered by the SQL Server .NET data provider to protect a funds transfer operation with a transaction. The operation transfers money between two accounts located in the same database.

public void TransferMoney(string toAccount, string fromAccount, decimal amount)
{
  using (SqlConnection conn = new SqlConnection(
         "server=(local);Integrated Security=SSPI;database=SimpleBank"))
  {
    SqlCommand cmdCredit = new SqlCommand("Credit", conn);
    cmdCredit.CommandType = CommandType.StoredProcedure;
    cmdCredit.Parameters.Add(new SqlParameter("@AccountNo", toAccount));
    cmdCredit.Parameters.Add(new SqlParameter("@Amount", amount));
    SqlCommand cmdDebit = new SqlCommand("Debit", conn);
    cmdDebit.CommandType = CommandType.StoredProcedure;
    cmdDebit.Parameters.Add(new SqlParameter("@AccountNo", fromAccount));
    cmdDebit.Parameters.Add(new SqlParameter("@Amount", amount));
    conn.Open();
    // Start a new transaction
    using (SqlTransaction trans = conn.BeginTransaction())
    {
      // Associate the two command objects with the same transaction
      cmdCredit.Transaction = trans;
      cmdDebit.Transaction = trans;
      try
      {
        cmdCredit.ExecuteNonQuery();
        cmdDebit.ExecuteNonQuery();
        // Both commands (credit and debit) were successful
        trans.Commit();
      }
      catch (Exception ex)
      {
        // Transaction failed
        trans.Rollback();
        // Log exception details...
        throw;
      }
    }
  }
}

How to use Transact-SQL to perform transaction processing

The following stored procedure illustrates how to perform a transactional funds transfer operation within a Transact-SQL stored procedure.

CREATE PROCEDURE MoneyTransfer
@FromAccount char(20),
@ToAccount char(20),
@Amount money
AS
BEGIN TRANSACTION
-- Perform debit operation
UPDATE Accounts
SET Balance = Balance - @Amount
WHERE AccountNumber = @FromAccount
IF @@ROWCOUNT = 0
BEGIN
  RAISERROR('Invalid From Account Number', 11, 1)
  GOTO ABORT
END
DECLARE @Balance money
SELECT @Balance = Balance FROM Accounts
WHERE AccountNumber = @FromAccount
IF @Balance < 0
BEGIN
  RAISERROR('Insufficient Funds', 11, 1)
  GOTO ABORT
END
-- Perform credit operation
UPDATE Accounts
SET Balance = Balance + @Amount
WHERE AccountNumber = @ToAccount
IF @@ROWCOUNT = 0
BEGIN
  RAISERROR('Invalid To Account Number', 11, 1)
  GOTO ABORT
END
COMMIT TRANSACTION
RETURN 0
ABORT:
ROLLBACK TRANSACTION
GO

This stored procedure uses the BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION statements to control the transaction manually.
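Because the transaction is controlled entirely inside the stored procedure, the calling ADO.NET code needs no SqlTransaction of its own. The following caller is a sketch for this article (its method name is an assumption, not part of the original sample):

```csharp
// Hypothetical caller for the MoneyTransfer stored procedure.
// No client-side SqlTransaction is required: BEGIN/COMMIT/ROLLBACK
// are handled inside the procedure itself.
public void TransferMoneyViaSP(string toAccount, string fromAccount,
                               decimal amount)
{
    using (SqlConnection conn = new SqlConnection(
           "server=(local);Integrated Security=SSPI;database=SimpleBank"))
    {
        SqlCommand cmd = new SqlCommand("MoneyTransfer", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@FromAccount", fromAccount));
        cmd.Parameters.Add(new SqlParameter("@ToAccount", toAccount));
        cmd.Parameters.Add(new SqlParameter("@Amount", amount));
        conn.Open();
        // A RAISERROR with severity 11 inside the procedure surfaces
        // here as a SqlException after the procedure rolls back.
        cmd.ExecuteNonQuery();
    }
}
```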

How to write a transactional .NET class

The following sample code shows three serviced .NET managed classes that are configured for automatic transactions. Each class is annotated with the Transaction attribute, whose value determines whether a new transaction stream should be started or whether the object should share the stream of its immediate caller. These components work together to perform a bank funds transfer task. The Transfer class is configured with the RequiresNew transaction option, while Debit and Credit are configured with Required. As a result, all three objects share the same transaction at run time.

using System;
using System.EnterpriseServices;

[Transaction(TransactionOption.RequiresNew)]
public class Transfer : ServicedComponent
{
  [AutoComplete]
  public void TransferMoney(string toAccount,
                            string fromAccount, decimal amount)
  {
    try
    {
      // Perform the debit operation
      Debit debit = new Debit();
      debit.DebitAccount(fromAccount, amount);
      // Perform the credit operation
      Credit credit = new Credit();
      credit.CreditAccount(toAccount, amount);
    }
    catch (SqlException sqlex)
    {
      // Handle and log exception details
      // Wrap and propagate the exception
      throw new TransferException("Transfer Failure", sqlex);
    }
  }
}

[Transaction(TransactionOption.Required)]
public class Credit : ServicedComponent
{
  [AutoComplete]
  public void CreditAccount(string account, decimal amount)
  {
    try
    {
      using (SqlConnection conn = new SqlConnection(
             "server=(local);Integrated Security=SSPI;database=SimpleBank"))
      {
        SqlCommand cmd = new SqlCommand("Credit", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@AccountNo", account));
        cmd.Parameters.Add(new SqlParameter("@Amount", amount));
        conn.Open();
        cmd.ExecuteNonQuery();
      }
    }
    catch (SqlException sqlex)
    {
      // Log exception details here
      throw; // Propagate exception
    }
  }
}

[Transaction(TransactionOption.Required)]
public class Debit : ServicedComponent
{
  [AutoComplete]
  public void DebitAccount(string account, decimal amount)
  {
    try
    {
      using (SqlConnection conn = new SqlConnection(
             "server=(local);Integrated Security=SSPI;database=SimpleBank"))
      {
        SqlCommand cmd = new SqlCommand("Debit", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@AccountNo", account));
        cmd.Parameters.Add(new SqlParameter("@Amount", amount));
        conn.Open();
        cmd.ExecuteNonQuery();
      }
    }
    catch (SqlException sqlex)
    {
      // Log exception details here
      throw; // Propagate exception back to caller
    }
  }
}
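To show how a client might drive these serviced components, the following sketch assumes the transfer operation is exposed as a TransferMoney method on the Transfer class, that the assembly is strong-named and registered with COM+ (for example, with regsvcs.exe), and that a TransferException wrapper type is defined elsewhere; the account numbers are invented:

```csharp
// Hypothetical client of the transactional Transfer component
Transfer transfer = new Transfer();
try
{
    // Debit and Credit both run inside the transaction that
    // Transfer starts (TransactionOption.RequiresNew)
    transfer.TransferMoney("ACC-1001", "ACC-2002", 250.00m);
}
catch (TransferException ex)
{
    // The whole transfer was rolled back as a single unit
    Console.WriteLine("Transfer failed: " + ex.Message);
}
```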

Contributors

Many thanks to the following contributors and reviewers:

Bill Vaughn, Mike Pizzo, Pablo Castro, Doug Rothaus, Kevin White, Blaine Dokter, David Schleifer, Graeme Malcolm (Content Master), Bernard Chen (Sapient), Matt Drucker (Turner Broadcasting), Steve Kirk, David Sceppa, Scott Densmore, Diego González (Lagash Systems)

For more information about .NET best practices, visit the Patterns & Practices page.

To participate in an online collaborative development environment on this topic, join the GotDotNet workspace: Microsoft Patterns & Practices Data Access for .NET Workspace. Use the workspace to share your data access building block issues, suggestions, and customizations with the community.

Questions? Comments? Suggestions? For feedback on this article, please e-mail us at devfdbck@microsoft.com.
