.NET Data Access Architecture Guide
Alex Mackman, Chris Brooks, Steve Busby, and Ed Jezierski
Microsoft

Summary: This article provides guidelines for implementing an ADO.NET-based data access layer in a multi-tier .NET application. It focuses on a range of common data access tasks and scenarios and presents guidance to help you choose the most appropriate approach and technique for each. (68 printed pages)

Contents
Introduction
Introducing ADO.NET
Managing Database Connections
Error Handling
Performance
Connecting Through Firewalls
Handling BLOBs
Transactions
Data Paging

Introduction

If you are designing a data access layer for a .NET application, you should use Microsoft ADO.NET as the data access model. ADO.NET is feature rich and supports the data access requirements of loosely coupled, multi-tier web applications and web services. As is often the case with technologies that offer a rich object model, ADO.NET provides a number of ways to solve a particular problem. This article helps you choose the most appropriate data access approach by presenting a wide range of common data access scenarios, providing performance tips, and prescribing best practices. It also answers frequently asked questions, such as: Where is the best place to store database connection strings? How should I implement connection pooling? How should I work with transactions? How do I implement paging to allow users to scroll through large numbers of records?

Note: This article focuses on ADO.NET and one of the two providers supplied with it, the SQL Server .NET Data Provider, which is used to access Microsoft SQL Server 2000. Where appropriate, it highlights the differences you need to be aware of when you use the OLE DB .NET Data Provider to access other OLE DB-aware data sources.

For a concrete implementation of a data access component developed with the guidelines and best practices discussed in this article, see the Data Access Application Block. The source code of this implementation is available and can be used directly within your .NET applications.

Who should read this article?

This article provides guidelines for application architects and enterprise developers who want to build .NET applications. Read it if you are responsible for designing and developing the data tier of a multi-tier .NET application.

What you need to know

To use this guide to build .NET applications, you must have hands-on experience developing data access code with ActiveX Data Objects (ADO) and/or OLE DB, as well as experience with SQL Server. You must also understand how to develop managed code for the .NET platform, and you must be aware of the fundamental changes introduced by the ADO.NET data access model. For more information about .NET development, see http://msdn.microsoft.com/net.

Introducing ADO.NET

ADO.NET is the data access model for .NET-based applications. It can be used to access relational database systems such as SQL Server 2000, as well as many other data sources for which an OLE DB provider exists. To a certain extent, ADO.NET represents the latest evolution of ADO technology. However, ADO.NET introduces some major changes and innovations aimed at the loosely coupled, inherently disconnected nature of web applications. For a comparison of ADO and ADO.NET, see the MSDN article "ADO.NET for the ADO Programmer."

One of the key changes ADO.NET introduces is the replacement of the ADO Recordset object with a combination of the DataTable, DataSet, DataAdapter, and DataReader objects. A DataTable represents a collection of rows from a single table and in this respect is similar to a Recordset. A DataSet represents a collection of DataTable objects, together with the relationships and constraints that bind the tables together. In effect, the DataSet is an in-memory relational structure with built-in Extensible Markup Language (XML) support. A key characteristic of the DataSet is that it has no knowledge of the underlying data source that might have been used to populate it.
It is a disconnected, stand-alone entity used to represent a set of data, and it can be passed from component to component through the various layers of a multi-tier application. It can also be serialized as an XML data stream, which makes it ideally suited for data transfer between heterogeneous platforms.
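To illustrate this disconnected, XML-friendly design, the following minimal sketch (not taken from the original article) builds a DataSet entirely in memory, without any database, and serializes it to XML. The table and column names are purely illustrative.

using System;
using System.Data;

public class DataSetXmlDemo
{
    public static void Main()
    {
        // Build a DataTable entirely in memory; the DataSet never needs to
        // know where the data originally came from.
        DataTable products = new DataTable("Products");
        products.Columns.Add("ProductID", typeof(int));
        products.Columns.Add("ProductName", typeof(string));
        products.Rows.Add(new object[] { 1, "Chai" });

        DataSet ds = new DataSet("Catalog");
        ds.Tables.Add(products);

        // Serialize the DataSet as an XML stream, suitable for passing
        // between tiers or to non-.NET platforms.
        Console.WriteLine(ds.GetXml());
    }
}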
ADO.NET uses the DataAdapter object as a channel for moving data between a DataSet and the underlying data source. The DataAdapter object also provides enhanced batch update features previously associated with the Recordset. Figure 1 shows the full DataSet object model.

Figure 1. DataSet object model

.NET data providers

ADO.NET relies on the services of .NET data providers. These provide access to the underlying data source and comprise four key objects (Connection, Command, DataReader, and DataAdapter). Currently, ADO.NET ships with two providers:

- The SQL Server .NET Data Provider. This is a provider for Microsoft SQL Server 7.0 and later databases. It is optimized for SQL Server access and communicates with SQL Server directly by using its native data transfer protocol. Always use this provider when you connect to SQL Server 7.0 or SQL Server 2000.
- The OLE DB .NET Data Provider. This is a managed provider for OLE DB data sources. It is slightly less efficient than the SQL Server .NET Data Provider, because it calls through the OLE DB layer when communicating with the database. Note that this provider does not support the OLE DB provider for Open Database Connectivity (ODBC), MSDASQL. For ODBC data sources, use the ODBC .NET Data Provider instead. A list of OLE DB providers compatible with ADO.NET is available on MSDN.
Other .NET data providers currently in beta include:
- The ODBC .NET Data Provider. Beta version 1.0 is currently available for download. It provides managed access to ODBC drivers in the same way that the OLE DB .NET Data Provider provides access to native OLE DB providers. More information about ODBC .NET and the beta download is available on MSDN.
- A managed provider for retrieving XML from SQL Server 2000. Web Release 2 of the XML for SQL Server feature set includes a managed provider specifically for retrieving XML from SQL Server 2000. More information about this web release is available on MSDN.
Namespaces

The types (classes, structs, enumerations, and so on) associated with each .NET data provider are organized in their own namespaces:
- System.Data.SqlClient contains the SQL Server .NET Data Provider types.
- System.Data.OleDb contains the OLE DB .NET Data Provider types.
- System.Data.Odbc contains the ODBC .NET Data Provider types.
- System.Data contains provider-independent types such as DataSet and DataTable.

Within its associated namespace, each provider supplies an implementation of the Connection, Command, DataReader, and DataAdapter objects. The SqlClient implementations are prefixed with "Sql" and the OleDb implementations are prefixed with "OleDb." For example, the SqlClient implementation of the Connection object is SqlConnection, and the OleDb equivalent is OleDbConnection. Similarly, the two incarnations of the DataAdapter object are SqlDataAdapter and OleDbDataAdapter.

Generic programming

If you are likely to target different data sources, consider programming against the IDbConnection, IDbCommand, IDataReader, and IDbDataAdapter interfaces located in the System.Data namespace. All implementations of the Connection, Command, DataReader, and DataAdapter objects must support these interfaces. For more information about implementing .NET data providers, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpconimplementingnetdataProvider.asp.

Figure 2 illustrates the data access stack and how ADO.NET relates to other data access technologies, including ADO and OLE DB. It also shows the two managed providers and the principal objects within the ADO.NET model.

Figure 2. Data access stack

For more information about the evolution of ADO to ADO.NET, see the article "Introducing ADO+: Data Access Services for the Microsoft .NET Framework" in the November 2000 issue of MSDN Magazine.

Stored procedures versus direct SQL

Most of the code fragments in this article use SqlCommand objects to call stored procedures to perform database operations. In some cases, you will not see the SqlCommand object, because the stored procedure name is passed directly to a SqlDataAdapter object; this still results in the creation of a SqlCommand object behind the scenes. The advantages of using stored procedures rather than embedded SQL statements are:

- Stored procedures generally improve performance, because the database can optimize the data access plan used by a procedure and cache it for subsequent reuse.
- Stored procedures can be individually secured within the database. A client can be granted permission to execute a stored procedure without having any permissions on the underlying tables.
- Stored procedures simplify maintenance, because it is generally easier to modify a stored procedure than to change a hard-coded SQL statement within a deployed component.
- Stored procedures add an extra level of abstraction over the underlying database schema. The client of the stored procedure is isolated from the implementation details of the procedure and from the underlying schema.
- Stored procedures can reduce network traffic, because SQL statements can be executed in batches rather than sending multiple requests from the client.

Properties versus constructor arguments

You can set specific property values of ADO.NET objects either through constructor arguments or by setting the properties directly. For example, the following code fragments are functionally equivalent.
SQLCommand cmd = new sqlcommand ("Select * from products", conn);
// the Above line is functionally equivalent to the following // Three Lines Which Set Properties Explicitly
SQLCommand cmd = new sqlcommand ();
cmd.connection = conn;
cmd.commandtext = "SELECT * from Products";
From a performance perspective, the difference between the two approaches is negligible, because setting and getting properties on .NET objects is much more efficient than performing similar operations on COM objects. The choice is one of personal preference and coding style. Setting the properties explicitly does, however, make the code easier to understand (particularly if you are not familiar with the ADO.NET object model) and easier to debug.

Note: Visual Basic developers were traditionally advised to avoid creating objects with the "Dim x As New ..." construct. In the COM world, this code could short-circuit the COM object creation process and lead to subtle and surprising bugs. In the .NET world, however, this is no longer an issue.

Managing Database Connections

Database connections are a critical, expensive, and limited resource, particularly in multi-tier web applications. It is imperative that you manage your connections correctly, because your approach can significantly affect the overall scalability of your application. You must also think carefully about where to store connection strings; you need a configurable and secure location. When managing database connections and connection strings, you should aim to:
- Help realize application scalability by multiplexing a pool of database connections across multiple clients.
- Adopt a configurable and high-performance connection pooling strategy.
- Use Windows authentication when accessing SQL Server.
- Avoid impersonation in the middle tier.
- Store connection strings securely.
- Open database connections late and close them early.

This section discusses connection pooling and helps you choose an appropriate pooling strategy; alternative approaches are also considered. It also explains how to manage, store, and administer your database connection strings. Finally, it presents two coding patterns that help ensure that connections are reliably closed and returned to the connection pool.

Connection pooling

Connection pooling allows an application to reuse an existing connection from a pool instead of repeatedly establishing a new connection to the database. This technique can significantly increase the scalability of an application, because a limited number of database connections can serve a much larger number of clients. It also improves performance, because the significant time required to establish a new connection is avoided.

Data access technologies such as ODBC and OLE DB provide their own forms of connection pooling, which are configurable to varying degrees. Both approaches are largely transparent to the database client application. OLE DB connection pooling is often referred to as session or resource pooling. For a general discussion of pooling within Microsoft Data Access Components (MDAC), see http://msdn.microsoft.com/library/en-us/dnmdac/html/poologing2.asp.

The ADO.NET data providers also supply transparent connection pooling, although the precise mechanics differ for each provider. This section discusses connection pooling as it relates to:
- The SQL Server .NET Data Provider
- The OLE DB .NET Data Provider

Pooling with the SQL Server .NET Data Provider

If you are using the SQL Server .NET Data Provider, use the connection pooling support offered by the provider. It is a highly efficient mechanism implemented by the provider itself in managed code. Pools are created on a per-process basis and are destroyed when the process ends. You can use this form of pooling completely transparently, but you should understand how pools are managed and be aware of the options available for fine-tuning them.

Configuring SQL Server .NET Data Provider connection pooling

You configure the pool by using a set of name-value pairs in the connection string. For example, you can control whether pooling is enabled (it is enabled by default), the maximum and minimum pool sizes, and the length of time a request to open a connection can be queued (blocked) before timing out. The following example string configures the maximum and minimum pool sizes.

"Server=(local); Integrated Security=SSPI; Database=Northwind; Max Pool Size=75; Min Pool Size=5"
When a connection is first opened, a pool is created and is populated with enough connections to satisfy the configured minimum. Further connections can then be added to the pool, up to the configured maximum. When the maximum is reached, requests to open a new connection are queued for a configurable period of time.

Choosing pool sizes

Being able to establish a maximum limit is very important for large-scale systems that manage the concurrent requests of many thousands of clients. You need to monitor connection pooling and the performance of your application to determine the optimum pool sizes for your system; the optimum size also depends on the hardware on which SQL Server runs. During development, you might want to reduce the default maximum pool size (currently 100) to help find connection leaks. If you establish a minimum pool size, you incur a small performance overhead when the pool is initially populated to that level, although the first few clients that connect will benefit. Note that the process of creating new connections is serialized, which means that your server will not be flooded with simultaneous requests while the pool is being initially populated. For more information about monitoring connection pooling, see the Monitoring connection pooling section of this article. For a complete list of connection pooling connection string keywords, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpconnectionpoolingforsqlservernetDataProvider.asp.

More information

When using SQL Server .NET Data Provider connection pooling, be aware of the following: connections are pooled through an exact-match algorithm on the connection string. The pooling mechanism is even sensitive to spaces between name-value pairs. For example, the following two connection strings result in two separate pools, because the second string contains an extra space character.

SqlConnection conn = new SqlConnection(
  "Integrated Security=SSPI;Database=Northwind");
conn.Open(); // Pool A is created

SqlConnection conn = new SqlConnection(
  "Integrated Security=SSPI; Database=Northwind");
conn.Open(); // Pool B is created (extra spaces in string)
In .NET Framework beta releases, connection pooling was disabled when running inside the debugger; outside the debugger, pooling worked correctly for both debug and release builds. The final release (RTM) of the .NET Framework removes this restriction, and connection pooling works in all cases.

Connections are pooled per transaction context: the pool is divided into sub-pools specific to each transaction, plus one sub-pool for connections not currently enlisted in a transaction. For threads associated with a particular transaction context, a connection is returned from the appropriate sub-pool (containing connections enlisted with that transaction). This makes working with enlisted connections a transparent process.

Pooling with the OLE DB .NET Data Provider

The OLE DB .NET Data Provider pools connections by using the underlying services of OLE DB resource pooling. You have several options for configuring resource pooling: you can use the connection string to enable or disable it, you can use the registry, or you can configure it programmatically. To avoid the deployment issues associated with the registry, avoid using the registry to configure OLE DB resource pooling. For more details about OLE DB resource pooling, see the Resource Pooling section of Chapter 19, "OLE DB Services," in the OLE DB Programmer's Reference on MSDN.

Managing connection pooling with pooled objects

As a seasoned Windows DNA developer, you may be inclined to disable OLE DB resource pooling and/or ODBC connection pooling and instead use COM+ object pooling as a technique for pooling database connections, mainly for two reasons:
- Pool sizes and thresholds can be explicitly configured (in the COM+ catalog).
- Performance is improved; a pooled-object approach can roughly double throughput in some scenarios.

However, because the SQL Server .NET Data Provider pools connections internally, you no longer need to develop your own object pooling mechanism when you use this provider, and you can therefore avoid the added complexity of manual transaction enlistment. If you are using the OLE DB .NET Data Provider, you can consider COM+ object pooling to benefit from its advanced configuration options and improved performance. If you develop a pooled object for this purpose, you must disable OLE DB resource pooling and automatic transaction enlistment (for example, by including "OLE DB Services=-4" in the connection string), and handle transaction enlistment within your pooled-object implementation.

Monitoring connection pooling

To monitor your application's use of connection pooling, you can use the Profiler tool that ships with SQL Server, or the Performance Monitor tool that ships with Microsoft Windows 2000.

To monitor connection pooling with SQL Server Profiler:
1. Click Start, point to Programs, point to Microsoft SQL Server, and then click Profiler to run SQL Server Profiler.
2. On the File menu, point to New, and then click Trace.
3. Supply connection details, and then click OK.
4. In the Trace Properties dialog box, click the Events tab.
5. In the Selected event classes list, make sure that the Audit Login and Audit Logout events are shown beneath Security Audit.
6. Click Run to start the trace. You will see Audit Login events when connections are established and Audit Logout events when connections are closed.

To monitor connection pooling with Performance Monitor:
1. Click Start, point to Programs, point to Administrative Tools, and then click Performance to run Performance Monitor.
2. Right-click the graph background, and then click Add Counters.
3. In the Performance object drop-down list, click SQL Server: General Statistics.
4. In the counter list, click User Connections.
5. Click Add, and then click Close.

Note: The RTM version of the .NET Framework will additionally include a set of ADO.NET performance counters (for use with Performance Monitor) that can be used to monitor connection pooling status for the SQL Server .NET Data Provider.

Managing security

Although database connection pooling improves the overall scalability of your application, it means you can no longer manage security at the database level, because the connection strings must be identical for pooling to work. If you need to track database operations on a per-user basis, consider adding a parameter through which you can pass the user identity to each operation, and manually log user actions in the database.

Using Windows authentication

Use Windows authentication when connecting to SQL Server, because it provides a number of benefits:

- Security is easier to manage, because you work with a single (Windows) security model rather than the separate SQL Server security model.
- You avoid embedding user names and passwords in connection strings.
- User names and passwords are not passed over the network in clear text.
- Logon security improves through password expiration periods, minimum lengths, and account lockout after multiple invalid logon requests.

Performance

Performance tests against .NET Beta 2 showed that opening a pooled database connection with Windows authentication took more time than with SQL Server authentication. However, although the cost of Windows authentication is higher, it is insignificant compared to the time spent executing a command or stored procedure. As a result, the advantages of Windows authentication listed above generally outweigh the slight performance penalty. Also, in the RTM version of the .NET Framework, the cost of Windows authentication when opening a pooled connection is no longer noticeably higher.

Avoid impersonation in the middle tier

Windows authentication requires a Windows account for database access. Although it might seem logical to impersonate the original caller in the middle tier, avoid doing so, because it defeats connection pooling and severely limits application scalability. To solve this problem, consider impersonating a limited number of Windows accounts (rather than the authenticated principal), each of which represents a particular role. For example, you could adopt the following approach:
- Create two Windows accounts: one for read operations and one for write operations. (Or, create separate accounts to mirror application-specific roles. For example, you might use one account for Internet users and another for internal operators and/or administrators.)
- Map each account to a SQL Server database role, and establish the necessary database permissions for each role.
- Use application logic in your data access layer to determine which Windows account to impersonate before you perform a database operation.

Note: Each account must be a domain account with Internet Information Services (IIS) and SQL Server in the same domain or in trusted domains, or you must create matching accounts (with the same name and password) on each computer.

Use TCP/IP for your network library

SQL Server 7.0 and later support Windows authentication for all network libraries. Use TCP/IP to gain configuration, performance, and scalability benefits. For more information about using TCP/IP, see the Connecting Through Firewalls section of this article.
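The following is an illustrative sketch only (not from the original article) of how a fixed, low-privileged Windows account might be impersonated before opening a trusted connection to SQL Server. The domain, account name, password source, and logon type are assumptions; real code should retrieve credentials from a secure store, choose the logon type appropriate to its deployment, and close the token handle when finished.

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

public class RoleImpersonation
{
    [DllImport("advapi32.dll", SetLastError = true)]
    private static extern bool LogonUser(
        string userName, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    private const int LOGON32_LOGON_INTERACTIVE = 2;
    private const int LOGON32_PROVIDER_DEFAULT = 0;

    // Returns an impersonation context; call Undo() on it when done.
    public static WindowsImpersonationContext ImpersonateDataAccount(
        string account, string domain, string password)
    {
        IntPtr token;
        // Account, domain, and password are placeholders supplied by the caller.
        if (!LogonUser(account, domain, password,
                       LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
                       out token))
        {
            throw new Exception("LogonUser failed for account: " + account);
        }
        return WindowsIdentity.Impersonate(token);
    }
}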
Storing connection strings

You have a number of options for storing connection strings, each with differing degrees of flexibility and security. Although hard-coding a connection string within source code offers the best performance, file system caching ensures that the performance penalty associated with storing the string externally is negligible. The extra flexibility provided by an external connection string (which supports administrator configuration) is preferred in virtually all cases. When choosing where to store connection strings, the two most important considerations are security and ease of configuration, followed closely by performance.

You can choose among the following locations for storing database connection strings:

- An application configuration file; for example, web.config for an ASP.NET web application
- A Universal Data Link (UDL) file (supported only by the OLE DB .NET Data Provider)
- The Windows registry
- A custom file
- The COM+ catalog, by using constructor strings (for serviced components only)

By using Windows authentication to access SQL Server, you can avoid storing user names and passwords in connection strings. If your security requirements are more stringent, consider storing the connection string in encrypted form. For ASP.NET web applications, storing the connection string in encrypted form within the web.config file represents a secure and configurable solution.

Note: You can set the Persist Security Info named value to false in the connection string to prevent the ConnectionString property of a SqlConnection or OleDbConnection object from returning security-sensitive details such as the password.

The following sections discuss how to store connection strings in each of these locations and present their relative advantages and disadvantages, allowing you to make an informed choice for your particular application scenario.

Using XML application configuration files

You can use the <appSettings> element to store a database connection string in the custom settings section of an application configuration file. This element supports arbitrary key-value pairs, as illustrated in the following fragment:
<configuration>
  <appSettings>
    <add key="DBConnStr"
         value="server=(local);Integrated Security=SSPI;database=Northwind"/>
  </appSettings>
</configuration>

Note: The <appSettings> element appears under the <configuration> element and not directly under <system.web>.

Advantages

- Ease of deployment. The connection string is deployed along with the configuration file through regular .NET xcopy deployment.
- Ease of programmatic access. The AppSettings property of the ConfigurationSettings class makes reading the configured database connection string at runtime an easy task.
- Support for dynamic update (ASP.NET only). If an administrator updates the connection string in the web.config file, the change is picked up the next time the string is accessed, which for a stateless component is likely to be the next time a client uses the component to make a data access request.

Disadvantages

- Security. Although the ASP.NET Internet Server Application Programming Interface (ISAPI) DLL prevents clients from directly accessing files with a .config extension, and NTFS file system permissions can be used to further restrict access, you may still want to avoid storing these details in clear text on a front-end web server. For added security, store the connection string in encrypted format within the configuration file.

More information

You can retrieve custom application settings by using the static AppSettings property of the System.Configuration.ConfigurationSettings class, as shown in the following code fragment, which assumes a custom key named DBConnStr:

using System.Configuration;

private string GetDBaseConnectionString()
{
  return ConfigurationSettings.AppSettings["DBConnStr"];
}

For more information about configuring .NET Framework applications, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpconfiguringnetFrameworkApplications.asp.

Using UDL files

The OLE DB .NET Data Provider supports Universal Data Link (UDL) file names in its connection string. You can pass the connection string to an OleDbConnection object as a constructor argument, or you can set it by using the object's ConnectionString property.

Note: The SQL Server .NET Data Provider does not support UDL files in its connection string, so this approach is available only when you use the OLE DB .NET Data Provider. To reference a UDL file in a connection string used by the OLE DB provider, use "File Name=name.udl".

Advantages

- Standard approach. You may already be managing UDL files.

Disadvantages

- Performance. The connection string (contained within the UDL file) is read and parsed each time the connection is opened.
- Security. UDL files are stored as plain text. You can secure these files by using NTFS file permissions, but doing so raises the same issues as with .config files.
- Not supported by SqlClient. UDL files are not supported by the SQL Server .NET Data Provider, and you want to use that provider when accessing SQL Server 7.0 or later.

More information

For administration purposes, you must ensure that administrators have read/write access to the UDL file and that the identity used to run the application has read access. For ASP.NET web applications, the application worker process runs under a default process identity, although this can be overridden by using the <processModel> element in the machine-wide configuration file (machine.config). You can also impersonate, optionally with a nominated account, by using the <identity> element in the web.config file. For web applications, make sure the UDL file is not held in a virtual directory, because that would make the file downloadable over the web.
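For illustration only (not from the original article), the following minimal sketch opens an OleDbConnection from a UDL file. The file path is an assumption and should point to a location outside any virtual directory; only the OLE DB .NET Data Provider supports "File Name=" in its connection string.

using System.Data.OleDb;

public class UdlExample
{
    // The UDL path shown is illustrative only.
    public static OleDbConnection OpenFromUdl()
    {
        OleDbConnection conn = new OleDbConnection(@"File Name=C:\Secure\app.udl;");
        conn.Open();
        return conn;
    }
}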
For more information about these and other security-related ASP.NET features, see http://msdn.microsoft.com/library/en-us/dnbda/html/authaspdotnet.asp.

Using the Windows registry

You can also store the connection string in a custom key in the Windows registry, although this is not recommended because of deployment issues.

Advantages

- Security. Access to selected registry keys can be managed by using access control lists (ACLs). For even higher levels of security, consider encrypting the data.
- Ease of programmatic access. .NET classes are available to support reading strings from the registry.

Disadvantages

- Deployment. The relevant registry settings must be deployed along with your application, which defeats to some extent the advantage of xcopy deployment.

Using a custom file

You can use a custom file to store the connection string. However, this technique offers no advantages, so it is not recommended.

Advantages

- None.

Disadvantages

- Extra coding. This approach requires extra coding and forces you to deal explicitly with concurrency issues.
- Deployment. The file must be copied along with the other ASP.NET application files. Avoid locating the file in the ASP.NET application directory or any of its subdirectories, to prevent it from being downloaded over the web.

Using construction arguments and the COM+ catalog

You can store the connection string in the COM+ catalog and have it automatically passed to your object by means of an object construction string. COM+ calls the object's Construct method immediately after instantiating the object, supplying the configured construction string.

Note: This approach works only for serviced components. Consider it only if your managed components use other services, such as distributed transaction support or object pooling.

Advantages

- Administration. An administrator can easily configure the connection string by using the Component Services MMC snap-in.

Disadvantages

- Security. The COM+ catalog is not considered a secure storage area (although you can restrict access to it by using COM+ roles), so you must not maintain connection strings there in clear text.
- Deployment. Entries in the COM+ catalog must be deployed along with your .NET application. If you are already using other enterprise services, such as distributed transactions or object pooling, storing the database connection string in the catalog adds no additional deployment overhead, because the COM+ catalog must be deployed anyway to support those services.
- Components must be serviced. Construction strings are available only for serviced components. Do not derive your component classes from the ServicedComponent class (which would make them serviced components) solely to enable construction strings.

More information

For more information about how to configure a .NET class for object construction, see "How to enable object construction for a .NET class" in the appendix. For more information about developing serviced components, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpConwritingServicedComponents.asp.

Connection usage patterns

Irrespective of the .NET data provider you use, you must always:

- Open the database connection as late as possible.
- Use the connection for as short a period as possible.
- Close the connection as soon as possible. The connection is not returned to the pool until it is closed through either its Close or Dispose method. You should also close a connection even if you detect that it has entered a broken state; this ensures it is returned to the pool and marked as invalid. The object pooler periodically scans the pool, looking for objects that have been marked as invalid.

To guarantee that the connection is closed before a method returns, consider the approaches illustrated in the following two code examples.
The first example uses a finally block; the second uses a C# using statement, which ensures that the object's Dispose method is called. The following code ensures that the connection is closed in a finally block. Note that this approach can be used from both Visual Basic .NET and C#, because Visual Basic .NET supports structured exception handling.

public void DoSomeWork()
{
  SqlConnection conn = new SqlConnection(connectionString);
  SqlCommand cmd = new SqlCommand("CommandProc", conn);
  cmd.CommandType = CommandType.StoredProcedure;

  try
  {
    conn.Open();
    cmd.ExecuteNonQuery();
  }
  catch (Exception e)
  {
    // Handle and log error
  }
  finally
  {
    conn.Close();
  }
}

The following code shows an alternative approach that uses a C# using statement. Note that Visual Basic .NET does not provide a using statement or any equivalent functionality.

public void DoSomeWork()
{
  // using guarantees that Dispose is called on conn, which will
  // close the connection.
  using (SqlConnection conn = new SqlConnection(connectionString))
  {
    SqlCommand cmd = new SqlCommand("CommandProc", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();
    cmd.ExecuteNonQuery();
  }
}

You can also apply this approach to other objects, such as SqlDataReader or OleDbDataReader, which must be closed before anything else can be done with the connection.

Error Handling

ADO.NET errors are generated and handled through the underlying structured exception handling support that is native to the .NET Framework. As a result, you handle errors within your data access code in the same way that you handle errors elsewhere in your application: exceptions are caught and handled by using standard .NET exception handling syntax and techniques. This section shows how to develop robust data access code and explains how to handle data access errors. It also provides exception handling guidance specific to the SQL Server .NET Data Provider.

.NET exceptions

The .NET data providers translate database-specific error conditions into standard exception types, which you should handle in your data access code. The database-specific error details are available to you through properties of the relevant exception object. All .NET exception types ultimately derive from the Exception base class in the System namespace. The .NET data providers throw provider-specific exception types. For example, the SQL Server .NET Data Provider throws SqlException objects whenever SQL Server returns an error condition. Similarly, the OLE DB .NET Data Provider throws exceptions of type OleDbException, which contain details exposed by the underlying OLE DB provider.

Figure 3 shows the .NET data provider exception hierarchy. Notice that the OleDbException class derives from the ExternalException class, the base class for all COM interop exceptions; its ErrorCode property stores the COM HRESULT generated by OLE DB.

Figure 3. .NET data provider exception hierarchy

Catching and handling .NET exceptions

To handle data access exception conditions, place your data access code within a try block and trap any exceptions generated by using catch blocks with the appropriate filters.
For example, when writing data access code that uses the SQL Server .NET Data Provider, you should catch exceptions of type SqlException, as shown in the following code:

try
{
  // Data access code
}
catch (SqlException sqlex) // more specific
{
}
catch (Exception ex)       // less specific
{
}

If you supply more than one catch statement with different filter criteria, remember to order them from most specific type to least specific type. That way, the most specific catch block is executed for any given exception type.

The SqlException class exposes properties that contain details of the exception condition. These include:

- The Message property, which contains text describing the error.
- The Number property, which contains the error number that uniquely identifies the type of error.
- The State property, which contains additional information about the error condition. This is typically used to indicate a particular occurrence of a specific error condition. For example, if a single stored procedure can generate the same error from more than one line, the state is used to identify the specific occurrence.
- The Errors collection, which contains detailed error information about the errors that SQL Server generates. This collection always contains at least one object of type SqlError.

The following code fragment illustrates how to handle a SQL Server error condition by using the SQL Server .NET Data Provider:

using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

// Method exposed by a Data Access Layer (DAL) component
public string GetProductName(int productID)
{
  SqlConnection conn = new SqlConnection(
    "server=(local);Integrated Security=SSPI;database=Northwind");
  // Enclose all data access code within a try block
  try
  {
    conn.Open();
    SqlCommand cmd = new SqlCommand("LookupProductName", conn);
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.Add("@ProductID", productID);
    SqlParameter paramPN =
      cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40);
    paramPN.Direction = ParameterDirection.Output;

    cmd.ExecuteNonQuery();
    // The finally code is executed before the method returns
    return paramPN.Value.ToString();
  }
  catch (SqlException sqlex)
  {
    // Handle data access exception condition
    // Log specific exception details
    LogException(sqlex);
    // Wrap the current exception in a more relevant
    // outer exception and re-throw the new exception
    throw new DALException(
      "Unknown ProductID: " + productID.ToString(), sqlex);
  }
  catch (Exception ex)
  {
    // Handle generic exception condition
    throw ex;
  }
  finally
  {
    conn.Close(); // Ensures connection is closed
  }
}

// Helper routine that logs SqlException details to the
// application event log
private void LogException(SqlException sqlex)
{
  EventLog el = new EventLog();
  el.Source = "CustomAppLog";
  string strMessage;
  strMessage = "Exception Number: " + sqlex.Number +
               " (" + sqlex.Message + ") has occurred";
  el.WriteEntry(strMessage);

  foreach (SqlError sqle in sqlex.Errors)
  {
    strMessage = "Message: " + sqle.Message +
                 " Number: " + sqle.Number +
                 " Procedure: " + sqle.Procedure +
                 " Server: " + sqle.Server +
                 " Source: " + sqle.Source +
                 " State: " + sqle.State +
                 " Severity: " + sqle.Class +
                 " LineNumber: " + sqle.LineNumber;
    el.WriteEntry(strMessage);
  }
}

Within the SqlException catch block, the code initially logs the exception details by using the LogException helper function. This function uses a foreach statement to enumerate the details within the Errors collection and records the error details to the error log.
The code in the catch block then wraps the SQL Server exception in an exception of type DALException, which is more meaningful to the callers of the GetProductName method, and uses the throw keyword to propagate the new exception back to the caller.

More information

For a complete list of SqlException class members, see http://msdn.microsoft.com/library/en-us/cpref/html/frlrfsystemDataSqlClientsqlexceptionMemberstopic.asp. For further guidance about developing custom exceptions, logging and wrapping .NET exceptions, and using various approaches to propagate exceptions, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/exceptdotnet.asp.

Generating errors from stored procedures

Transact-SQL (T-SQL) provides a RAISERROR function that you can use to generate custom errors and return them to the client. For ADO.NET clients, the SQL Server .NET Data Provider intercepts these errors and translates them into SqlError objects. The simplest way to use the RAISERROR function is to include the message text as the first parameter and then specify severity and state parameters, as shown in the following code fragment:

RAISERROR('Unknown Product ID: %s', 16, 1, @ProductID)

In this example, a substitution parameter is used to return the current product ID as part of the error message text; parameter two is the message severity and parameter three is the message state.

More information

To avoid hard-coding message text, you can add your own messages to the sysmessages table by using the sp_addmessage system stored procedure or SQL Server Enterprise Manager. You can then reference the message by passing its ID to the RAISERROR function. The message IDs you define must be greater than 50000, as shown in the following code fragment:

RAISERROR(50001, 16, 1, @ProductID)

For more information about the RAISERROR function, look up RAISERROR in the SQL Server Books Online.

Using the correct severity level

Choose error severity levels carefully and be aware of the impact of each level. Error severity levels range from 0 through 25 and are used to indicate the type of problem that SQL Server 2000 has encountered. In client code, you can obtain an error's severity by examining the Class property of the SqlError objects within the Errors collection of the SqlException class. Table 1 indicates the meaning of the various severity levels and their impact.

Table 1. Error severity levels: impact and meaning

Severity level | Connection is closed | Generates SqlException | Meaning
10 and below   | No  | No  | Informational messages; do not necessarily represent error conditions.
11-16          | No  | Yes | Errors that can be corrected by the user, for example by retrying the operation with amended input data.
17-19          | No  | Yes | Resource or system errors.
20-25          | Yes | Yes | Fatal system errors (including hardware errors). The client's connection is terminated.

Controlling automatic transactions

The SQL Server .NET Data Provider throws a SqlException for any error it encounters with a severity greater than 10. When a component that is part of an automatic (COM+) transaction detects a SqlException, the component must ensure that it votes to abort the transaction. This may or may not be an automatic process, depending on whether or not the method is marked with the AutoComplete attribute. For more information about handling SqlException objects in the context of automatic transactions, see the Transactions section of this article.
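To make this concrete, the following minimal sketch (not part of the original article) shows a serviced component whose method is not marked with [AutoComplete] and therefore votes explicitly on the automatic transaction when a SqlException is caught. The class and stored procedure names are assumptions used for illustration.

using System;
using System.Data;
using System.Data.SqlClient;
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class OrderData : ServicedComponent
{
    public void UpdateOrder(string connectionString, int orderId)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        try
        {
            conn.Open();
            // "UpdateOrderStatus" is an illustrative stored procedure name.
            SqlCommand cmd = new SqlCommand("UpdateOrderStatus", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@OrderID", orderId);
            cmd.ExecuteNonQuery();

            ContextUtil.SetComplete();   // vote to commit the transaction
        }
        catch (SqlException)
        {
            // An error with severity greater than 10 raised this exception;
            // vote to abort the automatic transaction, then re-throw.
            ContextUtil.SetAbort();
            throw;
        }
        finally
        {
            conn.Close();
        }
    }
}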
Informational messages

Severity levels of 10 and below are used to convey informational messages and do not cause a SqlException to be thrown. To retrieve informational messages, create an event handler and subscribe to the InfoMessage event exposed by the SqlConnection object. The event delegate is shown in the following code fragment:

public delegate void SqlInfoMessageEventHandler(object sender,
                                                SqlInfoMessageEventArgs e);

Message data is available through the SqlInfoMessageEventArgs object passed to your event handler. This object exposes an Errors property that contains a set of SqlError objects, one per informational message. The following code fragment illustrates how to register an event handler that is used to log informational messages.

public string GetProductName(int productID)
{
  SqlConnection conn = new SqlConnection(
    "server=(local);Integrated Security=SSPI;database=Northwind");
  try
  {
    // Register a message event handler
    conn.InfoMessage += new SqlInfoMessageEventHandler(MessageEventHandler);
    conn.Open();
    // Set up command object and execute it
    . . .
  }
  catch (SqlException sqlex)
  {
    // log and handle exception
    . . .
  }
  finally
  {
    conn.Close();
  }
}

// Message event handler
void MessageEventHandler(object sender, SqlInfoMessageEventArgs e)
{
  foreach (SqlError sqle in e.Errors)
  {
    // Log SqlError properties
    . . .
  }
}

Performance

This section introduces a set of common data access scenarios and, for each one, presents the solution that offers the best performance and scalability using ADO.NET data access code. Where appropriate, performance, functionality, and development effort are compared. The following functional scenarios are considered:

- Retrieving multiple rows. Retrieve a result set and iterate through the retrieved rows.
- Retrieving a single row. Retrieve a single row with a specified primary key.
- Retrieving a single item. Retrieve a single item from a specified row.
- Determining whether data exists. Check whether a row with a particular primary key exists. This is a variation of the single-item lookup scenario in which a simple Boolean return is sufficient.

Retrieving multiple rows

In this scenario, you want to retrieve a tabulated set of data and iterate through the retrieved rows to perform an operation. For example, you might retrieve a set of data, work with it in disconnected fashion, and pass it to a client application (perhaps as an XML document); alternatively, you might want to display the data in the form of an HTML table. To help determine the most appropriate approach, consider whether you require the added flexibility of the (disconnected) DataSet object, or the raw performance offered by the SqlDataReader object, which is ideally suited to data presentation in business-to-consumer (B2C) web applications. Figure 4 shows these two contrasting scenarios.

Note: The SqlDataAdapter used to populate a DataSet uses a SqlDataReader internally to access the data.

Figure 4. Multiple-row data access scenarios

Comparing the options

When retrieving multiple rows from a data source, you can:

- Use a SqlDataAdapter object to generate a DataSet or DataTable.
- Use a SqlDataReader object to provide a read-only, forward-only data stream.
- Use an XmlReader object to provide a read-only, forward-only XML data stream.

The choice between SqlDataReader and DataSet/DataTable is essentially one of performance versus functionality. The SqlDataReader offers optimum performance, while the DataSet provides additional functionality and flexibility.

Data binding

All three of these objects can act as data sources for data-bound controls, although the DataSet and DataTable can act as data sources for a wider variety of controls.
This is because the DataSet and DataTable implement the IListSource interface (yielding an IList), whereas the SqlDataReader implements the IEnumerable interface. A number of WinForms controls capable of data binding require a data source that implements IList. The difference stems from the type of scenario each object is designed for: the DataSet (which includes the DataTable) is a rich, disconnected structure suited to both web and desktop applications, whereas the data reader is optimized for web applications that require fast, forward-only access. Check the data source requirements for the particular control type that you want to bind to.

Passing data between application tiers

The DataSet provides a relational view of the data that can optionally be manipulated as XML, and it allows a disconnected cached copy of the data to be passed between application tiers and components. The SqlDataReader, however, offers optimum performance, because it avoids the performance and memory overhead associated with creating the DataSet. Remember that creating a DataSet object results in the creation of multiple sub-objects—including DataTable, DataRow, and DataColumn objects—and the collection objects that act as containers for these sub-objects.

Using a DataSet

Use a DataSet populated by a SqlDataAdapter object when:

- You require a disconnected, memory-resident cache of data so that you can pass it to other components or tiers within your application.
- You require an in-memory relational view of the data for XML or non-XML manipulation.
- You are working with data retrieved from multiple data sources, such as multiple databases, tables, or files.
- You want to update some or all of the retrieved rows and use the batch update facilities of the SqlDataAdapter.
- You want to perform data binding against a control that requires a data source that supports IList.

More information

If you use a SqlDataAdapter to generate a DataSet or DataTable, note the following:

- You do not need to explicitly open or close the database connection. The SqlDataAdapter's Fill method opens the database connection and closes it before the method returns. If the connection is already open, Fill leaves it open. If you require the connection for other purposes, consider opening it before calling the Fill method; you can thereby avoid unnecessary open/close operations and improve performance.
- Although you can repeatedly use the same SqlCommand object to execute the same command multiple times, do not reuse the same SqlCommand object to execute different commands.
- For a code example that shows how to use a SqlDataAdapter to populate a DataSet or DataTable, see "How to use a SqlDataAdapter to retrieve multiple rows" in the appendix.

Using a SqlDataReader

Use a SqlDataReader, obtained by calling the ExecuteReader method of a SqlCommand object, when:

- You are dealing with large volumes of data—too much to maintain in a single cache.
- You want to reduce the memory footprint of your application.
- You want to avoid the object creation overhead associated with the DataSet.
- You want to perform data binding with a control that supports a data source that implements IEnumerable.
- You want to streamline and optimize your data access.
- You are reading rows containing binary large object (BLOB) columns. You can use the SqlDataReader to pull BLOB data from the database in manageable chunks, rather than pulling all of it in one go. For more details about handling BLOB data, see the Handling BLOBs section of this article.

A simple example of this streamed, forward-only style of access is sketched below.
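The following minimal sketch (not from the original article) illustrates the SqlDataReader pattern. The stored procedure name and column layout are assumptions made for illustration; CommandBehavior.CloseConnection ties the connection lifetime to the reader, and typed accessors avoid unnecessary conversions.

using System;
using System.Data;
using System.Data.SqlClient;

public class ReaderExample
{
    // Assumes a "ListProducts" stored procedure returning ProductID (int)
    // and ProductName (varchar) columns; names are illustrative only.
    public static void DisplayProducts(string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        SqlCommand cmd = new SqlCommand("ListProducts", conn);
        cmd.CommandType = CommandType.StoredProcedure;

        conn.Open();
        // CloseConnection closes the connection when the reader is closed.
        SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
        try
        {
            while (reader.Read())
            {
                // Typed accessors avoid unnecessary type conversions.
                int id = reader.GetInt32(0);
                string name = reader.GetString(1);
                Console.WriteLine("{0}: {1}", id, name);
            }
        }
        finally
        {
            reader.Close(); // also closes the connection
        }
    }
}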
More information

If you use the SqlDataReader, note the following:

- The underlying database connection remains open and cannot be used for any other purpose while the data reader is active. Call Close on the SqlDataReader as soon as possible.
- There can be only one data reader per connection.
- You can close the connection explicitly when you are finished with the data reader, or you can tie the lifetime of the connection to the SqlDataReader object by passing the CommandBehavior.CloseConnection enumerated value to the ExecuteReader method. This indicates that the connection should be closed when the SqlDataReader is closed.
- When accessing data with the reader, use the typed accessor methods (such as GetInt32 and GetString) if you know the column's underlying data type, because they reduce the amount of type conversion required when reading column data.
- To avoid unnecessary data being pulled from server to client, if you want to close the reader and discard any remaining results, call the Cancel method of the command object before calling Close on the reader. Cancel ensures that the results are discarded on the server and are not pulled unnecessarily to the client. Calling Close on the data reader without doing so causes the reader to unnecessarily pull the remaining results to empty the data stream.
- If you want to obtain output or return values from a stored procedure, note that they are not available until after you call Close on the reader.
- For a code example that shows how to use a SqlDataReader, see "How to use a SqlDataReader to retrieve multiple rows" in the appendix.

Using an XmlReader

Use an XmlReader, obtained by calling the ExecuteXmlReader method of a SqlCommand object, when:

- You want to work with the retrieved data as XML, but you don't want to incur the performance overhead of creating a DataSet and you don't require a disconnected cache of the data.
- You want to exploit the SQL Server FOR XML clause, which allows XML fragments (that is, XML documents with no root element) to be retrieved from the database in a flexible way. For example, this approach lets you specify precise element names, whether an element-centric or attribute-centric schema should be used, whether a schema should be returned with the XML data, and so on.

More information

If you use the XmlReader, note the following:

- The connection must remain open while you read data from the XmlReader. The ExecuteXmlReader method of the SqlCommand object does not currently support the CommandBehavior.CloseConnection enumerated value, so you must explicitly close the connection when you are finished with the reader.
- For a code example that shows how to use an XmlReader, see "How to use an XmlReader to retrieve multiple rows" in the appendix.

Retrieving a single row

In this scenario, you want to retrieve a single row of data containing a specified set of columns from a data source. For example, you have a customer ID and want to look up the related customer details, or you have a product ID and want to retrieve the product information.

Comparing the options

If you want to perform data binding with the single row of data retrieved from the data source, you can use a SqlDataAdapter to populate a DataSet or DataTable in the same way as described for the multiple-row scenario discussed earlier. However, unless you specifically require DataSet or DataTable functionality, you should avoid creating these objects.
If you need to retrieve a single row, use one of the following methods:

- Use stored procedure output parameters.
- Use a SqlDataReader object.

Both approaches avoid the overhead of creating a result set on the server and a DataSet on the client. The relative performance of each approach depends on the level of stress and on whether or not connection pooling is enabled. When connection pooling is enabled, performance tests have shown the stored procedure approach and the SqlDataReader approach to perform similarly under high-stress conditions (more than 200 simultaneous connections).

Using stored procedure output parameters

Use stored procedure output parameters when you want to retrieve a single row from a multi-tier web application in which connection pooling is enabled.

More information

For a code example that shows how to use stored procedure output parameters, see "How to use stored procedure output parameters to retrieve a single row" in the appendix.

Using a SqlDataReader

Use a SqlDataReader object when:

- You require metadata in addition to the data values. You can use the GetSchemaTable method of the data reader to obtain column metadata.
- You are not using connection pooling. When connection pooling is disabled, the SqlDataReader is a good choice under all stress conditions; performance tests have shown it to be approximately 20 percent faster than the stored procedure approach with 200 browser connections.

More information

If you know that your query returns only a single row, use the CommandBehavior.SingleRow enumerated value when calling the ExecuteReader method of the SqlCommand object. Some providers, such as the OLE DB .NET Data Provider, use this hint to optimize performance; for example, that provider performs binding by using the IRow interface (if it is available) rather than the more expensive IRowset interface. This argument has no effect with the SQL Server .NET Data Provider.

Also, when using the SqlDataReader, always retrieve output values by using the typed accessor methods of the SqlDataReader object, such as GetString and GetDecimal, to avoid unnecessary type conversions. For a code example that shows how to use a SqlDataReader to retrieve a single row, see "How to use a SqlDataReader to retrieve a single row" in the appendix.

Retrieving a single item

In this scenario, you want to retrieve a single item of data. For example, you might want to look up a single product name, given its product ID, or look up a customer's credit rating, given the customer's name. In such scenarios, you generally do not want to incur the overhead of creating a DataSet, or even a DataTable, just to retrieve a single item.

You might also simply want to check whether a particular row exists in the database. For example, when a new user registers on a web site, you need to check whether the chosen user name already exists. This is a special case of the single-item lookup, in which a simple Boolean return is sufficient.

Comparing the options

Consider the following options when you retrieve a single item of data from a data source:

- Use the ExecuteScalar method of a SqlCommand object with a stored procedure.
- Use a stored procedure output or return parameter.
- Use a SqlDataReader object.

The ExecuteScalar method returns the data item directly, because it is designed for queries that return only a single value. It requires less code than the stored procedure output parameter and SqlDataReader approaches; a sketch of its use follows.
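The following minimal sketch (not from the original article) shows the ExecuteScalar pattern; the stored procedure name and parameter are illustrative assumptions.

using System;
using System.Data;
using System.Data.SqlClient;

public class ScalarExample
{
    // Assumes a "LookupProductNameScalar" stored procedure that selects a
    // single product name for the supplied ID; names are illustrative only.
    public static string GetProductName(string connectionString, int productID)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand("LookupProductNameScalar", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@ProductID", productID);

            conn.Open();
            // ExecuteScalar returns the first column of the first row;
            // any other columns or rows are ignored.
            object result = cmd.ExecuteScalar();
            return (result == null) ? null : result.ToString();
        }
    }
}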
For optimum performance, however, use a stored procedure output or return parameter, because performance tests have shown that the stored procedure approach offers consistent performance from low-stress through high-stress conditions (up to 200 simultaneous browser connections).

More information

If the query executed with the ExecuteScalar method returns multiple columns and/or rows, the method returns only the first column of the first row. For a code example that shows how to use ExecuteScalar, see "How to use ExecuteScalar to retrieve a single item" in the appendix. For a code example that shows how to use a stored procedure output or return parameter to retrieve a single item, see "How to use a stored procedure output or return parameter to retrieve a single item" in the appendix. For a code example that shows how to use a SqlDataReader to retrieve a single item, see "How to use a SqlDataReader to retrieve a single item" in the appendix.

Connecting Through Firewalls

You will often want to configure Internet applications to connect to SQL Server through a firewall. For example, a key architectural component of many web applications and their firewalls is the perimeter network (also known as the DMZ or demilitarized zone), which is used to isolate front-end web servers from internal networks. Connecting to SQL Server through a firewall requires specific configuration of the firewall, the client, and the server. SQL Server provides the Client Network Utility and the Server Network Utility to assist with this configuration.

Choosing a network library

Use the TCP/IP SQL Server network library to simplify configuration when connecting through a firewall; this is the default library for SQL Server 2000 installations. If you are running earlier versions of SQL Server, use the Client Network Utility and Server Network Utility to check that TCP/IP is configured as the default network library on both the client and the server. In addition to the configuration benefit, using the TCP/IP library means that:

- You benefit from improved performance with high volumes of data, as well as improved scalability.
- You avoid the additional security issues associated with named pipes.

You must configure TCP/IP on both the client and server computers. Because most firewalls restrict the set of ports through which they allow traffic to pass, you must also give careful consideration to the port numbers that SQL Server uses.

Configuring the server

Default instances of SQL Server listen on port 1433. Named instances of SQL Server 2000, however, dynamically allocate a port number when they are first started. Your network administrator will not want to open a range of port numbers in the firewall; therefore, when you use a named instance of SQL Server behind a firewall, use the Server Network Utility to configure the instance to listen on a specific port. The administrator can then configure the firewall to allow traffic to the specific IP address and port that the server instance is listening on.

Note: The source port number used by the client network library is dynamically assigned in the range 1024 through 5000. This is standard practice for TCP/IP client applications, but it means the firewall must allow traffic from any port within this range. For more information about the ports used by SQL Server, see the article "INF: TCP Ports Needed for Communication to SQL Server Through a Firewall" on the Microsoft Product Support Services web site.

Dynamic discovery of named instances

If you change the default port that SQL Server listens on, you must configure the client to connect to this port.
For more information, see Configuring the client later in this article. If you change the port number of the default instance of SQL Server 2000 and do not reconfigure the client, the client receives a connection error. If you have multiple SQL Server instances, the latest version of the MDAC data access stack (2.6) performs dynamic discovery and locates named instances through User Datagram Protocol (UDP) negotiation over UDP port 1434. Although this may work in a development environment, it is unlikely to work in a production environment, because the firewall will typically block the UDP negotiation traffic. To avoid this, always configure the client to connect to the configured destination port number.

Configuring the client
You should configure the client to use the TCP/IP network library to connect to SQL Server, and you should make sure that the client library uses the correct destination port number.

Using the TCP/IP network library
You can configure the client by using the SQL Server Client Network Utility. In some installations, this utility may not be available on the client computer (for example, on a Web server). In this case you can do one of the following:

- Specify the network library by supplying the "Network Library=dbmssocn" name-value pair in the connection string. The string "dbmssocn" identifies the TCP/IP (sockets) library. Note that when you use the SQL Server .NET Data Provider, the network library setting defaults to "dbmssocn".
- Modify the registry on the client computer to make TCP/IP the default library. For more information about configuring the SQL Server network library without using the Client Network Utility, see HOWTO: Change SQL Server Default Network Library Without Using Client Network Utility (Q250550).

Specifying a port
If your instance of SQL Server is configured to listen on a port other than the default 1433, you can specify the port number to connect to in one of two ways:

- Use the Client Network Utility.
- Supply the port number in the "Server" or "Data Source" name-value pair of the connection string, using a string in the following format: "Data Source=ServerName,PortNumber"

Note that ServerName may be either an IP address or a Domain Name System (DNS) name. For optimum performance, use an IP address to avoid a DNS lookup.
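The following fragment is a minimal sketch that pulls these connection string settings together; the IP address, the port 2433, and the database name are illustrative placeholders and are not taken from this article.

using System.Data.SqlClient;

// Sketch only: the address, port, and database name below are placeholders.
public SqlConnection OpenThroughFirewall()
{
    SqlConnection conn = new SqlConnection(
        "Data Source=192.168.1.10,2433;" +   // IP address avoids a DNS lookup
        "Network Library=dbmssocn;" +        // force the TCP/IP (sockets) library
        "Initial Catalog=Northwind;" +
        "Integrated Security=SSPI");
    conn.Open();
    return conn;
}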
Distributed transactions
If you have developed serviced components that use COM+ distributed transactions and the Microsoft Distributed Transaction Coordinator (DTC), you may also need to configure the firewall to allow DTC traffic to flow between separate DTC instances, and between the DTC and resource managers such as SQL Server. For more information about opening ports for the DTC, see INFO: Configuring Microsoft Distributed Transaction Coordinator (DTC) to Work Through a Firewall.

Processing BLOBs
Today, many applications need to deal with data formats such as graphics, sound, and video, in addition to the more traditional string and numeric data. These data formats differ widely, but from a storage perspective they can all be treated as blocks of binary data, usually referred to as BLOBs (Binary Large OBjects). SQL Server provides the binary, varbinary, and image data types to store BLOBs. Despite the name, BLOB data can also be character based. For example, you might want to store an arbitrarily long text annotation associated with a particular row; SQL Server provides the ntext and text data types for this purpose. In general, use varbinary for binary data up to 8 KB in size, and image for binary data that exceeds that size. Table 2 summarizes the main characteristics of each data type.

Table 2. Data type characteristics

Data type | Size                                                              | Description
binary    | 1-8 KB. Storage size is the specified length plus 4 bytes.       | Fixed-length binary data
varbinary | 1-8 KB. Storage size is the actual length of the data plus 4 bytes. | Variable-length binary data
image     | 0-2 GB                                                            | Large, variable-length binary data
text      | 0-2 GB                                                            | Variable-length character data
ntext     | 0-2 GB                                                            | Variable-length Unicode (wide) character data

Storing BLOB data in the database
SQL Server 7.0 and later have improved the way BLOB data is stored in the database. One reason for this is that the database page size has increased to 8 KB. As a result, text or image data smaller than 8 KB no longer has to be stored in a separate binary tree structure of pages, but can be stored in a single row. This means that reading and writing text, ntext, or image data can be as fast as reading or writing character and binary strings. Beyond 8 KB, a pointer is held within the row and the data itself is stored in a binary tree structure of separate data pages, which inevitably has an impact on performance. For more information about forcing text, ntext, and image data to be stored in a single row, see the Using Text and Image Data topic in SQL Server Books Online.

A frequently used alternative for handling BLOB data is to store the BLOB data in the file system and to store a pointer (usually a Uniform Resource Locator link) in a database column to reference the appropriate file. For SQL Server 7.0, storing BLOB data in the file system outside the database can improve performance. However, improved BLOB support in SQL Server 2000, together with ADO.NET support for reading and writing BLOB data, makes storing BLOB data in the database a feasible approach.

Advantages of storing BLOB data in the database
Storing BLOB data in the database offers a number of advantages:

- It is easier to keep the BLOB data synchronized with the remaining items in the row.
- The BLOB data is backed up with the database; having a single storage stream makes management easier.
- The BLOB data can be accessed through the XML support in SQL Server 2000, which returns a base 64-encoded representation of the data in the XML stream.
- SQL Server full-text search (FTS) operations can be performed against columns that contain fixed- or variable-length character (including wide character) data. FTS operations can also be performed against formatted text-based data contained within image fields, such as Word or Excel documents.

Writing BLOB data to the database
The following code shows how to use ADO.NET to write binary data obtained from a file to an image field in SQL Server.

public void StorePicture(string filename)
{
    // Read the file into a byte array
    FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);
    byte[] imageData = new byte[fs.Length];
    fs.Read(imageData, 0, (int)fs.Length);
    fs.Close();

    SqlConnection conn = new SqlConnection("");
    SqlCommand cmd = new SqlCommand("StorePicture", conn);
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.Add("@filename", filename);
    cmd.Parameters["@filename"].Direction = ParameterDirection.Input;
    cmd.Parameters.Add("@blobdata", SqlDbType.Image);
    cmd.Parameters["@blobdata"].Direction = ParameterDirection.Input;
    // Store the byte array within the image field
    cmd.Parameters["@blobdata"].Value = imageData;

    try
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}
Reading BLOB data from the database
When you create a SqlDataReader object with the ExecuteReader method to read a row that contains BLOB data, use the CommandBehavior.SequentialAccess enumeration value. Without it, the reader pulls an entire row at a time from the server to the client; if the row contains BLOB data, this can consume a large amount of memory. With the enumeration value you gain finer control, because the BLOB data is pulled only when it is referenced (for example, through the GetBytes method, which lets you control the number of bytes read). This is shown in the following code fragment.

// Assume previously established command and connection
// The command SELECTs the IMAGE column from the table
conn.Open();
SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
reader.Read();

// Get the size of the image data - pass null as the byte array parameter
long byteSize = reader.GetBytes(0, 0, null, 0, 0);

// Allocate a byte array to hold the image data
byte[] imageData = new byte[byteSize];
long bytesRead = 0;
int curPos = 0;

while (bytesRead < byteSize)
{
    // chunkSize is an arbitrary, application-defined value
    bytesRead += reader.GetBytes(0, curPos, imageData, curPos, chunkSize);
    curPos += chunkSize;
}

// The byte array 'imageData' now contains the BLOB from the database

Note Using CommandBehavior.SequentialAccess requires you to access column data in a strict sequential order. For example, if the BLOB data is in column 3 and you also need data from columns 1 and 2, you must read columns 1 and 2 before you read column 3.

Transactions
Virtually all commercially oriented applications that update data sources require transaction support. Transactions are used to guarantee the integrity of a system's state contained within one or more data sources by providing the four basic guarantees known by the familiar ACID acronym: atomicity, consistency, isolation, and durability.

For example, consider a Web-based retail application that processes purchase orders. Each order requires three distinct operations that involve three database updates:

- The inventory level must be reduced by the quantity ordered.
- The customer's credit level must be debited by the purchase amount.
- A new order must be added to the orders database.

It is essential that these three distinct operations are performed as a unit and in an atomic fashion. They must all succeed, or none of them must be performed - any other outcome would compromise data integrity. Transactions provide this and other guarantees.

For more information about basic transaction principles, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpcontransactionprocessingfundamentals.asp.

There are a number of approaches you can take to incorporate transaction management into your data access code. Each falls into one of two basic programming models:

- Manual transactions. You write code that directly uses the transaction support features of either ADO.NET or Transact-SQL, in your component code or stored procedures respectively.
- Automatic (COM+) transactions. You add declarative attributes to your .NET classes to specify the transactional requirements of your objects at run time. This model allows you to easily configure multiple components to perform work within the same transaction.
Although the automatic transaction model greatly simplifies distributed transaction processing, both models can be used to perform local transactions (that is, transactions performed against a single resource manager such as SQL Server 2000) or distributed transactions (that is, transactions performed against multiple resource managers located on remote computers).

You may be tempted to use automatic (COM+) transactions, even for local transactions, in order to benefit from the easier programming model. This benefit is most pronounced in systems where a single transaction spans multiple components that perform database updates. However, in many scenarios you should avoid the additional overhead and performance penalty that this transaction model incurs. This section offers guidance to help you choose the most appropriate model for your particular application scenario.

Choosing a transaction model
Before choosing a transaction model, consider whether you need transactions at all. Transactions are among the most expensive resources consumed by server applications, and they reduce scalability when used unnecessarily. Consider the following guidelines for the use of transactions:

- Perform transactions only when you need to acquire locks across operations and need to enforce ACID rules.
- Keep transactions as short as possible to minimize the time that database locks are held.
- Never place the client in control of the transaction lifetime.
- Do not use a transaction for an individual SQL statement. SQL Server automatically runs each statement as an individual transaction.

Automatic vs. manual transactions
Although the programming model is simplified by automatic transactions, particularly when multiple components perform database updates, manual local transactions are always significantly faster because they do not require interaction with the Microsoft DTC. This is true even when you use automatic transactions against a single local resource manager (such as SQL Server), although the performance degradation is reduced, because manual local transactions avoid all unnecessary interprocess communication with the DTC.

Use manual transactions when:

- You are performing a transaction against a single database.

Use automatic transactions when:

- You require a single transaction to span multiple remote databases.
- You require a single transaction to encompass multiple resource managers, such as a database and a Windows 2000 Message Queuing (formerly known as MSMQ) resource manager.

Note Avoid mixing the two transaction models. It is preferable to use one or the other. In application scenarios where performance is deemed sufficient, it is reasonable to opt for automatic transactions (even against a single database) to simplify the programming model. Automatic transactions make it easy for multiple components to perform operations that are part of the same transaction.

Using manual transactions
With manual transactions, you write code that uses the transaction support features of either ADO.NET or Transact-SQL directly in your component code or stored procedures, respectively. In most cases you should opt for controlling transactions in your stored procedures, because this approach offers superior encapsulation and, from a performance perspective, is comparable to performing transactions with ADO.NET code.
Performing manual transactions with ADO.NET
ADO.NET supports a transaction object that you can use to begin a new transaction and then explicitly control whether it should be committed or rolled back. The transaction object is associated with a single database connection and is obtained by calling the BeginTransaction method of the connection object. Calling this method does not implicitly cause subsequent commands to be issued in the context of the transaction; you must explicitly associate each command with the transaction by setting the Transaction property of the command. You can associate multiple command objects with the same transaction object, thereby grouping multiple operations against a single database into a single transaction.

For an example of using ADO.NET transaction code, see How to code ADO.NET manual transactions in the appendix.

More information
The default isolation level of an ADO.NET manual transaction is Read Committed, which means the database holds shared locks while data is being read, but the data can be changed by other transactions before the current transaction completes. This can potentially lead to non-repeatable reads or phantom rows. You can select a different isolation level by passing a value from the IsolationLevel enumeration to the BeginTransaction method of the connection object.

Choose the isolation level for a transaction carefully; it is a trade-off between data consistency and performance. The highest isolation level (Serializable) offers absolute data consistency, but at the price of overall system throughput. Lower isolation levels make an application more scalable, but at the same time they increase the possibility of errors resulting from data inconsistency. Lower isolation levels are appropriate for systems that read data most of the time and write data only rarely. For valuable information about choosing an appropriate transaction isolation level, see Inside SQL Server 2000 by Kalen Delaney, published by Microsoft Press.
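As a quick illustration of the BeginTransaction overload mentioned above, the following sketch is a variation of the appendix sample How to code ADO.NET manual transactions; it reuses the SimpleBank database and its Credit and Debit stored procedures from that sample, and only the requested isolation level is new. It is not a separate listing from the original article.

using System.Data;
using System.Data.SqlClient;

// Sketch only: the appendix TransferMoney sample, modified to request the
// Serializable isolation level instead of the Read Committed default.
public void TransferMoneySerializable(string toAccount, string fromAccount, decimal amount)
{
    using (SqlConnection conn = new SqlConnection(
               "server=(local);Integrated Security=SSPI;database=SimpleBank"))
    {
        conn.Open();
        // Request a stricter isolation level than the default
        SqlTransaction trans = conn.BeginTransaction(IsolationLevel.Serializable);

        SqlCommand cmdCredit = new SqlCommand("Credit", conn);
        cmdCredit.CommandType = CommandType.StoredProcedure;
        cmdCredit.Parameters.Add(new SqlParameter("@AccountNo", toAccount));
        cmdCredit.Parameters.Add(new SqlParameter("@Amount", amount));
        cmdCredit.Transaction = trans;   // each command must be explicitly

        SqlCommand cmdDebit = new SqlCommand("Debit", conn);
        cmdDebit.CommandType = CommandType.StoredProcedure;
        cmdDebit.Parameters.Add(new SqlParameter("@AccountNo", fromAccount));
        cmdDebit.Parameters.Add(new SqlParameter("@Amount", amount));
        cmdDebit.Transaction = trans;    // associated with the transaction

        try
        {
            cmdCredit.ExecuteNonQuery();
            cmdDebit.ExecuteNonQuery();
            trans.Commit();
        }
        catch
        {
            trans.Rollback();
            throw;
        }
    }
}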
Performing manual transactions with stored procedures
You can also control manual transactions directly by using Transact-SQL statements in your stored procedures. For example, you could perform the transactional work in a single stored procedure that uses Transact-SQL transaction statements such as BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION.

More information
If necessary, you can control the isolation level of a transaction within a stored procedure by using the SET TRANSACTION ISOLATION LEVEL statement. Read Committed is the SQL Server default. For more information about SQL Server isolation levels, see Isolation Levels in the "Accessing and Changing Relational Data" section of SQL Server Books Online. For an example that shows how to perform a transactional update with Transact-SQL statements, see How to perform transactions with Transact-SQL in the appendix.

Using automatic transactions
Automatic transactions simplify the programming model because they do not require you to explicitly begin a new transaction, or to explicitly commit or abort it. However, the most significant advantage of automatic transactions is that they work in conjunction with the DTC, which allows a single transaction to span multiple distributed data sources. This advantage can be significant in large-scale distributed applications. Although it is possible to control distributed transactions manually by programming the DTC directly, automatic transactions greatly simplify the workload and are designed for component-based systems. For example, multiple components can easily be declaratively configured to perform work that comprises a single transaction.

Automatic transactions rely on the distributed transaction support features provided by COM+. As a result, only serviced components (that is, components derived from the ServicedComponent class) can use automatic transactions.

To configure a class for automatic transactions:

1. Derive the class from the ServicedComponent class located in the System.EnterpriseServices namespace.
2. Define the transaction requirements of the class by using the Transaction attribute. An enumerated value from TransactionOption determines how the class is configured in the COM+ catalog. Other properties that can be set with this attribute include the transaction isolation level and time-out.
3. To avoid having to vote explicitly on the transaction outcome, annotate methods with the AutoComplete attribute. If such a method throws an exception, the transaction is automatically aborted. Note that, if necessary, you can still vote on the transaction outcome directly. For more details, see the Determining transaction outcome section later in this article.

More information
For more information about COM+ automatic transactions, search for "Automatic transactions through COM+" in the Platform SDK documentation. For an example of a transactional .NET class, see How to code a transactional .NET class in the appendix.

Configuring transaction isolation levels
The transaction isolation level for COM+ version 1.0 - that is, COM+ running on Windows 2000 - is Serializable. This offers the highest degree of isolation, but at the cost of performance. Overall system throughput is reduced because the resource managers (typically databases) involved must hold read and write locks for the duration of the transaction. During this time, all other transactions are blocked, which can have a significant impact on the application's ability to scale.

COM+ version 1.5, released with Microsoft Windows .NET, allows the transaction isolation level to be configured in the COM+ catalog on a per-component basis. The setting associated with the root component of a transaction determines the isolation level of that transaction. In addition, internal subcomponents that are part of the same transaction stream must not have an isolation level higher than that defined by the root component. If they do, an error occurs when the subcomponent is instantiated.

For .NET managed classes, the Transaction attribute supports the publicly available Isolation property. You can use this property to specify a particular isolation level, as shown in the following code:

[Transaction(Isolation = TransactionIsolationLevel.ReadCommitted)]
public class Account : ServicedComponent
{
    . . .
}

More information
For more information about configuring transaction isolation levels and other Windows .NET COM+ enhancements, see the MSDN Magazine (2001) article "Windows XP: Make Your Components More Robust with COM+ 1.5 Innovations".

Determining transaction outcome
The outcome of an automatic transaction is governed by the state of the transaction abort flag, together with the consistent flags in all of the context objects within a single transaction stream. The transaction outcome is determined when the root object in the transaction stream is deactivated (and control is returned to the caller). This is illustrated in Figure 5, which shows a classic bank funds transfer transaction.
Figure 5. Transaction stream and context objects for a bank funds transfer

The transaction outcome is determined when the root object (the transfer object in this example) is deactivated and the client's method call returns. If the consistent flag in any context is set to false, or if the transaction abort flag is set to true, the underlying physical DTC transaction is aborted.

You can control the transaction outcome from a .NET object in one of two ways:

- Annotate methods with the AutoComplete attribute and let .NET Enterprise Services automatically cast the transaction vote for you. With this attribute, if the method throws an exception, the consistent flag is automatically set to false (which ultimately aborts the transaction). If the method returns without throwing an exception, the consistent flag is set to true, which indicates that the component is happy for the transaction to commit. This does not guarantee that the transaction will commit, because it depends on the votes of the other objects in the same transaction stream.
- Call the static SetComplete or SetAbort method of the ContextUtil class, which sets the consistent flag to true or false, respectively.

SQL Server errors with a severity greater than 10 cause the managed data provider to throw an exception of type SqlException. If your method catches and handles such an exception, make sure you either abort the transaction manually, or (for methods marked [AutoComplete]) make sure the exception is propagated to the caller.

AutoComplete methods
For methods marked with the AutoComplete attribute, do either of the following:

- Propagate the SqlException back up the call stack.
- Wrap the SqlException in an outer exception and propagate that to the caller. You might want to wrap it in an exception type that is more meaningful to the caller.

Failure to propagate the exception results in the object not voting to abort the transaction, despite the database error. This means that other successful operations performed by other objects sharing the same transaction might be committed.

The following code catches a SqlException and then propagates it directly to the caller. The transaction will ultimately be aborted, because the object's consistent flag is automatically set to false when the object is deactivated.

[AutoComplete]
void SomeMethod()
{
    try
    {
        // Open the connection, and perform database operation
        . . .
    }
    catch (SqlException sqlex)
    {
        LogException(sqlex);  // Log the exception details
        throw;                // Rethrow the exception, causing the consistent
                              // flag to be set to false
    }
    finally
    {
        // Close the database connection
        . . .
    }
}

Non-AutoComplete methods
For methods that are not marked with the AutoComplete attribute, you must:

- Call ContextUtil.SetAbort within the catch block to abort the transaction. This sets the consistent flag to false.
- Call ContextUtil.SetComplete if no exception occurs, to commit the transaction. This sets the consistent flag to true (its default state).

The following code illustrates this approach.

void SomeOtherMethod()
{
    try
    {
        // Open the connection, and perform database operation
        . . .
        ContextUtil.SetComplete();  // Manually vote to commit the transaction
    }
    catch (SqlException sqlex)
    {
        LogException(sqlex);     // Log the exception details
        ContextUtil.SetAbort();  // Manually vote to abort the transaction
        // Exception is handled at this point and is not propagated to the caller
    }
    finally
    {
        // Close the database connection
        . . .
    }
}

Note If you have multiple catch blocks, it is easier to call ContextUtil.SetAbort once at the start of the method and call ContextUtil.SetComplete at the end of the try block. With this approach, you do not need to repeat the call to ContextUtil.SetAbort in every catch block.
The setting of the consistent flag made by either approach takes effect only when the method returns. Exceptions (or at least exceptions of certain types) should always be propagated up the call stack, because they make the calling code aware that the transaction will fail. This allows the calling code to make optimizations. For example, in a bank funds transfer, the transfer component could decide not to perform the credit operation if the debit operation has already failed.

If you set the consistent flag to false and then return without throwing an exception, the calling code has no way of knowing that the transaction is certain to fail. Although you could return a Boolean value or set a Boolean output parameter, you should be consistent and throw an exception to indicate the error condition. This results in cleaner, more consistent code with a standard approach to error handling.

Paging through data
Paging through data is a common requirement in distributed applications. For example, a user might be presented with a list of books whose full contents do not fit on a single page. The user needs to perform familiar operations on the data, such as viewing the next or previous page of data, or jumping to the first or last page of the list. This section discusses the options for implementing this functionality, together with the effect of each option on performance and scalability.

Comparing the options
The options for paging through data are:

- Use the Fill method of the SqlDataAdapter to fill a DataSet with the results of a query.
- Use ADO through COM interoperability and use a server-side cursor.
- Implement the paging manually by using a stored procedure.

The optimum choice depends on the following factors:

- Scalability requirements
- Performance requirements
- Network bandwidth
- Database server memory and processing power
- Memory and processing power of the middle-tier server
- The number of rows returned by the paging query

Performance tests show that the manual approach using a stored procedure offers the best performance across all stress levels. However, because the manual approach performs its work on the server, server performance can become a critical factor if a large proportion of the site's functionality relies on data paging. To make sure this approach suits your particular environment, test all of the options against your specific requirements. The various options are discussed below.

Using SqlDataAdapter
As discussed earlier, the SqlDataAdapter is used to fill a DataSet with data from a database. One of the overloaded Fill methods (shown in the following code) takes two integer index values:

public int Fill(DataSet dataSet, int startRecord, int maxRecords, string srcTable);

The startRecord value indicates the zero-based index of the first record to retrieve. The maxRecords value indicates the number of records, starting from startRecord, to copy into the new DataSet.

Internally, the SqlDataAdapter uses a SqlDataReader to execute the query and return the results. The SqlDataAdapter reads the results and creates a DataSet based on the data read from the SqlDataReader. It copies the results from startRecord through maxRecords into the newly generated DataSet and discards the rest. This means that a lot of unnecessary data can potentially be pulled across the network to the data access client, which is the main drawback of this approach. For example, if there are 1,000 records and you require records 900 to 950, the first 899 records are still pulled across the network and then discarded. For small result sets this overhead is probably modest, but it can become substantial when you page through larger volumes of data.
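The following fragment is a minimal sketch of this Fill-based approach; it is not one of the appendix samples. It reuses the DATRetrieveProducts stored procedure from the appendix, the connectionString parameter is a placeholder, and the zero-based pageIndex and pageSize parameters are illustrative.

using System.Data;
using System.Data.SqlClient;

// Sketch only: returns one page of products; earlier rows are still read
// from the server and discarded, as described above.
public DataTable GetProductsPage(string connectionString, int pageIndex, int pageSize)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("DATRetrieveProducts", conn);
        cmd.CommandType = CommandType.StoredProcedure;

        SqlDataAdapter da = new SqlDataAdapter(cmd);
        DataSet ds = new DataSet();

        // Copy only the requested window of rows into the DataSet
        da.Fill(ds, pageIndex * pageSize, pageSize, "Products");
        return ds.Tables["Products"];
    }
}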
Using ADO through COM interoperability
Another option for implementing paging is to use COM-based ADO, with the goal of gaining access to server-side cursors. Server cursors are exposed through the ADO Recordset object. You can set the Recordset cursor location to adUseServer, and if your OLE DB provider supports this setting (as SQLOLEDB does), a server-side cursor is used. You can then use the cursor to navigate directly to the starting record, without pulling all of the preceding data across to the client data access code.

This approach has two drawbacks:

- In most cases you will want to translate the records returned in the Recordset object into a DataSet for use in client managed code. Although the OleDbDataAdapter does accept an ADO Recordset object and translate it into a DataSet, it has no facility for working with a specific start record and end record. The only realistic option is to move to the start record in the Recordset, loop through each record, and manually copy the data into a new, manually generated DataSet. This operation, particularly through COM interop, may well negate the benefit of not bringing redundant data across the network, especially for small DataSets.
- The connection and the server cursor remain open while the required data is being read from the server. Cursors are expensive resources for the database server to open and maintain. Although this option may increase performance, it is also likely to reduce scalability because server resources are tied up for extended periods of time.

Implementing the paging manually
The final option discussed in this section is to implement the paging functionality of the application manually by using a stored procedure. For tables that contain a unique key, the stored procedure is relatively easy to implement. For tables without a unique key (and there should not be many of those), the process is more complicated.

Paging against a table with a unique key
If the table contains a unique key, you can use the key in a WHERE clause to create a result set that starts from a specific row. Combined with the SET ROWCOUNT statement, which is used to restrict the size of the result set, this provides an effective paging mechanism. The approach is illustrated in the following stored procedure:

CREATE PROCEDURE GetProductsPaged
@lastProductID int,
@pageSize int
AS
SET ROWCOUNT @pageSize
SELECT *
FROM Products
WHERE [standard search criteria]
AND ProductID > @lastProductID
ORDER BY [criteria that leaves ProductID monotonically increasing]
GO

The caller of this stored procedure simply keeps track of the last ProductID value returned and increments or decrements it by the chosen page size between successive calls, as the sketch that follows shows.
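The following client-side fragment is a sketch of how a caller might drive GetProductsPaged when paging forward; it is not part of the original appendix. The connectionString parameter is a placeholder, and the assumption that ProductID is the first column of the result set is made for illustration only.

using System.Data;
using System.Data.SqlClient;

// Sketch only: fetches the page that follows the row identified by lastProductID.
public DataTable GetNextPage(string connectionString, int lastProductID, int pageSize)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("GetProductsPaged", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@lastProductID", SqlDbType.Int).Value = lastProductID;
        cmd.Parameters.Add("@pageSize", SqlDbType.Int).Value = pageSize;

        SqlDataAdapter da = new SqlDataAdapter(cmd);
        DataTable page = new DataTable("Products");
        da.Fill(page);
        return page;
    }
}

// The caller remembers the ProductID of the last row in the returned page and
// passes it back in as @lastProductID to retrieve the following page.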
Paging against a table without a unique key
If the table you need to page through has no unique key, consider adding one - for example, by using an identity column. That allows you to implement the paging scheme just discussed. It is still possible to implement an effective paging scheme against a table without a unique key, as long as you can generate uniqueness by combining two or more fields that are part of the result set.

For example, consider the following table:

Col1  Col2  Col3  Other columns...
A     1     W     ...
A     1     X     ...
A     1     Y     ...
A     1     Z     ...
A     2     W     ...
A     2     X     ...
B     1     W     ...
B     1     X     ...

For this table, uniqueness can be generated by combining Col1, Col2, and Col3. The paging mechanism can then be implemented with an approach like the one in the following stored procedure:

CREATE PROCEDURE RetrieveDataPaged
@lastKey char(40),
@pageSize int
AS
SET ROWCOUNT @pageSize
SELECT Col1, Col2, Col3, Col4, Col1+Col2+Col3 AS KeyField
FROM SampleTable
WHERE [standard search criteria]
AND Col1+Col2+Col3 > @lastKey
ORDER BY Col1 ASC, Col2 ASC, Col3 ASC
GO

The client keeps track of the last value of the KeyField column returned by the stored procedure and plugs it back into the stored procedure to control paging through the table.

Although the manual implementation places additional load on the database server, it avoids sending unnecessary data across the network. Performance tests show that this approach works well across all stress levels. However, depending on how much of the site's functionality relies on data paging, paging manually on the server may affect the scalability of the application. Run performance tests in your own environment to find the most appropriate approach for your application.

Appendix

How to enable object construction for a .NET class
To enable object construction for a .NET managed class that uses Enterprise (COM+) Services, perform the following steps:

1. Derive your class from the ServicedComponent class located in the System.EnterpriseServices namespace.

using System.EnterpriseServices;
public class DataAccessComponent : ServicedComponent

2. Decorate your class with the ConstructionEnabled attribute, and optionally specify a default construction string. The default value is held in the COM+ catalog, and administrators can maintain it by using the Component Services Microsoft Management Console (MMC) snap-in.

[ConstructionEnabled(Default="default DSN")]
public class DataAccessComponent : ServicedComponent

3. Provide an overridden implementation of the virtual Construct method. This method is called after the object's language-specific constructor, and the construction string held in the COM+ catalog is supplied as its single argument.

public override void Construct(string constructString)
{
    // Construct method is called next after constructor.
    // The configured DSN is supplied as the single argument
}

4. Provide a strong name for the assembly by using the AssemblyKeyFile or AssemblyKeyName attribute. Any assembly registered with COM+ services must have a strong name. For more information about strong-named assemblies, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpconworkingwithstrongly-namedassemblies.asp.

[assembly: AssemblyKeyFile("DataServices.snk")]

5. To support dynamic registration, use the assembly-level ApplicationName and ApplicationActivation attributes to specify the name of the COM+ application that holds the assembly's components and the application's activation type, respectively. For more information about assembly registration, see http://msdn.microsoft.com/library/en-us/cpguide/html/cpconregisteringservicedcomponents.asp.

// The ApplicationName attribute specifies the name of the
// COM+ application which will hold assembly components
[assembly: ApplicationName("DataServices")]
// The ApplicationActivation.ActivationOption attribute specifies
// where assembly components are loaded on activation
// Library: components run in the creator's process
// Server: components run in a system process, dllhost.exe
[assembly: ApplicationActivation(ActivationOption.Library)]

The following code shows a serviced component called DataAccessComponent, which uses a COM+ construction string to obtain a database connection string.
using System;
using System.EnterpriseServices;

// The ApplicationName attribute specifies the name of the
// COM+ application which will hold assembly components
[assembly: ApplicationName("DataServices")]
// The ApplicationActivation.ActivationOption attribute specifies
// where assembly components are loaded on activation
// Library: components run in the creator's process
// Server: components run in a system process, dllhost.exe
[assembly: ApplicationActivation(ActivationOption.Library)]
// Sign the assembly. The snk key file is created using the
// sn.exe utility
[assembly: AssemblyKeyFile("DataServices.snk")]

[ConstructionEnabled(Default="default DSN")]
public class DataAccessComponent : ServicedComponent
{
    private string connectionString;

    public DataAccessComponent()
    {
        // Constructor is called on instance creation
    }

    public override void Construct(string constructString)
    {
        // Construct method is called next after constructor.
        // The configured DSN is supplied as the single argument
        this.connectionString = constructString;
    }
}

How to use SqlDataAdapter to retrieve multiple rows
The following code illustrates how to use a SqlDataAdapter object to issue a command that generates a DataSet or DataTable. It retrieves a set of products from the SQL Server Northwind database.

using System.Data;
using System.Data.SqlClient;

public DataTable RetrieveRowsWithDataTable()
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("DATRetrieveProducts", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        DataTable dt = new DataTable("Products");
        da.Fill(dt);
        return dt;
    }
}

To use a SqlDataAdapter to generate a DataSet or DataTable:

1. Create a SqlCommand object to invoke the stored procedure, and associate it with a SqlConnection object (shown) or connection string (not shown).
2. Create a new SqlDataAdapter object and associate it with the SqlCommand object.
3. Create a DataTable (or DataSet) object. Use a constructor argument to name the DataTable.
4. Call the Fill method of the SqlDataAdapter object to populate either the DataSet or the DataTable with the retrieved rows.

How to use SqlDataReader to retrieve multiple rows
The following code illustrates the SqlDataReader approach to retrieving multiple rows:

using System.IO;
using System.Data;
using System.Data.SqlClient;

public SqlDataReader RetrieveRowsWithDataReader()
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    SqlCommand cmd = new SqlCommand("DATRetrieveProducts", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    try
    {
        conn.Open();
        // Generate the reader. CommandBehavior.CloseConnection causes
        // the connection to be closed when the reader object is closed
        return (cmd.ExecuteReader(CommandBehavior.CloseConnection));
    }
    catch
    {
        conn.Close();
        throw;
    }
}

// Display the product list using the console
private void DisplayProducts()
{
    SqlDataReader reader = RetrieveRowsWithDataReader();
    while (reader.Read())
    {
        Console.WriteLine("{0} {1}",
            reader.GetInt32(0).ToString(),
            reader.GetString(1));
    }
    reader.Close();  // Also closes the connection due to the
                     // CommandBehavior enum used when generating the reader
}

To retrieve multiple rows with a SqlDataReader:

1. Create a SqlCommand object used to execute the stored procedure, and associate it with a SqlConnection object.
2. Open the connection.
3. Create a SqlDataReader object by calling the ExecuteReader method of the SqlCommand object.
4. To read the data from the stream, call the Read method of the SqlDataReader object to retrieve rows, and use the typed accessor methods (such as GetInt32 and GetString) to retrieve the column values.
5. When you finish with the reader, call its Close method.

How to use XmlReader to retrieve multiple rows
You can use a SqlCommand object to generate an XmlReader object, which provides forward-only, stream-based access to XML data. The command (usually a stored procedure) must generate an XML-based result set, which for SQL Server 2000 usually consists of a SELECT statement with a valid FOR XML clause. The following code fragment illustrates this approach:

public void RetrieveAndDisplayRowsWithXmlReader()
{
    SqlConnection conn = new SqlConnection(connectionString);
    SqlCommand cmd = new SqlCommand("DATRetrieveProductsXML", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    try
    {
        conn.Open();
        XmlTextReader xreader = (XmlTextReader)cmd.ExecuteXmlReader();
        while (xreader.Read())
        {
            if (xreader.Name == "Products")
            {
                string strOutput = xreader.GetAttribute("ProductID");
                strOutput += " ";
                strOutput += xreader.GetAttribute("ProductName");
                Console.WriteLine(strOutput);
            }
        }
        xreader.Close();
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}

The preceding code uses the following stored procedure:

CREATE PROCEDURE DATRetrieveProductsXML
AS
SELECT * FROM Products
FOR XML AUTO
GO

To retrieve XML data with an XmlReader:

1. Create a SqlCommand object to invoke a stored procedure that generates an XML result set (for example, by using the FOR XML clause on the SELECT statement). Associate the SqlCommand object with a connection.
2. Call the ExecuteXmlReader method of the SqlCommand object and assign the result to a forward-only XmlTextReader object. This is the fastest type of XmlReader object to use when you do not require any XML-based validation of the returned data.
3. Read the data by using the Read method of the XmlTextReader object.

How to use stored procedure output parameters to retrieve a single row
You can call a stored procedure that returns the retrieved data items within a single row of named output parameters, rather than as a result set. The following code fragment uses a stored procedure to retrieve the product name and unit price of a specific product contained in the Products table of the Northwind database.

void GetProductDetails(int productID,
                       out string productName, out decimal unitPrice)
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    // Set up the command object used to execute the stored proc
    SqlCommand cmd = new SqlCommand("DATGetProductDetailsSPOutput", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    // Establish stored proc parameters.
    //  @ProductID int INPUT
    //  @ProductName nvarchar(40) OUTPUT
    //  @UnitPrice money OUTPUT

    // Must explicitly set the direction of output parameters
    SqlParameter paramProdID = cmd.Parameters.Add("@ProductID", productID);
    paramProdID.Direction = ParameterDirection.Input;
    SqlParameter paramProdName =
        cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40);
    paramProdName.Direction = ParameterDirection.Output;
    SqlParameter paramUnitPrice =
        cmd.Parameters.Add("@UnitPrice", SqlDbType.Money);
    paramUnitPrice.Direction = ParameterDirection.Output;

    try
    {
        conn.Open();
        // Use ExecuteNonQuery to run the command.
        // Although no rows are returned, any mapped output parameters
        // (and potentially return values) are populated
        cmd.ExecuteNonQuery();
        // Return output parameters from the stored proc
        productName = paramProdName.Value.ToString();
        unitPrice = (decimal)paramUnitPrice.Value;
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}
To retrieve a single row by using stored procedure output parameters:

1. Create a SqlCommand object and associate it with a SqlConnection object.
2. Set up the stored procedure parameters by calling the Add method of the SqlCommand's Parameters collection. By default, parameters are assumed to be input parameters, so you must explicitly set the direction of any output parameters. Note that it is good practice to explicitly set the direction of all parameters, including input parameters.
3. Open the connection.
4. Call the ExecuteNonQuery method of the SqlCommand object. This populates the output parameters (and potentially a return value).
5. Retrieve the output parameter values from the appropriate SqlParameter objects by using the Value property.
6. Close the connection.

The preceding code fragment calls the following stored procedure:

CREATE PROCEDURE DATGetProductDetailsSPOutput
@ProductID int,
@ProductName nvarchar(40) OUTPUT,
@UnitPrice money OUTPUT
AS
SELECT @ProductName = ProductName,
       @UnitPrice = UnitPrice
FROM Products
WHERE ProductID = @ProductID
GO

How to use SqlDataReader to retrieve a single row
You can use a SqlDataReader object to retrieve a single row, and specific column values from the returned data stream. This is illustrated in the following code:

void GetProductDetailsUsingReader(int productID,
                                  out string productName, out decimal unitPrice)
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    // Set up the command object used to execute the stored proc
    SqlCommand cmd = new SqlCommand("DATGetProductDetailsReader", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    // Establish stored proc parameters.
    //  @ProductID int INPUT
    SqlParameter paramProdID = cmd.Parameters.Add("@ProductID", productID);
    paramProdID.Direction = ParameterDirection.Input;

    try
    {
        conn.Open();
        SqlDataReader reader = cmd.ExecuteReader();
        reader.Read();  // Advance to the one and only row
        // Return output parameters from the returned data stream
        productName = reader.GetString(0);
        unitPrice = reader.GetDecimal(1);
        reader.Close();
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}

To return a single row with a SqlDataReader object:

1. Create a SqlCommand object.
2. Open the connection.
3. Call the ExecuteReader method of the SqlCommand object.
4. Retrieve output values by using the typed accessor methods of the SqlDataReader object - in this case GetString and GetDecimal.

The preceding code fragment calls the following stored procedure:

CREATE PROCEDURE DATGetProductDetailsReader
@ProductID int
AS
SELECT ProductName, UnitPrice FROM Products
WHERE ProductID = @ProductID
GO

How to use ExecuteScalar to retrieve a single item
The ExecuteScalar method is designed for queries that return only a single value. If the query returns multiple columns and/or rows, ExecuteScalar returns only the first column of the first row.
The following code shows how to look up a product name, given a product ID:

void GetProductNameExecuteScalar(int productID, out string productName)
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    SqlCommand cmd = new SqlCommand("LookupProductNameScalar", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ProductID", productID);
    try
    {
        conn.Open();
        productName = (string)cmd.ExecuteScalar();
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}

To use ExecuteScalar to retrieve a single item:

1. Create a SqlCommand object to invoke the stored procedure.
2. Open the connection.
3. Call the ExecuteScalar method. Note that this method returns an object type. It contains the value of the first column retrieved and must be cast to the appropriate type.
4. Close the connection.

The preceding code uses the following stored procedure:

CREATE PROCEDURE LookupProductNameScalar
@ProductID int
AS
SELECT TOP 1 ProductName
FROM Products
WHERE ProductID = @ProductID
GO

How to use a stored procedure output or return parameter to retrieve a single item
You can look up a single value by using a stored procedure output or return parameter. The following code illustrates the use of an output parameter:

void GetProductNameUsingSPOutput(int productID, out string productName)
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    SqlCommand cmd = new SqlCommand("LookupProductNameSPOutput", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter paramProdID = cmd.Parameters.Add("@ProductID", productID);
    paramProdID.Direction = ParameterDirection.Input;
    SqlParameter paramPN =
        cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40);
    paramPN.Direction = ParameterDirection.Output;
    try
    {
        conn.Open();
        cmd.ExecuteNonQuery();
        productName = paramPN.Value.ToString();
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}

To retrieve a single value by using a stored procedure output parameter:

1. Create a SqlCommand object to invoke the stored procedure.
2. Set up any input parameters and the single output parameter by adding SqlParameters to the SqlCommand's Parameters collection.
3. Open the connection.
4. Call the ExecuteNonQuery method of the SqlCommand object.
5. Close the connection.
6. Retrieve the output value by using the Value property of the output SqlParameter.

The preceding code uses the following stored procedure:

CREATE PROCEDURE LookupProductNameSPOutput
@ProductID int,
@ProductName nvarchar(40) OUTPUT
AS
SELECT @ProductName = ProductName
FROM Products
WHERE ProductID = @ProductID
GO

The following code shows how to use a return value to determine whether a particular row exists. From a coding perspective, this is similar to using stored procedure output parameters, except that you must explicitly set the SqlParameter direction to ParameterDirection.ReturnValue.
bool CheckProduct(int productID)
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    SqlCommand cmd = new SqlCommand("CheckProductSP", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ProductID", productID);
    SqlParameter paramRet =
        cmd.Parameters.Add("@ProductExists", SqlDbType.Int);
    paramRet.Direction = ParameterDirection.ReturnValue;
    try
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
    return (int)paramRet.Value == 1;
}

To check whether a particular row exists by using a stored procedure return value:

1. Create a SqlCommand object to invoke the stored procedure.
2. Set up an input parameter that contains the primary key value of the row you are looking for.
3. Set up the single return value parameter. Add a SqlParameter object to the SqlCommand's Parameters collection and set its direction to ParameterDirection.ReturnValue.
4. Open the connection.
5. Call the ExecuteNonQuery method of the SqlCommand object.
6. Close the connection.
7. Retrieve the return value by using the Value property of the return value SqlParameter.

The preceding code uses the following stored procedure:

CREATE PROCEDURE CheckProductSP
@ProductID int
AS
IF EXISTS (SELECT ProductID
           FROM Products
           WHERE ProductID = @ProductID)
    RETURN 1
ELSE
    RETURN 0
GO

How to use SqlDataReader to retrieve a single item
You can use a SqlDataReader object to obtain a single output value by calling the ExecuteReader method of the command object. This requires slightly more code, because the SqlDataReader Read method must be called, and then the desired value retrieved through one of the reader's accessor methods. The use of a SqlDataReader object is illustrated in the following code:

bool CheckProductWithReader(int productID)
{
    SqlConnection conn = new SqlConnection(
        "server=(local);Integrated Security=SSPI;database=Northwind");
    SqlCommand cmd = new SqlCommand("CheckProductExistsWithCount", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ProductID", productID);
    cmd.Parameters["@ProductID"].Direction = ParameterDirection.Input;
    try
    {
        conn.Open();
        SqlDataReader reader = cmd.ExecuteReader(
            CommandBehavior.SingleResult);
        reader.Read();
        bool bRecordExists = reader.GetInt32(0) > 0;
        reader.Close();
        return bRecordExists;
    }
    catch
    {
        throw;
    }
    finally
    {
        conn.Close();
    }
}

The preceding code uses the following stored procedure:

CREATE PROCEDURE CheckProductExistsWithCount
@ProductID int
AS
SELECT COUNT(*) FROM Products
WHERE ProductID = @ProductID
GO

How to code ADO.NET manual transactions
The following code shows how to take advantage of the transaction support offered by the SQL Server .NET Data Provider to protect a funds transfer operation with a transaction. The operation transfers money between two accounts located in the same database.
public void TransferMoney(string toAccount, string fromAccount, decimal amount)
{
    using (SqlConnection conn = new SqlConnection(
               "server=(local);Integrated Security=SSPI;database=SimpleBank"))
    {
        SqlCommand cmdCredit = new SqlCommand("Credit", conn);
        cmdCredit.CommandType = CommandType.StoredProcedure;
        cmdCredit.Parameters.Add(new SqlParameter("@AccountNo", toAccount));
        cmdCredit.Parameters.Add(new SqlParameter("@Amount", amount));

        SqlCommand cmdDebit = new SqlCommand("Debit", conn);
        cmdDebit.CommandType = CommandType.StoredProcedure;
        cmdDebit.Parameters.Add(new SqlParameter("@AccountNo", fromAccount));
        cmdDebit.Parameters.Add(new SqlParameter("@Amount", amount));

        conn.Open();
        // Start a new transaction
        using (SqlTransaction trans = conn.BeginTransaction())
        {
            // Associate the two command objects with the same transaction
            cmdCredit.Transaction = trans;
            cmdDebit.Transaction = trans;
            try
            {
                cmdCredit.ExecuteNonQuery();
                cmdDebit.ExecuteNonQuery();
                // Both commands (credit and debit) were successful
                trans.Commit();
            }
            catch (Exception ex)
            {
                // Transaction failed
                trans.Rollback();
                // Log exception details.
                throw ex;
            }
        }
    }
}

How to perform transactions with Transact-SQL
The following stored procedure illustrates how to perform a transactional funds transfer operation within a Transact-SQL stored procedure.

CREATE PROCEDURE MoneyTransfer
@FromAccount char(20),
@ToAccount char(20),
@Amount money
AS
BEGIN TRANSACTION
-- Perform debit operation
UPDATE Accounts
SET Balance = Balance - @Amount
WHERE AccountNumber = @FromAccount
IF @@RowCount = 0
BEGIN
    RAISERROR('Invalid From Account Number', 11, 1)
    GOTO ABORT
END
DECLARE @Balance money
SELECT @Balance = Balance FROM Accounts
WHERE AccountNumber = @FromAccount
IF @Balance < 0
BEGIN
    RAISERROR('Insufficient Funds', 11, 1)
    GOTO ABORT
END
-- Perform credit operation
UPDATE Accounts
SET Balance = Balance + @Amount
WHERE AccountNumber = @ToAccount
IF @@RowCount = 0
BEGIN
    RAISERROR('Invalid To Account Number', 11, 1)
    GOTO ABORT
END
COMMIT TRANSACTION
RETURN 0
ABORT:
    ROLLBACK TRANSACTION
GO

This stored procedure uses the BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION statements to control the transaction manually.

How to code a transactional .NET class
The following sample shows three serviced .NET managed classes that are configured for automatic transactions. Each class is annotated with the Transaction attribute, whose value determines whether a new transaction stream should be started or whether the object should share the stream of its immediate caller. These components work together to perform a bank money transfer operation. The Transfer class is configured with the RequiresNew transaction option, while the Debit and Credit classes are configured with Required. As a result, all three objects share the same transaction at run time.
using System;
using System.Data;
using System.Data.SqlClient;
using System.EnterpriseServices;

[Transaction(TransactionOption.RequiresNew)]
public class Transfer : ServicedComponent
{
    [AutoComplete]
    public void TransferMoney(string toAccount, string fromAccount, decimal amount)
    {
        try
        {
            // Perform the debit operation
            Debit debit = new Debit();
            debit.DebitAccount(fromAccount, amount);
            // Perform the credit operation
            Credit credit = new Credit();
            credit.CreditAccount(toAccount, amount);
        }
        catch (SqlException sqlex)
        {
            // Handle and log exception details
            // Wrap and propagate the exception
            throw new TransferException("Transfer Failure", sqlex);
        }
    }
}

[Transaction(TransactionOption.Required)]
public class Credit : ServicedComponent
{
    [AutoComplete]
    public void CreditAccount(string account, decimal amount)
    {
        SqlConnection conn = new SqlConnection(
            "server=(local);Integrated Security=SSPI;database=SimpleBank");
        SqlCommand cmd = new SqlCommand("Credit", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@AccountNo", account));
        cmd.Parameters.Add(new SqlParameter("@Amount", amount));
        try
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        catch (SqlException sqlex)
        {
            // Log exception details here
            throw;  // Propagate exception
        }
    }
}

[Transaction(TransactionOption.Required)]
public class Debit : ServicedComponent
{
    public void DebitAccount(string account, decimal amount)
    {
        SqlConnection conn = new SqlConnection(
            "server=(local);Integrated Security=SSPI;database=SimpleBank");
        SqlCommand cmd = new SqlCommand("Debit", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@AccountNo", account));
        cmd.Parameters.Add(new SqlParameter("@Amount", amount));
        try
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        catch (SqlException sqlex)
        {
            // Log exception details here
            throw;  // Propagate exception back to caller
        }
    }
}

Collaborators
Many thanks to the following contributors and reviewers: Bill Vaughn, Mike Pizzo, Doug Rothaus, Kevin White, Blaine Dokter, David Schleifer, Graeme Malcolm, Bernard Chen, Matt Drucker, and Steve Kirk.

Questions? Comments? Suggestions? For feedback on this article, please send e-mail to devfdbck@microsoft.com.

Do you want to learn about and take advantage of the powerful features of .NET? Work with technology experts at a Microsoft Technology Center to learn how to build the best solutions. For more information, visit http://www.microsoft.com/business/services/mtc.asp.