Best Practices for Using ADO.NET


[Introduction] ADO.NET, Microsoft's latest data access technology, is already widely used in enterprise development. For a working developer who has mastered the basic concepts and techniques, the most effective way to raise the level of an application and to solve practical problems is to exchange hard-won experience with other practitioners. In this article, two ADO.NET experts hold nothing back and share a wealth of practical tips. The article presents the best ways to achieve optimal performance, scalability, and functionality in Microsoft ADO.NET applications, describes best practices for using the objects that ADO.NET provides, and offers suggestions to help you optimize the design of an ADO.NET application. This article includes:

- Information about the .NET Framework data providers that are included in the .NET Framework.
- A comparison of the DataSet and the DataReader, and an explanation of the best uses for each of these objects.
- An explanation of how to use DataSets, Commands, and Connections.
- Information about integrating with XML.
- General tips and issues.

.NET Framework Data Providers
A data provider in the .NET Framework serves as a bridge between an application and a data source. A .NET Framework data provider is used to return query results from a data source, to execute commands against a data source, and to propagate the changes in a DataSet back to a data source. This section includes tips on which .NET Framework data provider best suits your needs.

Which .NET Framework Data Provider Should You Use?
To get the best performance out of your application, use the .NET Framework data provider that best suits your data source. There are a number of data providers your application can choose from; the table at the end of this article lists the available data providers and the data sources each one is best suited to.

Connecting to SQL Server 7.0 or Later
For the best performance when connecting to Microsoft SQL Server 7.0 or later, use the SQL Server .NET data provider. It is designed to access SQL Server directly, without adding any extra technology layers. Figure 1 illustrates the various technologies available for accessing SQL Server 7.0 or later.

Connecting to ODBC Data Sources
The ODBC .NET data provider, found in the Microsoft.Data.Odbc namespace, has the same architecture as the .NET data providers for SQL Server and OLE DB. It follows a naming convention that uses an "Odbc" prefix (for example, OdbcConnection), and it uses standard ODBC connection strings.

Working with the DataReader, DataSet, DataAdapter, and DataView
ADO.NET provides two objects for retrieving relational data and storing it in memory: the DataSet and the DataReader. The DataSet provides an in-memory relational representation of data: a complete set of data that includes the tables (which contain, order, and constrain the data) as well as the relationships between the tables. The DataReader provides a fast, forward-only, read-only stream of data from a database. When you use a DataSet, you will often use a DataAdapter (and possibly a CommandBuilder) to interact with the data source, and you may use a DataView to sort and filter the data in the DataSet. A DataSet can also be inherited from to create a strongly typed DataSet that exposes tables, rows, and columns as strongly typed object properties. The topics that follow cover when it is best to use a DataSet or a DataReader, how to optimize access to the data they contain, and tips for optimizing the DataAdapter (including the CommandBuilder) and the DataView.
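As a quick illustration of these two retrieval styles, here is a hedged C# sketch; the connection string, the Northwind database, and the Customers table are placeholder assumptions, not something prescribed by the article:

// C# sketch: adjust the connection string and query for your environment.
using System;
using System.Data;
using System.Data.SqlClient;

class RetrievalSketch
{
    static void Main()
    {
        string connString =
            "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind;";

        // DataSet: disconnected cache that can be sorted, filtered, remoted, and re-read.
        using (SqlConnection conn = new SqlConnection(connString))
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            DataSet ds = new DataSet();
            da.Fill(ds, "Customers");   // Fill opens and closes the connection itself.
            Console.WriteLine("Cached {0} rows.", ds.Tables["Customers"].Rows.Count);
        }

        // DataReader: connected, forward-only, read-only stream over the same data.
        using (SqlConnection conn = new SqlConnection(connString))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            conn.Open();
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                    Console.WriteLine("{0}\t{1}", dr.GetString(0), dr.GetString(1));
            }
        }
    }
}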

DataSet Versus DataReader
When you design an application, the level of functionality it requires determines whether to use a DataSet or a DataReader. Use a DataSet in order to do the following:
- Navigate between multiple discrete tables of results.
- Work with data from multiple sources (for example, a mix of data from more than one database, from an XML file, and from a spreadsheet).
- Exchange data between tiers or use an XML Web service. Unlike the DataReader, the DataSet can be passed to a remote client.
- Reuse the same set of records, caching them to gain performance for operations such as sorting, searching, or filtering the data.
- Perform a large amount of processing per record. Extended processing on each row returned by a DataReader ties up the connection serving the DataReader for longer than necessary, which hurts performance.
- Manipulate the data with XML operations such as Extensible Stylesheet Language Transformations (XSLT) or XPath queries.

Use a DataReader in your application when the following is true:

- You do not need to cache the data.
- The result set you are processing is too large to fit in memory.
- You need fast, one-pass, forward-only, read-only access to the data.

Note that the DataAdapter uses a DataReader when it fills a DataSet. Therefore, the performance you gain by using a DataReader instead of a DataSet is the memory that the DataSet would consume and the cycles needed to populate it. For the most part this gain is nominal, so base the design decision on the functionality required.

Benefits of Strongly Typed DataSets
Another benefit of the DataSet is that it can be inherited from to create a strongly typed DataSet. Strongly typed DataSets give you design-time type checking, plus Microsoft Visual Studio .NET statement completion for typed DataSet declarations. Once you have a fixed schema or relational structure, you can create a strongly typed DataSet that exposes rows and columns as the properties of objects rather than as items in collections. For example, instead of exposing the Name column of a row of the Customers table, you expose the Name property of a Customer object. A typed DataSet derives from the DataSet class, so no DataSet functionality is sacrificed; that is, a typed DataSet can still be remoted and can still serve as the data source of a data-bound control such as the DataGrid. If the schema is not known in advance, a generic DataSet still gives you all of the common functionality, just without the extra features of a typed DataSet.

Handling Null References in a Strongly Typed DataSet
When you work with a strongly typed DataSet, you can annotate its XML Schema definition language (XSD) schema so that the typed DataSet handles null references properly. The nullValue annotation lets you substitute a specified value, such as String.Empty, for DBNull, preserve the null reference, or have an exception thrown. Which option you choose depends on the context of the application; by default, an exception is thrown when a null reference is encountered.

Refreshing the Data in a DataSet
If you want to refresh the values in a DataSet with updated values from the server, use DataAdapter.Fill. If a primary key is defined on the DataTable, DataAdapter.Fill matches new rows on the primary key and applies the server values when it changes an existing row. The RowState of a refreshed row is set to Unchanged even if it was modified before the refresh; note that if no primary key is defined for the DataTable, DataAdapter.Fill adds new rows that may duplicate existing primary key values. If you want to refresh a table with current values from the server while preserving any changes made to the rows in the table, first fill a new DataTable with DataAdapter.Fill and then merge that DataTable into the DataSet with a preserveChanges value of true, as in the sketch below.
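A minimal C# sketch of that refresh pattern, assuming a SqlDataAdapter whose SelectCommand returns the Customers rows and a DataSet that already contains a Customers table with a primary key:

// C# sketch: "Customers" and the adapter configuration are assumptions.
using System.Data;
using System.Data.SqlClient;

class RefreshSketch
{
    // Refresh the Customers table from the server without losing local edits.
    public static void RefreshCustomers(SqlDataAdapter da, DataSet ds)
    {
        DataTable fresh = new DataTable("Customers");
        da.FillSchema(fresh, SchemaType.Source);                 // copy the primary key definition
        da.Fill(fresh);                                          // current server values
        ds.Merge(fresh, true, MissingSchemaAction.AddWithKey);   // preserveChanges = true keeps local edits
    }
}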

Searching for Data in a DataSet
When you query a DataSet for rows that match particular criteria, index-based lookups improve search performance. An index is created when you assign a PrimaryKey value to a DataTable, and also when you create a DataView for a DataTable. Here are a few tips for taking advantage of index-based lookups:
- If the query is against the columns that make up the primary key of the DataTable, use DataTable.Rows.Find instead of DataTable.Select.
- For queries involving non-primary-key columns, you can use a DataView to improve performance over multiple queries of the data. When you apply a sort order to a DataView, an index is built that is then used when searching. The DataView exposes the Find and FindRows methods for querying the data in the underlying DataTable.
- If you do not need a sorted view of a table, you can still take advantage of index-based lookups by creating a DataView for the DataTable. Note that this is only a benefit when multiple query operations are performed on the data; if you only perform a single query, the processing required to build the index reduces the performance gained by using it.

DataView Construction
The DataView builds an index for the data in the underlying DataTable both when the DataView is created and when its Sort, RowFilter, or RowStateFilter property is modified. When creating a DataView object, use the DataView constructor that takes the Sort, RowFilter, and RowStateFilter values as constructor arguments along with the underlying DataTable, so that the index is built once. Creating an "empty" DataView and then setting Sort, RowFilter, or RowStateFilter causes the index to be built at least twice.

Paging
ADO.NET gives you explicit control over what data is returned from the data source and how much of it is cached locally in the DataSet. There is no single right way to page through query results, but here are a few tips to consider when designing your application.
- Avoid using the DataAdapter.Fill overload that takes startRecord and maxRecords values. When a DataSet is filled in this way, only the number of records specified by maxRecords (starting from the record identified by startRecord) is placed in the DataSet, but the complete query is always returned. This causes unnecessary processing to read past the "unwanted" records, and it uses up server resources returning records that are never used.
- One technique for returning only a page of records at a time is to build a SQL statement that combines a WHERE clause, an ORDER BY clause, and the TOP predicate. This technique depends on having a way to uniquely identify each row. When navigating to the next page of records, modify the WHERE clause to return all records whose unique identifier is greater than that of the last record on the current page. When navigating to the previous page, modify the WHERE clause to return all records whose unique identifier is less than that of the first record on the current page, and reverse the sort order; both queries return only the TOP page of records. For the previous page this effectively returns the last page of the query, so you may need to re-sort the result before displaying it.
- Another technique for returning only a page of records is to build a SQL statement that combines the TOP predicate with an embedded SELECT statement. This technique does not rely on having a way to uniquely identify each row. The first step is to multiply the page number by the page size; that result becomes the TOP value of an embedded SELECT statement ordered ascending. The outer query then selects the TOP page-size rows from the embedded results ordered descending, which in essence returns the last page of the embedded query. For example, to return the third page of a query result with a page size of 10, you would write a command like the one shown below:

SELECT TOP 10 * FROM
    (SELECT TOP 30 * FROM Customers ORDER BY Id ASC) AS Table1
ORDER BY Id DESC

Note: The page of results comes back from this query in descending order; re-sort it if necessary before displaying it.
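A C# sketch of issuing that kind of paging query through a DataAdapter follows; the Customers table, the Id column, and the way the TOP values are concatenated into the statement are illustrative assumptions (page numbers start at 1):

// C# sketch: table and column names are placeholders.
using System.Data;
using System.Data.SqlClient;

class PagingSketch
{
    public static DataSet GetPage(SqlConnection conn, int pageNumber, int pageSize)
    {
        int innerTop = pageNumber * pageSize;
        string sql =
            "SELECT TOP " + pageSize + " * FROM " +
            "(SELECT TOP " + innerTop + " * FROM Customers ORDER BY Id ASC) AS Table1 " +
            "ORDER BY Id DESC";

        SqlDataAdapter da = new SqlDataAdapter(sql, conn);
        DataSet ds = new DataSet();
        da.Fill(ds, "Customers");

        // The page comes back in descending order; re-sort it for display.
        ds.Tables["Customers"].DefaultView.Sort = "Id ASC";
        return ds;
    }
}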

If the data does not change, you can improve performance by maintaining a local cache of records in the DataSet. For example, you might keep ten pages of valid data in the local DataSet and query the data source for new data only when the user browses beyond the first or last page of the cache.

Filling a DataSet with Schema
When a DataSet is filled with data, the DataAdapter.Fill method uses the DataSet's existing schema and populates it with the data returned by the SelectCommand. If there is no table in the DataSet whose name matches the name of the table being filled, the Fill method creates one, and by default it defines only the columns and column types. You can override this default behavior by setting the MissingSchemaAction property of the DataAdapter. For example, to have Fill create the table schema including primary key information, unique constraints, column properties, nullability, maximum column lengths, read-only columns, and auto-increment columns, specify DataAdapter.MissingSchemaAction as MissingSchemaAction.AddWithKey. Alternatively, you can call DataAdapter.FillSchema before calling DataAdapter.Fill to make sure the schema is in place when the DataSet is filled; the FillSchema call makes an extra trip to the server to retrieve the additional schema information. For the best performance, specify the schema of the DataSet before calling Fill, or set the MissingSchemaAction of the DataAdapter. Both options are sketched below.
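// C# sketch: the SELECT statement and table name are placeholders.
using System.Data;
using System.Data.SqlClient;

class SchemaSketch
{
    public static DataSet FillWithKeys(SqlConnection conn)
    {
        SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Customers", conn);
        DataSet ds = new DataSet();

        // Option 1: have Fill bring back primary key and constraint information too.
        da.MissingSchemaAction = MissingSchemaAction.AddWithKey;
        da.Fill(ds, "Customers");

        // Option 2 (extra round trip): retrieve the schema first, then fill.
        // da.FillSchema(ds, SchemaType.Source, "Customers");
        // da.Fill(ds, "Customers");

        return ds;
    }
}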

Best Practices for Using the CommandBuilder
The CommandBuilder automatically generates the InsertCommand, UpdateCommand, and DeleteCommand properties of a DataAdapter based on the DataAdapter's SelectCommand property, provided the SelectCommand performs a single-table SELECT. Here are a few tips for getting the best performance from the CommandBuilder:
- Limit use of the CommandBuilder to design time or to ad hoc scenarios. The processing required to generate the DataAdapter command properties hurts performance. If you know the contents of your INSERT, UPDATE, and DELETE statements in advance, set them explicitly. A better design is to create stored procedures for the INSERT, UPDATE, and DELETE commands and explicitly configure the DataAdapter command properties to use them.
- The CommandBuilder uses the DataAdapter's SelectCommand property to determine the values for the other command properties. If the DataAdapter's SelectCommand itself changes, be sure to call RefreshSchema to update the command properties.
- The CommandBuilder only generates a command for a DataAdapter command property that is null (the command properties are null by default). If you explicitly set a command property, the CommandBuilder does not overwrite it. If you want the CommandBuilder to generate a command for a property you previously set, set that property back to null.

Batching SQL Statements
Many databases allow multiple commands to be combined, or batched, into a single command for execution. For example, SQL Server lets you separate commands with a semicolon (;). Combining commands into a single command reduces the number of trips to the server and can improve application performance. For example, you could store all intended deletions locally in your application and then issue a single batched command call to remove them from the data source. Although this does improve performance, it can add complexity to the way you manage data updates in the DataSet. To keep things simple, you may want to create one DataAdapter for each DataTable in the DataSet.

Filling a DataSet with Multiple Tables
If you use a batched SQL statement to retrieve multiple tables and fill a DataSet, the first table is named with the table name specified to the Fill method. Subsequent tables are named with that table name plus a number, starting at 1 and incrementing by 1. For example, if you run the following code:

' Visual Basic
Dim da As SqlDataAdapter = New SqlDataAdapter( _
    "SELECT * FROM Customers; SELECT * FROM Orders;", myConnection)
Dim ds As DataSet = New DataSet()
da.Fill(ds, "Customers")

// C#
SqlDataAdapter da = new SqlDataAdapter(
    "SELECT * FROM Customers; SELECT * FROM Orders;", myConnection);
DataSet ds = new DataSet();
da.Fill(ds, "Customers");

Data from the Customers table is placed in a DataTable named "Customers", and data from the Orders table is placed in a DataTable named "Customers1". After filling the DataSet you could simply change the TableName property of the "Customers1" table to "Orders". However, any subsequent fill would then cause the "Customers" table to be refilled, while the "Orders" table would be ignored and another "Customers1" table created. To remedy this, create a DataTableMapping that maps "Customers1" to "Orders", and create additional table mappings for any other subsequent tables. For example:

' Visual Basic
Dim da As SqlDataAdapter = New SqlDataAdapter( _
    "SELECT * FROM Customers; SELECT * FROM Orders;", myConnection)
da.TableMappings.Add("Customers1", "Orders")
Dim ds As DataSet = New DataSet()
da.Fill(ds, "Customers")

// C#
SqlDataAdapter da = new SqlDataAdapter(
    "SELECT * FROM Customers; SELECT * FROM Orders;", myConnection);
da.TableMappings.Add("Customers1", "Orders");
DataSet ds = new DataSet();
da.Fill(ds, "Customers");

Using the DataReader
Here are some tips for getting the best performance out of the DataReader, along with answers to some common questions about its use:

- Always close the DataReader before you access any output parameters of the associated Command.
- Always close the DataReader when you are finished reading the data. If the Connection being used was opened only to return the DataReader, close it immediately after closing the DataReader. An alternative to explicitly closing the Connection is to pass CommandBehavior.CloseConnection to the ExecuteReader method; this ensures that the associated connection is closed when the DataReader is closed. It is especially useful when you return a DataReader from a method and have no control over when the DataReader or its connection gets closed, as in the sketch below.
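// C# sketch: closing the returned reader also closes the connection;
// the connection string and query are placeholders.
using System.Data;
using System.Data.SqlClient;

class ReaderFactorySketch
{
    public static SqlDataReader GetCustomers(string connString)
    {
        SqlConnection conn = new SqlConnection(connString);
        SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn);
        conn.Open();
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
}

The caller is then responsible for closing the reader when it is done; the connection is released along with it.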

- The DataReader cannot be remoted between tiers; it is designed for connected data access.
- When accessing column data, use the typed accessors such as GetString and GetInt32. This saves you the processing required to cast the Object returned by GetValue to a particular type.
- Only one DataReader can be open at a time on a single connection. In classic ADO, if you opened a single connection and requested two recordsets using forward-only, read-only cursors, ADO implicitly opened a second, unpooled connection to the data store for the lifetime of that cursor and then implicitly closed it. ADO.NET does very little "under the covers": if you want two DataReaders open at the same time against the same data store, you must explicitly create two connections, one for each DataReader. This gives you more control over how pooled connections are used.
- By default, the DataReader loads an entire row into memory with each Read. This allows random access to the columns within the current row. If such random access is not needed, pass CommandBehavior.SequentialAccess to the ExecuteReader call to improve performance; this changes the default behavior so that the DataReader loads data into memory only when it is requested. Note that CommandBehavior.SequentialAccess requires you to access the returned columns in order: once you have read past a column, you can no longer read its value.
- If you are finished reading from a DataReader but a large number of unread results remain pending, call the Command's Cancel before calling the DataReader's Close. Calling Close causes the pending results to be retrieved and the stream to be drained before the cursor is closed, whereas calling the Command's Cancel discards the results on the server so that the DataReader does not have to read through them when it is closed. If your Command returns output parameters, however, calling Cancel discards them; if you need to read any output parameters, do not call the Command's Cancel, just call the DataReader's Close.

Retrieving Binary Large Objects (BLOBs)
When retrieving a binary large object (BLOB) with the DataReader, pass CommandBehavior.SequentialAccess to the ExecuteReader method call. Because the default behavior of the DataReader is to load the entire row into memory on each Read, and BLOB values can be very large, a single BLOB can consume a large amount of memory. SequentialAccess sets the DataReader to load only the data that is requested; you can then use GetBytes or GetChars to control how much data is read at a time. Remember that with SequentialAccess you cannot access the fields returned by the DataReader out of order. That is, if a query returns three columns and the third is a BLOB, and you also want data from the first two columns, you must read the first column value, then the second, before accessing the BLOB data; once the DataReader has read past the data, that data is no longer available. A sketch of streaming a BLOB this way follows.
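A C# sketch of streaming a single BLOB to a file with SequentialAccess; the query behind the command, the column order (column 0 is a key, column 1 is the BLOB), and the buffer size are assumptions:

// C# sketch: column layout is assumed, not prescribed by the article.
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

class BlobSketch
{
    public static void SaveBlobToFile(SqlCommand cmd, string path)
    {
        using (SqlDataReader dr = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        using (FileStream fs = new FileStream(path, FileMode.Create))
        {
            if (dr.Read())
            {
                string key = dr.GetString(0);          // read earlier columns first
                Console.WriteLine("Writing BLOB for {0}", key);

                byte[] buffer = new byte[8192];
                long offset = 0;
                long bytesRead;
                while ((bytesRead = dr.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
                {
                    fs.Write(buffer, 0, (int)bytesRead);
                    offset += bytesRead;
                }
            }
        }
    }
}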

Using Commands
ADO.NET provides several different methods for executing commands, as well as different options for optimizing how a command executes. The following tips cover choosing the best execution method and improving command performance.

Best Practices for OleDbCommand
Command execution is standardized across the .NET Framework data providers as much as possible, but there are still differences between providers. Here are some tips specific to executing commands with the .NET Framework data provider for OLE DB:
- Use CommandType.Text with the ODBC CALL syntax to call stored procedures, or use CommandType.StoredProcedure to have the ODBC CALL syntax generated for you under the covers.
- Be sure to set the type, the size (if applicable), and the precision and scale (if the parameter type is Numeric or Decimal) of your OleDbParameter objects. Note that if the parameter information is not supplied explicitly, OleDbCommand re-creates the OLE DB parameter accessor on every execution of the command.

Best Practices for SqlCommand
A quick tip for executing stored procedures with SqlCommand: if you are calling a stored procedure, specify the CommandType property of the SqlCommand as CommandType.StoredProcedure. Explicitly identifying the command as a stored procedure removes the need to parse the command before it is executed.

Using the Prepare Method
For parameterized commands that are executed repeatedly against the data source, the Command.Prepare method can improve performance. Prepare instructs the data source that the command will be executed multiple times with varying parameter values. To use Prepare effectively, you need to understand how the data source responds to the Prepare call. For some data sources, such as SQL Server 2000, commands are implicitly optimized and calling Prepare is unnecessary; for others, such as SQL Server 7.0, Prepare is effective.

Explicitly Specifying Schema and Metadata
Many ADO.NET objects infer metadata information whenever the user does not supply it. Examples include the DataAdapter.Fill method, which creates tables and columns in the DataSet if none exist; the CommandBuilder, which generates DataAdapter command properties for a single-table SELECT command; and CommandBuilder.DeriveParameters, which populates the Parameters collection of a Command object. Every time these features are used, however, there is a performance penalty, so they are best reserved for design time and ad hoc scenarios. Wherever possible, specify schema and metadata explicitly: define the tables and columns in the DataSet, define the Command properties of the DataAdapter, and define the Parameter information for your Commands.

ExecuteScalar and ExecuteNonQuery
If you want to return a single value, such as the result of Count(*) or Sum(Price), use Command.ExecuteScalar. ExecuteScalar returns the value of the first column of the first row of the result set as a scalar value. Because it does this in a single step, ExecuteScalar both simplifies the code and improves performance compared with a DataReader, which requires two steps (ExecuteReader, then reading the value). Use ExecuteNonQuery for SQL statements that do not return rows, such as data modification statements (INSERT, UPDATE, or DELETE) and statements that return only output parameters or return values; this avoids the unnecessary processing needed to create an empty DataReader. Both calls are sketched below.
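// C# sketch: the Northwind-style table, columns, sizes, and values are placeholders.
using System;
using System.Data;
using System.Data.SqlClient;

class ScalarSketch
{
    public static void Run(SqlConnection conn)
    {
        conn.Open();

        // Single value: ExecuteScalar returns the first column of the first row.
        SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
        int customerCount = (int)countCmd.ExecuteScalar();
        Console.WriteLine("Customers: {0}", customerCount);

        // No result set: ExecuteNonQuery avoids creating an empty DataReader.
        SqlCommand updateCmd = new SqlCommand(
            "UPDATE Customers SET ContactName = @Name WHERE CustomerID = @Id", conn);
        updateCmd.Parameters.Add("@Name", SqlDbType.NVarChar, 30).Value = "Maria Anders";
        updateCmd.Parameters.Add("@Id", SqlDbType.NChar, 5).Value = "ALFKI";
        Console.WriteLine("Rows updated: {0}", updateCmd.ExecuteNonQuery());

        conn.Close();
    }
}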

Testing for NULL
If a column in a table (in the database) allows nulls, you cannot test for a parameter value being "equal to" null. Instead, write the WHERE clause so that it tests whether both the column and the parameter are null. The following SQL statement returns the rows whose LastName column equals the value assigned to the @LastName parameter, or whose LastName column and @LastName parameter are both null:

SELECT * FROM Customers
WHERE ((LastName = @LastName) OR (LastName IS NULL AND @LastName IS NULL))

Passing NULL as a Parameter Value
When sending a null value to the database as a parameter value, you cannot use null (Nothing in Visual Basic .NET); you need to use DBNull.Value instead. For example:

' Visual Basic
Dim param As SqlParameter = New SqlParameter("@Name", SqlDbType.NVarChar, 20)
param.Value = DBNull.Value

// C#
SqlParameter param = new SqlParameter("@Name", SqlDbType.NVarChar, 20);
param.Value = DBNull.Value;

Performing Transactions
The model for performing transactions has changed in ADO.NET. In ADO, when StartTransaction was called, any update following the call was considered part of the transaction. In ADO.NET, however, a Transaction object is returned when Connection.BeginTransaction is called, and that object needs to be assigned to the Transaction property of each Command that takes part in the transaction. This design lets you perform multiple root transactions on a single connection. If the Command.Transaction property is not set to a Transaction that was started for the associated Connection, the Command fails and an exception is thrown. An upcoming release of the .NET Framework will allow you to enlist in an existing distributed transaction, which is ideal for object-pooling scenarios where a pooled object opens its connection once but participates in multiple separate transactions. This capability is not available in the .NET Framework 1.0 release.

Using Connections
High-performance applications keep connections to the data source open for the shortest time possible and take advantage of performance enhancements such as connection pooling. The following topics offer tips for getting better performance when using ADO.NET to connect to your data source.

Connection Pooling
The .NET Framework data providers for SQL Server, OLE DB, and ODBC pool connections implicitly. You can control connection pooling behavior by specifying different attribute values in the connection string.

Optimizing Connections with the DataAdapter
The Fill and Update methods of the DataAdapter automatically open the connection specified by the related command property if it is closed; if the Fill or Update method opens the connection, it also closes it when the operation is complete. For the best performance, keep connections to the database open only while they are needed, and reduce the number of times connections are opened and closed when performing multiple operations. If you are only performing a single Fill or Update call, let the method open and close the connection implicitly. If you are making numerous Fill and Update calls, open the connection explicitly, make the calls, and then close the connection explicitly. Additionally, when performing transactions, explicitly open the connection before beginning the transaction and close it after you commit. For example:

' Visual Basic
Public Sub RunSqlTransaction(da As SqlDataAdapter, myConnection As SqlConnection, ds As DataSet)
  myConnection.Open()
  Dim myTrans As SqlTransaction = myConnection.BeginTransaction()
  ' Each command taking part in the transaction must have its Transaction property set.
  ' The adapter's insert, update, and delete commands are assumed to be configured.
  da.InsertCommand.Transaction = myTrans
  da.UpdateCommand.Transaction = myTrans
  da.DeleteCommand.Transaction = myTrans

  Try
    da.Update(ds)
    myTrans.Commit()
    Console.WriteLine("Update successful.")
  Catch e As Exception
    Try
      myTrans.Rollback()
    Catch ex As SqlException
      If Not myTrans.Connection Is Nothing Then
        Console.WriteLine("An exception of type " & ex.GetType().ToString() & _
          " was encountered while attempting to roll back the transaction.")
      End If
    End Try

    Console.WriteLine("An exception of type " & e.GetType().ToString() & " was encountered.")
    Console.WriteLine("Update failed.")
  End Try

  myConnection.Close()
End Sub

// C#
public void RunSqlTransaction(SqlDataAdapter da, SqlConnection myConnection, DataSet ds)
{
  myConnection.Open();
  SqlTransaction myTrans = myConnection.BeginTransaction();
  // Each command taking part in the transaction must have its Transaction property set.
  // The adapter's insert, update, and delete commands are assumed to be configured.
  da.InsertCommand.Transaction = myTrans;
  da.UpdateCommand.Transaction = myTrans;
  da.DeleteCommand.Transaction = myTrans;

  try
  {
    da.Update(ds);
    myTrans.Commit();
    Console.WriteLine("Update successful.");
  }
  catch (Exception e)
  {
    try
    {
      myTrans.Rollback();
    }
    catch (SqlException ex)
    {
      if (myTrans.Connection != null)
      {
        Console.WriteLine("An exception of type " + ex.GetType() +
          " was encountered while attempting to roll back the transaction.");
      }
    }

    Console.WriteLine("An exception of type " + e.GetType() + " was encountered.");
    Console.WriteLine("Update failed.");
  }

  myConnection.Close();
}

Always Close Connections and DataReaders
When you are finished using a Connection or DataReader object, always close it explicitly. Although garbage collection eventually cleans up objects and thereby frees connections and other managed resources, garbage collection runs only when it needs to; it remains your responsibility to make sure any precious resources are released explicitly. Connections that are not explicitly closed might not be returned to the pool. For example, a connection that has gone out of scope but has not been explicitly closed is returned to the connection pool only if the pool size limit has been reached and the connection is still valid.

Note: Do not call Close or Dispose on a Connection, a DataReader, or any other managed object in the Finalize method of your class. In a finalizer, release only the unmanaged resources that your class owns directly. If your class does not own any unmanaged resources, do not include a Finalize method in the class definition.

Using the "using" Statement in C#
For C# programmers, a convenient way to make sure that Connection and DataReader objects are always closed is the using statement. When the code leaves its scope, the using statement automatically calls Dispose on the object being "used". For example:

// C#
string connString =
    "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind;";

using (SqlConnection conn = new SqlConnection(connString))
{
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT CustomerID, CompanyName FROM Customers";

    conn.Open();

    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read())
            Console.WriteLine("{0}\t{1}", dr.GetString(0), dr.GetString(1));
    }
}

The using statement cannot be used in Microsoft Visual Basic .NET.

Avoid Checking the OleDbConnection.State Property
If the connection has been opened, the OleDbConnection.State property makes the native OLE DB call IDBProperties.GetProperties for the connection-status property in the data source information property set, which may result in a round trip to the data source. In other words, checking the State property can be expensive, so check it only when you need to. If you need to check it often, your application may perform better if you listen for the StateChange event of the OleDbConnection instead.

Integrating ADO.NET with XML
ADO.NET provides extensive XML integration in the DataSet and also exposes some of the XML functionality of SQL Server 2000 and later. You can also use SQLXML 3.0 for rich access to the XML functionality of SQL Server 2000 and later. Here are tips and information for using XML and ADO.NET.

The DataSet and XML
The DataSet is closely integrated with XML and provides the following capabilities:

- Load the DataSet schema (its relational structure) from an XSD schema.
- Load the contents of the DataSet from XML. If no schema is supplied, the DataSet schema can be inferred from the contents of the XML document.
- Write the DataSet schema out as an XSD schema.
- Write the contents of the DataSet out as XML.
- Provide synchronized access to a relational view of the data through the DataSet and to a hierarchical view of the same data through an XmlDataDocument. Note: You can use this synchronization to apply XML functionality (for example, XPath queries and XSLT transformations) to the data in a DataSet, or to provide a relational view over all or part of an XML document while preserving the fidelity of the original XML.

A short round trip through these capabilities is sketched below.
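// C# sketch: the file names are arbitrary placeholders.
using System.Data;
using System.Xml;

class XmlRoundTripSketch
{
    public static void RoundTrip(DataSet ds)
    {
        // Persist the schema and the contents, then load them back.
        ds.WriteXmlSchema("customers.xsd");
        ds.WriteXml("customers.xml");

        DataSet copy = new DataSet();
        copy.ReadXmlSchema("customers.xsd");   // define tables and columns first...
        copy.ReadXml("customers.xml");         // ...so nothing has to be inferred

        // Synchronized hierarchical view for XPath/XSLT over the same data.
        XmlDataDocument doc = new XmlDataDocument(copy);
    }
}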

Schema Inference
When loading a DataSet from an XML file, you can load the DataSet schema from an XSD schema, or you can define the tables and columns before loading the data. If no XSD schema is available and you do not know which tables and columns to define for the contents of the XML file, the schema can be inferred from the structure of the XML document. Schema inference is useful as a migration tool, but it should be limited to design-time applications because of the following restrictions:
- Inferring the schema introduces additional processing that hurts application performance.
- All inferred columns are of type string.
- The inference process is not deterministic; it is based on the contents of the XML file, not on an intended schema. As a result, two XML files with the same intended schema can produce two completely different inferred schemas if their contents differ.

SQL Server FOR XML Queries
If you are returning the results of a FOR XML query from SQL Server 2000, you can have the .NET Framework data provider for SQL Server create an XmlReader directly by using the SqlCommand.ExecuteXmlReader method.

SQLXML Managed Classes
Classes are available that expose the XML functionality of SQL Server 2000 to the .NET Framework. They can be found in the Microsoft.Data.SqlXml namespace and add the ability to run XPath queries and XML template files and to apply XSLT transformations to the data. The SQLXML managed classes are included in the XML for Microsoft SQL Server 2000 (SQLXML 2.0) release.

More Useful Tips
Here are some general tips for writing ADO.NET code.

Avoiding Auto-Increment Value Conflicts
Like most data sources, the DataSet lets you identify columns whose values increment automatically when new rows are added. When the auto-increment values in a DataSet correspond to auto-increment columns at the data source, you need to avoid conflicts between the rows added locally and the rows added at the data source. For example, consider a table whose primary key column, CustomerID, auto-increments. Two new customer rows are added to the DataSet and receive the auto-increment CustomerID values 1 and 2. Then only the second customer row is passed to the DataAdapter's Update method; at the data source the newly added row receives the auto-increment CustomerID value 1, which does not match the value 2 in the DataSet. When the DataAdapter fills the second row with the returned value, a constraint violation occurs because the first customer row already uses the CustomerID value 1. To avoid this, create the auto-increment column in the DataSet with an AutoIncrementStep value of -1 and an AutoIncrementSeed value of 0, and ensure that the auto-increment values generated by the data source start at 1 and increment with a positive step. The DataSet then generates negative auto-increment values that never conflict with the positive values generated by the data source. Another option is to use columns of type Guid instead of auto-increment columns; the algorithm that generates Guid values should never produce the same value at the data source as one generated in the DataSet. If the auto-increment column serves purely as a unique value and carries no other meaning, consider using Guids: they are unique, and they avoid the extra work required to reconcile auto-increment columns. A sketch of the negative-step setup appears below.
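// C# sketch: the column name and type are illustrative. Local auto-increment
// values run 0, -1, -2, ... and cannot collide with the positive identity
// values handed out by the server.
using System.Data;

class AutoIncrementSketch
{
    public static void AddLocalIdColumn(DataTable table)
    {
        DataColumn id = new DataColumn("CustomerID", typeof(int));
        id.AutoIncrement = true;
        id.AutoIncrementSeed = 0;
        id.AutoIncrementStep = -1;
        table.Columns.Add(id);
        table.PrimaryKey = new DataColumn[] { id };
    }
}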
Checking for Optimistic Concurrency Violations
Because a DataSet is, by design, disconnected from the data source, your application needs to make sure that it avoids conflicts when multiple clients update data at the data source, in keeping with the optimistic concurrency model. There are several techniques for testing for optimistic concurrency violations. One involves including a timestamp column in the table. Another is to verify, in the WHERE clause of the update statement, that the original values of every column in the row still match the values found in the database. The WHERE-clause technique is sketched below.
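// C# sketch: table, columns, and sizes are placeholders. Assigned to
// DataAdapter.UpdateCommand, this statement updates a row only if the
// original values still match; when no row is affected, the adapter's
// Update call reports a concurrency violation (DBConcurrencyException).
using System.Data;
using System.Data.SqlClient;

class ConcurrencySketch
{
    public static SqlCommand BuildUpdateCommand(SqlConnection conn)
    {
        SqlCommand cmd = new SqlCommand(
            "UPDATE Customers SET CompanyName = @CompanyName " +
            "WHERE CustomerID = @OrigID AND CompanyName = @OrigCompanyName", conn);

        // The new value comes from the current row version.
        cmd.Parameters.Add("@CompanyName", SqlDbType.NVarChar, 40).SourceColumn = "CompanyName";

        // The original values are compared in the WHERE clause.
        SqlParameter p = cmd.Parameters.Add("@OrigID", SqlDbType.NChar, 5);
        p.SourceColumn = "CustomerID";
        p.SourceVersion = DataRowVersion.Original;

        p = cmd.Parameters.Add("@OrigCompanyName", SqlDbType.NVarChar, 40);
        p.SourceColumn = "CompanyName";
        p.SourceVersion = DataRowVersion.Original;

        return cmd;
    }
}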

Multithreaded Programming
ADO.NET is optimized for performance, throughput, and scalability. As a result, the ADO.NET objects do not lock resources and must be used on a single thread only. The one exception is the DataSet, which is thread-safe for multiple readers; however, you need to lock the DataSet during writes.

Accessing ADO Through COM Interop
ADO.NET is designed to be the best solution for a large number of applications. Some applications, however, need capabilities that are available only through the ADO objects, for example ADO Multidimensional (ADOMD). In these cases the application can access ADO through COM interop. Note that using COM interop to access data with ADO carries a performance cost; before settling on a design that accesses ADO through COM interop, determine whether ADO.NET meets your design requirements.

Data Providers at a Glance
The entries below describe the available data providers and the data sources each one is best suited to.

Provider: SQL Server .NET data provider. Details: found in the System.Data.SqlClient namespace. Best suited to: middle-tier applications that use Microsoft SQL Server 7.0 or later, and single-tier applications that use Microsoft Data Engine (MSDE) or Microsoft SQL Server 7.0 or later. For Microsoft SQL Server 6.5 and earlier, you must use the OLE DB provider for SQL Server together with the OLE DB .NET data provider.

Provider: OLE DB .NET data provider. Details: found in the System.Data.OleDb namespace. Best suited to: middle-tier applications that use Microsoft SQL Server 6.5 or earlier, or any OLE DB provider that implements the OLE DB interfaces listed in the .NET Framework SDK (the OLE DB 2.5 interfaces are not required). For Microsoft SQL Server 7.0 or later, the .NET Framework data provider for SQL Server is recommended instead. Also suited to single-tier applications that use a Microsoft Access database; using an Access database from a middle-tier application is not recommended. The OLE DB provider for ODBC (MSDASQL) is blocked; to reach an Open Database Connectivity (ODBC) data source, use the ODBC .NET data provider, which is included with version 1.1 of the .NET Framework SDK.

Provider: ODBC .NET data provider. Details: found in the Microsoft.Data.Odbc namespace; provides access to data sources connected through an ODBC driver. Note: in version 1.1 and later of the .NET Framework, the ODBC data provider ships as part of the Framework itself, and its namespace is System.Data.Odbc.

Provider: .NET data provider for Oracle. Details: found in the System.Data.OracleClient namespace; provides access to Oracle data sources (version 8.1.7 and later). Note: the .NET data provider for Oracle is included in version 1.1 and later of the .NET Framework.

Provider: custom .NET data provider. Details: ADO.NET exposes a minimal set of interfaces that enable you to implement your own .NET Framework data provider. For more information about creating a custom data provider, see the .NET Framework SDK.

