Improve the performance of data access layers (2)


3. Select features that optimize performance

3.1. Use parameter markers as stored procedure arguments

When calling a stored procedure, use parameter markers for its arguments whenever possible instead of embedding literal values. A JDBC driver can execute a stored procedure call either as an ordinary SQL query or as an optimized call through a remote procedure call (RPC). When the call is executed as a SQL query, the database server must parse the statement, validate the argument types, and convert the arguments into the correct data types, which is clearly not the most efficient path. The SQL statement is always sent to the database server as a string, for example "{call getCustName(12345)}". In that case, even though the programmer intends 12345 to be an integer argument to getCustName, it still travels to the database inside a string. The database server parses the statement, isolates the single argument value 12345, and converts the string "12345" into an integer before executing the procedure as SQL. Calling the stored procedure through an RPC avoids the overhead of processing the SQL string.

Case 1: the call cannot be optimized by a server-side RPC. The server must parse the statement, validate the argument types, and convert the arguments into the correct types before executing.

CallableStatement cstmt = conn.prepareCall("{call getCustName(12345)}");
ResultSet rs = cstmt.executeQuery();

Case 2: the call can be optimized by a server-side RPC. Because the application avoids the overhead of literal arguments and the JDBC driver can invoke the stored procedure as an RPC against the database, execution time is greatly reduced.

CallableStatement cstmt = conn.prepareCall("{call getCustName(?)}");
cstmt.setLong(1, 12345);
ResultSet rs = cstmt.executeQuery();

JDBC drivers optimize performance for different usage patterns, so choose between PreparedStatement objects and Statement objects according to how a query is used. If a SQL statement is executed only once, use a Statement object; if it is executed two or more times, use a PreparedStatement object. Sometimes a statement pool is used to improve performance. With a statement pool, use a Statement object when a query will be executed once and probably never again; use a PreparedStatement when a query is executed rarely but is likely to be executed again during the lifetime of the statement pool. In the same situation without a statement pool, use a Statement object.
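A minimal sketch of that choice, assuming an open Connection conn and a hypothetical employees table with dept_id and salary columns:

// Executed once: a plain Statement is sufficient.
Statement stmt = conn.createStatement();
ResultSet countRs = stmt.executeQuery("SELECT COUNT(*) FROM employees");
if (countRs.next()) {
    System.out.println("employees: " + countRs.getLong(1));
}
countRs.close();
stmt.close();

// Executed repeatedly with different values: prepare once, then bind and
// execute many times so the prepared execution plan can be reused.
PreparedStatement ps = conn.prepareStatement(
        "SELECT salary FROM employees WHERE dept_id = ?");
int[] departments = {10, 20, 30};
for (int dept : departments) {
    ps.setInt(1, dept);
    ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        System.out.println(dept + " -> " + rs.getInt(1));
    }
    rs.close();
}
ps.close();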

3.2. Use batches instead of executing a PreparedStatement repeatedly

Updating a large amount of data typically means preparing an INSERT statement and executing it many times, which results in a large number of network round trips. To reduce the number of JDBC calls and improve performance, you can send multiple operations to the database at once through a PreparedStatement batch. For comparison, consider case 1 and case 2 below.

Case 1: executing a PreparedStatement multiple times.

PreparedStatement ps = conn.prepareStatement("INSERT INTO employees VALUES (?, ?, ?)");
for (int n = 0; n < 100; n++) {
    ps.setString(1, name[n]);
    ps.setLong(2, id[n]);
    ps.setInt(3, salary[n]);
    ps.executeUpdate();
}

Case 2: using a batch.

PreparedStatement ps = conn.prepareStatement("INSERT INTO employees VALUES (?, ?, ?)");
for (int n = 0; n < 100; n++) {
    ps.setString(1, name[n]);
    ps.setLong(2, id[n]);
    ps.setInt(3, salary[n]);
    ps.addBatch();
}
ps.executeBatch();

In case 1, a PreparedStatement is used to execute an INSERT statement multiple times. Inserting 100 rows requires 101 network round trips: one to prepare the statement and 100 more to execute each insert. When the addBatch() method is used, as in case 2, only two network round trips are needed: one to prepare the statement and one to execute the batch. Although batching costs more CPU on the database server, the reduced number of network round trips yields a net performance gain. Remember that the key to good JDBC driver performance is reducing the network traffic between the JDBC driver and the database server.

3.3. Select the appropriate cursor

Choosing a suitable cursor can improve the flexibility of your application. This section summarizes the performance characteristics of three types of cursors.

A forward-only cursor provides excellent performance when reading all rows of a table sequentially; there is no faster way to retrieve table data. However, it cannot be used when the application must process rows in a non-sequential order.

For applications that require a high level of concurrency control on the database and the ability to scroll both forward and backward through the result set, an insensitive cursor supported by the JDBC driver is ideal. The first request on an insensitive cursor retrieves all of the rows (or, when the JDBC driver uses a "lazy" fetching mode, some of the rows) and stores them on the client. The first request is therefore very slow, especially when long data is retrieved; subsequent requests need no network traffic (or, with a lazy driver, only limited network traffic) and are very fast. Because the first request is slow, an insensitive cursor should not be used for a single request for one row of data. Developers should also avoid insensitive cursors when long data is returned, because memory can easily be exhausted. Some insensitive-cursor implementations cache the data in a temporary table on the database server and avoid these performance problems, but most cache the data locally in the application.

Sensitive cursors, sometimes called keyset-driven cursors, use identifiers that already exist in your database, such as ROWID. When you scroll through the result set, the data matching those identifiers is retrieved. Because each request generates network traffic, performance can be very poor; however, returning rows in a non-sequential order does not affect performance further.

To illustrate, consider an application that typically returns 1000 rows of data. When the query is executed or the first row is requested, the JDBC driver does not execute the SELECT statement provided by the application. Instead, it replaces the SELECT list of the query with a key identifier, for example ROWID. This modified query is executed by the driver, and all 1000 key values are retrieved from the database and cached by the driver. Each row request from the application then goes to the JDBC driver: to return the correct row, the driver looks up the key value in its local cache, constructs a modified query with a WHERE clause similar to "WHERE ROWID = ?", executes that query, and retrieves the single result row from the server.

A sensitive cursor is the preferred cursor mode in dynamic situations, when the application cannot rely on data cached by an insensitive cursor.
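In JDBC, the cursor behavior is requested when the statement is created, through the result set type and concurrency constants. A minimal sketch, assuming an open Connection conn and a hypothetical employees table; whether a driver implements the scrollable types as client-cached (insensitive) or keyset-driven (sensitive) cursors is driver-specific:

// Forward-only, read-only: fastest for reading all rows sequentially.
Statement fwd = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);

// Scroll-insensitive: rows are typically cached on the client, so the first
// fetch is expensive but later scrolling needs little or no network traffic.
Statement insensitive = conn.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);

// Scroll-sensitive: often keyset-driven, so each positioned fetch may cost a
// network round trip but sees current data.
Statement sensitive = conn.createStatement(
        ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);

ResultSet rs = insensitive.executeQuery("SELECT name, salary FROM employees");
rs.afterLast();
while (rs.previous()) {   // scroll backward through the cached rows
    System.out.println(rs.getString(1) + " " + rs.getInt(2));
}
rs.close();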

3.4. Use the get methods effectively

JDBC provides a number of methods for retrieving data from a result set, such as getInt(), getString(), and getObject(). The getObject() method is the most generic, but it gives the worst performance when no non-default mapping is specified, because the JDBC driver must do extra processing to determine the type of the retrieved value and find the appropriate mapping. Therefore, always use the method that matches the data type explicitly.

To improve performance further, supply the column index of the value being retrieved, for example getString(1), getLong(2), and getInt(3), rather than the column name. If column indexes are not specified, network traffic is unaffected, but the cost of conversion and lookup rises. For example, suppose you call getString("foo"): the driver may have to convert the column identifier foo to uppercase (if necessary) and compare it against every column name in its list. If a column index is supplied, a large part of that processing is avoided.

For example, suppose you have a result set of 100 rows and 15 columns, and the column names are not included in the result set. You are interested in three columns: employeeName (a string), employeeNumber (a long integer), and salary (an integer). If you call getString("employeeName"), getLong("employeeNumber"), and getInt("salary"), each column name must be converted to the matching case used in the database metadata, and the lookup cost rises accordingly. If you instead call getString(1), getLong(2), and getInt(15), performance improves significantly.
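A minimal sketch of the indexed form, assuming an open Connection conn and a hypothetical 15-column employees table whose first, second, and fifteenth columns are employeeName, employeeNumber, and salary:

ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM employees");
while (rs.next()) {
    // Slower alternative: rs.getString("employeeName") forces the driver to
    // match the name against every column in the result set metadata.
    String name = rs.getString(1);   // employeeName
    long number = rs.getLong(2);     // employeeNumber
    int salary = rs.getInt(15);      // salary
    System.out.println(name + " " + number + " " + salary);
}
rs.close();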

3.5. Retrieve automatically generated keys

Many databases have a hidden column (also called a pseudo-column) that describes each row in a table. Because the pseudo-column usually describes the physical disk address of the row, using it in a query is the fastest way to access the row. Before JDBC 3.0, an application could only retrieve the value of the pseudo-column by executing a SELECT statement immediately after inserting the data. For example:

// Insert the row.
int rowcount = stmt.executeUpdate(
    "INSERT INTO LocalGeniusList (name) VALUES ('Karen')");
// Now get the disk address - the ROWID - of the newly inserted row.
ResultSet rs = stmt.executeQuery(
    "SELECT ROWID FROM LocalGeniusList WHERE name = 'Karen'");

This approach has two major drawbacks. First, retrieving the pseudo-column requires a separate query to be sent over the network to the server. Second, because the table may have no primary key, the query condition may not identify the row uniquely; multiple pseudo-column values may be returned, and the application may not be able to tell which one belongs to the most recently inserted row.

The JDBC 3.0 specification adds an optional feature for retrieving the automatically generated key information of a row when the row is inserted. For example:

int rowcount = stmt.executeUpdate(
    "INSERT INTO LocalGeniusList (name) VALUES ('Karen')",
    Statement.RETURN_GENERATED_KEYS);   // insert the row and return the key
ResultSet rs = stmt.getGeneratedKeys(); // the key is now available

The generated key is available even when the table has no primary key, which gives the application the fastest way to uniquely identify the row. The ability to retrieve pseudo-columns when accessing data gives JDBC developers both flexibility and better performance.

4. Manage connections and data updates

4.1. Manage connections

The quality of connection management directly affects application performance. Optimize your application by creating multiple Statement objects on a single connection rather than opening multiple connections. Avoid connecting to the data source again after the initial connection is established; a bad coding habit is to repeatedly connect, execute a SQL statement, and disconnect. A Connection object can have multiple Statement objects associated with it, and because a Statement object is an in-memory structure holding the definition of a SQL statement, one connection can manage many SQL statements. In addition, a connection pool can significantly improve performance, especially for applications that connect over a network or through the web. A connection pool lets you reuse connections: closing a connection does not close the physical connection to the database but returns the connection to the pool. When an application requests a connection, an active connection is reused from the pool, which avoids the network I/O required to create a new connection.
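A minimal sketch of the pattern, assuming dataSource is a pooled javax.sql.DataSource (obtained, for example, from JNDI or a pooling library) and hypothetical orders and customers tables:

// One connection, several statements: avoids repeated connect/disconnect cycles.
Connection conn = dataSource.getConnection();
Statement orderStmt = conn.createStatement();
Statement customerStmt = conn.createStatement();

ResultSet orders = orderStmt.executeQuery("SELECT id FROM orders");
ResultSet customers = customerStmt.executeQuery("SELECT id FROM customers");
// ... process both result sets ...

orders.close();
customers.close();
orderStmt.close();
customerStmt.close();
// With a connection pool, close() returns the connection to the pool
// instead of tearing down the physical database connection.
conn.close();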

4.2. Manage commits in transactions

Due to disk I / O and potential network I / O, the transaction is often slower. WSCONNECTION.SetAutoCommit (false) is often used to close automatic submission settings. What did you include? The database server must refer to each data page containing updated and new data. This is usually a process of continuously written to the log file, but it is also a disk I / O. By default, when connected to the data source, Auto Submit is open, since a large amount of disk I / O required for each operation requires, automatic submission mode usually weakens performance. In addition, most databases do not provide local automatic submission mode. For this type of server, the JDBC driver must express the server to the server and a begin transaction for each operation. Although it is helpful to use the transaction, do not use it over. The throughput will be reduced due to the long-term holding lock for a long time in order to prevent other users from accessing the row. Submitting a transaction in a short period of time can maximize concurrency. 4.3. Select the correct transaction mode
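A minimal sketch of this pattern, assuming an open Connection conn, a hypothetical accounts table, and ids and amounts arrays of type long[]; the batch size of 50 is an arbitrary illustration:

conn.setAutoCommit(false);                 // take control of transaction boundaries
PreparedStatement ps = conn.prepareStatement(
        "UPDATE accounts SET balance = balance + ? WHERE id = ?");
try {
    for (int i = 0; i < ids.length; i++) {
        ps.setLong(1, amounts[i]);
        ps.setLong(2, ids[i]);
        ps.executeUpdate();
        if ((i + 1) % 50 == 0) {
            conn.commit();                 // commit at short intervals to release locks sooner
        }
    }
    conn.commit();                         // commit the remaining work
} catch (SQLException e) {
    conn.rollback();                       // undo the uncommitted portion on failure
    throw e;
} finally {
    ps.close();
}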

4.3. Select the correct transaction mode

Many systems support distributed transactions, that is, transactions that span multiple connections. Because of the logging and the network I/O between the components involved in a distributed transaction (the JDBC driver, the transaction monitor, and the database system), distributed transactions are much slower than ordinary local transactions. Do not use them unless you genuinely need a distributed transaction; use local transactions whenever possible. Note that many Java application servers provide a default transaction behavior that uses distributed transactions. For the best system performance, design the application to work against a single Connection object and avoid distributed transactions unless they are required.

4.4. Use the updateXXX methods

Although programmatic updates are not suitable for every type of application, developers should try to use programmatic updates and deletes, that is, update data through the updateXXX methods of the ResultSet object. This approach lets developers update data without building complex SQL statements. To apply a change to the database, the updateRow() method must be called before the cursor moves off the current row of the result set.

In the code fragment below, the value of the age column of the result set rs is retrieved with getInt(), and updateInt() is used to change that column to the integer value 25. updateRow() then writes the modified value to the database.

int n = rs.getInt("age");   // n contains the value of the age column in the result set rs
...
rs.updateInt("age", 25);
rs.updateRow();

Besides making the application easier to maintain, programmatic updates usually give better performance: because the cursor is already positioned on the row being updated, the overhead of locating that row is avoided.
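For this to work, the result set must be opened as updatable. A minimal sketch of the full flow, assuming an open Connection conn, a hypothetical employees table with name and age columns, and a driver that supports updatable result sets:

Statement stmt = conn.createStatement(
        ResultSet.TYPE_SCROLL_SENSITIVE,   // scrollable so rows can be revisited
        ResultSet.CONCUR_UPDATABLE);       // required for updateXXX/updateRow
ResultSet rs = stmt.executeQuery("SELECT name, age FROM employees");

while (rs.next()) {
    if (rs.getInt("age") < 0) {            // illustrative fix-up of bad data in place
        rs.updateInt("age", 25);
        rs.updateRow();                    // must be called before leaving the row
    }
}
rs.close();
stmt.close();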

4.5. Use getBestRowIdentifier()

