Oracle Tuning Summary



Oracle performance tuning generally covers two areas: one is tuning the Oracle database itself, such as optimizing the SGA and PGA settings; the other is optimizing the applications that connect to Oracle and their SQL statements. Only when both areas are addressed can a complete Oracle application run in good condition. This paper is a brief summary of Oracle tuning topics. It tries to combine practical operations with the necessary theory, so that most readers with general Oracle knowledge can understand Oracle tuning and adjust some parameters according to their actual situation. For more detailed knowledge, please refer to the books recommended at the end of this article. Because the topic is broad and complicated, this article is bound to contain omissions or even mistakes; corrections are welcome so that we can make progress together.

1. The setting of the SGA

In Oracle tuning, the setting of the SGA is the key. SGA stands for Shared Global Area, also called System Global Area, the shared or system global area.

Because the memory in the SGA is shared and global, on UNIX one or more shared memory segments must be set up for Oracle, since Oracle on UNIX is multi-process. Oracle on Windows is a single process with multiple threads, so no shared memory segment needs to be configured.

1.1 Each component of the SGA

In SQL*Plus you can query the size of each SGA component:

SQL> SELECT * FROM v$sga;

NAME                      VALUE
-------------------- ----------
Fixed Size               104936
Variable Size         823164928
Database Buffers     1073741824
Redo Buffers             172032

Or:

SQL> show sga

Total System Global Area 1897183720 bytes
Fixed Size                   104936 bytes
Variable Size             823164928 bytes
Database Buffers         1073741824 bytes
Redo Buffers                 172032 bytes

Fixed Size may differ between Oracle releases, but for a given environment it is a fixed value. It stores information about each part of the SGA and can be seen as the region that bootstraps the creation of the SGA.

Variable Size contains memory areas such as shared_pool_size, java_pool_size, and large_pool_size.

Database Buffers refers to the data buffer cache. In 8i it amounts to DB_BLOCK_BUFFERS * DB_BLOCK_SIZE, plus BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE. In 9i it includes DB_CACHE_SIZE, DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, and DB_nK_CACHE_SIZE.
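As a quick sanity check, the 8i arithmetic can be verified against the v$sga output above. The parameter values below are hypothetical ones that would produce the Database Buffers figure shown earlier:

SQL> show parameter db_block_buffers

NAME                  TYPE    VALUE
--------------------- ------- -------
db_block_buffers      integer 131072

SQL> show parameter db_block_size

NAME                  TYPE    VALUE
--------------------- ------- -------
db_block_size         integer 8192

131072 blocks * 8192 bytes per block = 1073741824 bytes, matching the Database Buffers value in the v$sga query above (assuming BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE are not set).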

Redo Buffers refers to the log buffer, log_buffer. Note that the values queried from v$parameter, v$sgastat, and v$sga may differ. The value in v$parameter is what is set in the initialization parameter file. The value in v$sgastat is the log buffer size actually allocated by Oracle (the allocation is discrete, with the block as the smallest unit of allocation). The value queried from v$sga additionally includes some guard pages that Oracle sets up to protect the log buffer; typically we find the guard pages to total 8K (different environments may differ). Compare the following:

SQL> SELECT SUBSTR(name, 1, 10) name, SUBSTR(value, 1, 10) value
  2  FROM v$parameter WHERE name = 'log_buffer';

NAME       VALUE
---------- ----------
log_buffer 163840

SQL> SELECT * FROM v$sgastat WHERE pool IS NULL;

POOL        NAME                  BYTES
----------- ---------------- ----------
            fixed_sga            104936
            db_block_buffers 1073741824
            log_buffer           163840

SQL> SELECT * FROM v$sga;

NAME                      VALUE
-------------------- ----------
Fixed Size               104936
Variable Size         823164928
Database Buffers     1073741824
Redo Buffers             172032

172032 - 163840 = 8192

(The above test data was obtained on HP-UX B.11.11 with Oracle 8.1.7.4.)

1.2 SGA size settings

After this simple analysis of the SGA structure, the next question is how to set the SGA size appropriately for the system. The SGA is a memory area and occupies the physical memory of the system. Therefore, for an Oracle application, a bigger SGA is not always better; we need to find the balancing point at which the system performs best.

1.2.1 Preparation before setting the parameters

Before setting the SGA memory parameters, we must first ask ourselves a few questions:

1. How much physical memory does the system have?
2. How much memory does the operating system need?
3. Does the database use a file system or raw devices?
4. How many concurrent connections are there?
5. Is the application OLTP type or OLAP type?

Based on the answers to these questions, we can roughly estimate the memory settings for the system. Let us now discuss them one by one.

First, how much physical memory there is, is the easiest question to answer. Next, how much memory does the operating system use? From experience it will not be too much, usually within 200M (not counting a large number of process control blocks).

Next we must explore an important issue, namely file systems versus raw devices, which is easily ignored. The operating system uses a large amount of buffer cache to cache operating system blocks. Consequently, when the database fetches a data block, even if it misses in the SGA, it may actually be obtained from the file cache of the operating system. If the database and operating system support asynchronous IO, then when the database writer process DBWR writes to disk, the operating system merely marks the block in the file cache for deferred writing, and only notifies DBWR that the write is complete once the block has really been written to disk. The memory required for this file cache may be relatively large; as a conservative estimate, we should allow 0.2-0.3 times the memory size for it. If we use raw devices, this part of the cache need not be considered, and in that case the SGA has a chance to be set larger.

How many concurrent connections the database has is actually related to the size of the PGA (or large_pool_size under MTS). This question is in turn related to whether the application is OLTP or OLAP: for OLTP types Oracle tends to use MTS, while dedicated server mode is used for OLAP types, and OLAP may also involve queries with large amounts of sorting, which affects our memory estimate. Putting all these questions together, the result is mainly reflected in the size of the UGA. The UGA mainly includes the following memory settings:

SQL> show parameters area_size

NAME                             TYPE    VALUE
-------------------------------- ------- ---------
bitmap_merge_area_size           integer 1048576
create_bitmap_area_size          integer 8388608
hash_area_size                   integer 131072
sort_area_size                   integer 65536

Of these, the one we pay most attention to is usually sort_area_size, the memory a database session uses when a query needs to sort; when this memory is insufficient, the temporary tablespace is used instead. This parameter setting is important because disk sorts and memory sorts differ greatly in efficiency. When there is a lot of disk I/O from sorting, you can consider increasing the value of sort_area_size. sort_area_size is the maximum memory Oracle uses for one sort; before the first row of the sorted result set is returned, Oracle releases memory down to sort_area_retained_size, and only once the last row of the result set has been returned is all the memory released. Operations that cause sorts include SELECT DISTINCT, MINUS, INTERSECT, UNION, and MIN(), MAX(), COUNT(); operations that do not cause sorts include UPDATE, and SELECT with a BETWEEN clause, etc.

These four parameters are set per session; that is, they describe the memory used by a single session, not by the entire database. Occasionally someone misunderstands such a parameter as applying to the whole database, which is an extremely serious mistake. If MTS is configured, the UGA is allocated within large_pool_size, which means this part of memory can be shared among different processes (threads).

On this basis, assume the database runs with a number of concurrently executing server processes; with the four parameters above at their Oracle 8.1.7 defaults, we can calculate the approximate PGA size in dedicated mode. Since sessions rarely use create_bitmap_area_size or bitmap_merge_area_size, we usually do not count those two parameters. Considering the variables, stack and other information kept beyond these parameters, a process is estimated at about 2M, so 200 processes may use roughly 400M of PGA.

1.2.2 An empirical formula

Now, based on these assumptions, let us see how large the SGA can actually be. On a server with 1G of memory, we can assign about 400-500M to the SGA. With 2G of memory, approximately 1G can go to the SGA; with 8G of memory, about 5G can go to the SGA. Of course, this is measured with the default sort memory, sort_area_size = 64K; if we need to enlarge parameters such as sort_area_size and hash_area_size, then we should weigh this against the number of concurrent processes.

In practice, we are more accustomed to expressing the problem through an intuitive formula:

OS memory + SGA + concurrent server processes * (sort_area_size + hash_area_size + 2M) < 0.7 * total physical memory
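As a worked example under assumed values (2G of physical memory, 200M reserved for the OS, 200 concurrent server processes, default sort_area_size = 64K and hash_area_size = 128K; the figures are illustrative only):

200M + SGA + 200 * (64K + 128K + 2M) ≈ 200M + SGA + 440M < 0.7 * 2048M ≈ 1434M

which gives SGA < roughly 800M, in the same ballpark as the ~1G suggested above for a 2G server.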

(The formula is dead, but the system is alive; actual tuning need not stick to the formula. This is only a reference suggestion.)

In practical applications, if raw devices are used, we can increase the SGA (if needed). Since almost all operating systems now use virtual memory, setting the SGA too large will not directly cause an error, but it may lead to frequent memory page swapping (page in/out). If this phenomenon is observed at the operating system level, then we need to adjust the memory settings.

1.2.3 Setting each parameter

So how should each parameter within the SGA be set? We discuss them below.

log_buffer: For the size of the log buffer, I usually do not have much to suggest; referring to the trigger conditions for LGWR writes, we find that anything beyond 3M is not very meaningful. For a production system, consider setting log_buffer to 1-3M, and then adjust for the specific situation.

large_pool_size: For the large pool, if MTS is not used, 20-30M is recommended as sufficient. This part is mainly used to hold some information during parallel queries, and RMAN may use it during backup. If MTS is configured, since the UGA moves here, the size must be considered together with session memory parameters such as sort_area_size; in general it can be estimated as sessions * (sort_area_size + 2M). One reminder here: unless necessary, we do not advocate using MTS, especially when the number of online users is less than 500.

java_pool_size: If the database does not use Java, we usually think 10-20M is enough. In fact it can be as small as 32K, depending on the components installed in the database (such as the HTTP Server).

shared_pool_size: This is the most controversial memory setting so far. According to many documents, this part should be almost as large as the data buffer. But in fact that is not the case. First we must understand the purpose of this memory: it caches SQL that has already been parsed so that it can be reused without being parsed again. The reason is that, for a new SQL (no identical, already-parsed SQL exists in the shared pool), the database performs a hard parse, which is a very resource-consuming process; if it already exists, only a soft parse is done (finding the identical SQL in the shared pool), which consumes far fewer resources. So we hope that SQL can be shared as much as possible. If the parameter is not set large enough, ORA-04031 errors will often appear, meaning there is no sufficient contiguous free space available to parse new SQL, so naturally we would like this parameter to be larger. However, increasing this parameter also has a negative impact: because the shared structures must be maintained, a larger memory makes aging SQL out more expensive and brings a lot of management overhead, all of which may cause serious CPU problems.

In a relatively large system that fully uses bind variables, shared_pool_size should usually remain within 300M, unless the system uses a large number of stored procedures, functions and packages; Oracle ERP, for example, may reach 500M or even higher. So for a system with 1G of memory, we may consider setting this parameter to 100M; a 2G system, 150M; an 8G system, 200-300M.

For a system that does not use, or does not sufficiently use, bind variables, this may give us a serious problem. SQL that does not use bind variables is called literal SQL. For example, the following two statements are considered different SQL and require two hard parses:

select * from emp where name = 'tom';
select * from emp where name = 'jerry';

If 'tom' and 'jerry' are replaced with a variable v, that is, if bind variables are used, they become the same SQL and can be shared well. Sharing SQL is exactly why the shared_pool_size memory exists; Oracle's purpose lies there, and not using bind variables violates Oracle's design intention, which will give our system severe problems. Of course, if monitoring at the operating system level shows no severe CPU problem, and we find the shared pool hit ratio low, we can increase shared_pool_size; but usually we do not advocate making this part of memory exceed 800M (in exceptional cases it can be larger).

In fact, we may even hope to avoid soft parses, which is handled differently in different programming languages. We may also get help by setting the session_cached_cursors parameter (this increases the PGA); that again belongs to the topic of using bind variables.

Data buffer: Now let us talk about the data buffer. After the SGA size is determined and the preceding parts have been allocated, the remainder can be allocated to this part of memory. Usually, where circumstances allow, we try to make this part as large as possible. It mainly caches database blocks, reducing or even avoiding reads from disk; its size is usually determined by db_block_buffers * db_block_size. If buffer_pool_keep and buffer_pool_recycle are set, the memory for these two parts should be added as well.
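As an aside, here is a minimal SQL*Plus sketch of the bind-variable version of the example above (emp and name are the illustrative names already used):

SQL> VARIABLE v VARCHAR2(30)
SQL> EXECUTE :v := 'tom'
SQL> SELECT * FROM emp WHERE name = :v;
SQL> EXECUTE :v := 'jerry'
SQL> SELECT * FROM emp WHERE name = :v;

Both executions share a single cursor in the shared pool, so only the first one requires a hard parse.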

From this, the principle to grasp when setting the SGA is basically: the data buffer can generally be as large as possible; shared_pool_size should be moderate; the log buffer within 1-3M is usually enough.

Assuming Oracle is 32-bit and the server RAM is greater than 2G, and paying attention to your PGA as well, the suggestion is: shared_pool_size + data buffer + large_pool_size + java_pool_size < 1.6G.

Similarly, with 512M RAM the suggestion is shared_pool_size = 50M, data buffer = 200M;

with 1G RAM, shared_pool_size = 100M, data buffer = 500M;

with 2G RAM, shared_pool_size = 150M, data buffer = 1.2G.

There is no absolute relationship between physical memory and these parameters; the figures above are only empirical guidelines.

Assuming 64-bit Oracle: with 4G of memory, shared_pool_size = 200M, data buffer = 2.5G; with 8G of memory, shared_pool_size = 300M, data buffer = 5G;

with 12G of memory, shared_pool_size = 300M-800M, data buffer = 8G.

1.3 The impact of 32-bit and 64-bit on the SGA

Why do the empirical rules on SGA size above distinguish between 32-bit Oracle and 64-bit Oracle? Because this relates to the upper limit of the SGA size. Under a 32-bit database, Oracle can usually only use about 1.7G of memory; even if we have 12G of memory, we can still only use 1.7G, which is a great pity. If we install a 64-bit database, we can use a very large amount of memory and are almost unlikely to reach the upper limit. However, a 64-bit database must be installed on a 64-bit operating system; unfortunately, on Windows only a 32-bit database can be installed. We can check whether the database is 32-bit or 64-bit as follows:

SQL> SELECT * FROM v$version;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for 32-bit Windows: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

The display differs on the UNIX platform, where 64-bit Oracle identifies itself directly, for example on HP-UX:

SQL> SELECT * FROM v$version;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
PL/SQL Release 8.1.7.4.0 - Production
CORE 8.1.7.0.0 Production
TNS for HPUX: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

32-bit Oracle has the SGA restriction regardless of whether the platform is 32-bit or 64-bit, while 64-bit Oracle can only run on a 64-bit platform. Under specific operating systems, certain means may be provided so that more than 1.7G of memory (2G or more) can be used. Since we now usually use 64-bit Oracle, the question of how to extend the SGA size on a 32-bit platform is not described further here.

1.4 Changes in 9i

Each new Oracle version is accompanied by parameter changes, and increasingly tends to make parameter settings simpler, because complex parameter settings often leave the DBA exhausted. Regarding memory changes, we can examine the following parameters. In fact, the database itself can give suggested adjustment values for the SGA-related parts of the currently running system (refer to v$db_cache_advice and v$shared_pool_advice); for the PGA there is also the related view v$pga_target_advice, etc.

Data buffer: 9i retains the 8i parameters; if the new parameters are set, the old ones are ignored. 9i uses db_cache_size to replace db_block_buffers, db_keep_cache_size to replace buffer_pool_keep, and db_recycle_cache_size to replace buffer_pool_recycle. Note that in 9i the actual cache size is set directly, no longer the number of blocks. 9i also adds db_nk_cache_size, to support the use of different block sizes in the same database: different block sizes can be defined for different tablespaces, and the buffers for them rely on this parameter, where n can take values such as 2, 4, 8, 16, 32. One more parameter to mention here is db_block_lru_latches, which has become a reserved parameter in 9i and is not recommended for manual setting.
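For instance, a minimal sketch of consulting the 9i buffer cache advisory just mentioned (this assumes the db_cache_advice parameter is ON; the view and columns are as documented for 9i):

SQL> SELECT size_for_estimate, buffers_for_estimate, estd_physical_reads
  2  FROM v$db_cache_advice
  3  WHERE name = 'DEFAULT' AND advice_status = 'ON'
  4  ORDER BY size_for_estimate;

Each row estimates how many physical reads would occur if the DEFAULT buffer pool were resized to size_for_estimate (in MB), which helps pick a reasonable db_cache_size value.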

The PGA has also changed in 9i. In dedicated mode, 9i no longer advocates using the original UGA-related parameter settings, replacing them with new parameters. If workarea_size_policy = auto, the work areas of all sessions share one large chunk of memory, which is set by pga_aggregate_target. After evaluating the maximum PGA memory that all processes can use, we can set this parameter in the initialization parameters, and then no longer need to care about the other *_area_size parameters.
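A minimal sketch of the 9i settings just described (the 200M target is an illustrative figure, not a recommendation):

SQL> ALTER SYSTEM SET workarea_size_policy = auto;
SQL> ALTER SYSTEM SET pga_aggregate_target = 200M;

With these in place, sort_area_size, hash_area_size and the like are ignored for sessions running under the automatic policy.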

sga_max_size: In 9i, if sga_max_size is set, then as long as the total remains less than or equal to this value, the sizes of the data buffer and the shared pool can be adjusted dynamically:

SQL> show parameters sga_max_size

NAME                    TYPE    VALUE
----------------------- ------- ----------
sga_max_size            unknown 193752940

SQL> ALTER SYSTEM SET db_cache_size = 30000000;

System altered.

SQL> ALTER SYSTEM SET shared_pool_size = 20480000;

System altered.

1.5 The lock_sga = true problem

Since almost all operating systems support virtual memory, even when the memory we use is less than the physical memory, the operating system may still swap part of the SGA out to virtual memory (swap). So we can try to lock the SGA in physical memory so that it is not paged out to virtual memory, reducing page-ins and page-outs and thereby improving performance. Unfortunately, Windows cannot avoid this situation. The following describes how to implement lock_sga on different systems.

AIX 5L (AIX 4.3.3 and above):
Log on to AIX as root:
cd /usr/samples/kernel
./vmtune (check the output; v_pinshm should be 1)
./vmtune -s 1
Then, as the oracle user, set lock_sga = true in initSID.ora and restart the database.

HP-UX:
Log on as root and create the file /etc/privgroup:
vi /etc/privgroup
Add the line: dba MLOCK
As root, run the command:
/etc/setprivgrp -f /etc/privgroup
Then the oracle user sets lock_sga = true in initSID.ora and restarts the database.

Solaris (Solaris 2.6 and above): The 8i database on Solaris uses the hidden parameter use_ism = true by default, automatically locking the SGA in memory (intimate shared memory) without needing to set lock_sga. If lock_sga = true is set and the database is started by a non-root user, an error will be returned.

Windows: lock_sga = true cannot be set on Windows, but pre_page_sga = true can be used instead, which touches all SGA memory pages when the database starts and can play a similar role.

2. Application optimization

Below we discuss database optimization from a technical point of view. A person tuning an Oracle system is usually not very familiar with the application, sometimes not at all, let alone with the application code. In fact, whether a system runs fast or slow, as everyone understands, depends first on the design of the database, then on the design of the application and the writing of the SQL statements, and finally on adjustments to the database itself, hardware, network problems, and so on. So when we do not understand a system, optimizing the database application is not an easy thing. What, then, should our first step be?

There are usually two types of method. One, commonly used, is to diagnose the system's bottleneck with Statspack; Statspack gives information on almost every aspect of Oracle. The other is to trace a session: if a session runs very slowly or a user's query is very slow, we can diagnose what is slow by tracing the session and seeing what the execution plan is. From the session's process number or thread number we can find the generated trace file in user_dump_dest. Using tkprof we can see a lot of statistics, including the execution plan and the CPU consumed by the parse/execute/fetch steps. Usually we observe the consistent gets in query mode, first to see whether the SQL uses an index, then to see whether the execution plan is normal and whether there is room for adjustment. Of course, if you have never actually done this, these contents will seem very abstract. This is the diagnosis and adjustment process for a specific session without understanding the application and its code.

The trace-session approach is a bottom-up method, starting from the SQL; Statspack is a top-down method, first diagnosing where the database bottleneck is and then making adjustments starting from that bottleneck, which can be called a method that starts from wait events.

2.1 Using Statspack

Statspack is a performance diagnosis tool, first released in Oracle 8.1.6 and enhanced in version 8.1.7. Besides finding performance problems within the instance, Statspack can also find high-load SQL statements in the application; with it, it is easy to determine the bottleneck of an Oracle database and to record database performance status. The Statspack scripts are located in the $ORACLE_HOME/rdbms/admin directory: for Oracle 8.1.6 they are a set of stat* scripts; for Oracle 8.1.7, a set of sp* scripts.

Before Statspack was released, the tools we could usually use to diagnose a database were the two scripts utlbstat.sql and utlestat.sql. BSTAT/ESTAT is a very simple performance diagnosis tool: utlbstat takes a snapshot of many v$ views at the beginning, and utlestat generates a report from the previous snapshot and the current views. The report is in effect equivalent to two sampling points in Statspack. Through continuous sampling, Statspack can provide us with vital trend analysis data; this is a huge step forward. In environments where Statspack can be used, we try not to use BSTAT/ESTAT to diagnose database problems.

2.1.1 Installing Statspack

§ Step 1: To install and run Statspack, the following two system parameters first need to be set.

1. job_queue_processes. To create the automatic task that performs data collection, this must be greater than 0. You can modify this parameter in the initialization parameter file (so that it takes effect after a restart), or modify it dynamically at the system level (taking effect for new connections):

SQL> ALTER SYSTEM SET job_queue_processes = 6;

System altered

In Oracle 9i, you can specify a scope such as BOTH, so that the modification remains valid both now and after a restart (when you use an spfile; if you still use a pfile in 9i, the change works the same as in 8i):

SQL> ALTER SYSTEM SET job_queue_processes = 6 SCOPE = BOTH;

System altered

2. timed_statistics. This collects timing information from the operating system, which can be used for time-based statistics to optimize the database and SQL statements. To avoid the overhead of requesting time from the operating system, this value can be set to false. When collecting statistics with Statspack, it is recommended to set this value to true, otherwise the resulting statistics can only play about 10% of their role; and the performance impact of setting timed_statistics to true is negligible. This parameter makes time information available in dynamic performance views such as v$sesstat and v$sysstat. timed_statistics can also be changed dynamically at the instance or session level.

SQL> ALTER SYSTEM SET timed_statistics = true;

System altered

If you are worried about the impact of timed_statistics on performance, you can set this parameter to true just before sampling with Statspack and change it back to false afterwards.

§ Step 2: You need to create a separate tablespace to store the Statspack data. If the sampling interval is short and the collection period long, plan for a large tablespace: if a sample is taken every hour and sampling continues for a week, the amount of data is very large. The following example creates a 500M test tablespace. Note: the tablespace created here must not be too small; if it is too small, object creation will fail during installation. It is recommended to create a tablespace of at least 100M.

SQL> CREATE TABLESPACE perfstat
  2  DATAFILE '/oracle/oradata/res/perfstat.dbf'
  3  SIZE 500M;

Tablespace created.

§ Step 3: In SQL*Plus, log in as INTERNAL, or as a user with the SYSDBA privilege (connect / as sysdba). Note: in Oracle 9i there is no INTERNAL user; you can use the SYS user with the SYSDBA identity. Change to the $ORACLE_HOME/rdbms/admin directory and check that the installation scripts exist, so that we can run them:

$ cd $ORACLE_HOME/rdbms/admin
$ ls -l sp*.sql
-rw-r--r--   1 oracle   other       1774 Feb 18  2000 spauto.sql
-rw-r--r--   1 oracle   other      62545 Jun 15  2000 spcpkg.sql
-rw-r--r--   1 oracle   other      31193 Jun 15  2000 spctab.sql
-rw-r--r--   1 oracle   other       6414 Jun 15  2000 spcusr.sql
-rw-r--r--   1 oracle   other        758 Jun 15  2000 spdrop.sql
-rw-r--r--   1 oracle   other       3615 Jun 15  2000 spdtab.sql
-rw-r--r--   1 oracle   other       1274 Jun 15  2000 spdusr.sql
-rw-r--r--   1 oracle   other       6760 Jun 15  2000 sppurge.sql
-rw-r--r--   1 oracle   other      71034 Jul 12  2000 spreport.sql
-rw-r--r--   1 oracle   other       2191 Jun 15  2000 sptrunc.sql
-rw-r--r--   1 oracle   other      30133 Jun 15  2000 spup816.sql
$

Next we can start installing Statspack. In Oracle 8.1.6, run statscre.sql; in Oracle 8.1.7, run spcreate.sql. You will be prompted for the default tablespace and the temporary tablespace: enter the perfstat tablespace we created and your temporary tablespace. The installation script automatically creates the PERFSTAT user.

$ sqlplus

SQL*Plus: Release 8.1.7.0.0 - Production on Sat Jul 26 16:27:31 2003

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Enter user-name: internal

Connected to:
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

SQL> @spcreate
... Installing Required Packages

Package created.

Grant succeeded.

View created.

Package body created.

Package created.

Synonym dropped.

Synonym created.

......

Specify PERFSTAT user's default tablespace
Enter value for default_tablespace: perfstat
Using perfstat for the default tablespace

User altered.

User altered.

Specify PERFSTAT user's temporary tablespace
Enter value for temporary_tablespace: temp
Using temp for the temporary tablespace

User altered.

NOTE: SPCUSR Complete. Please check spcusr.lis for any errors.

......

If the installation is successful, you will then see the following output information:

... Creating Package STATSPACK...

Package created.

No errors.
Creating Package Body STATSPACK...

Package body created.

No Errors.

NOTE: SPCPKG Complete. Please check spcpkg.lis for any errors.

You can check the .lis files to see whether any errors occurred during installation.

§ Step 4: If an error occurs during the installation, you can run the spdrop.sql script to drop the objects created by the installation scripts, then rerun spcreate.sql to create them again.

SQL> @spdrop
Dropping old versions (if any)

Synonym dropped.

Sequence dropped.

Synonym dropped.

Table dropped.

Synonym dropped.

View dropped.

......

NOTE: SPDUSR complete. Please check spdusr.lis for any errors.

(The above installation output was obtained on the HP-UX 11.11, Oracle 8.1.7 platform.)

2.1.2 Testing Statspack

Running statspack.snap generates a system snapshot. Run it twice, then execute spreport.sql to generate a report based on the two points in time. If everything is normal, the installation was successful.

SQL> EXECUTE statspack.snap

PL/SQL procedure successfully completed.

SQL> EXECUTE statspack.snap

PL/SQL procedure successfully completed.

SQL> @spreport.sql

However, you may get the following error:

SQL> exec statspack.snap;
BEGIN statspack.snap; END;

*
ERROR at line 1:
ORA-01401: inserted value too large for column
ORA-06512: at "PERFSTAT.STATSPACK", line 978
ORA-06512: at "PERFSTAT.STATSPACK", line 1612
ORA-06512: at "PERFSTAT.STATSPACK", line 71
ORA-06512: at line 1

This is an Oracle bug, bug number 1940915, corrected from 8.1.7.3 on. The problem appears only with multibyte character sets. You need to modify the spcpkg.sql script ($ORACLE_HOME/rdbms/admin/spcpkg.sql), changing "substr" to "substrb", and then rerun the script. The affected part of the script reads:

select l_snap_id, p_dbid, p_instance_number, substr(sql_text, 1, 31) ...

substr treats one multibyte character as one character, while substrb counts bytes. When collecting data, Statspack stores the first 31 bytes of the text of the top 10 SQL statements; if there are multibyte characters (such as Chinese) among the first 31 characters of the SQL, this error occurs. Note: running spcpkg.sql also requires logging in to SQL*Plus as INTERNAL.

2.1.3 Generating a Statspack report

Calling spreport.sql generates an analysis report. When spreport.sql is called, the system first lists the snapshots, then asks you to choose the begin snapshot ID (begin_snap) and end snapshot ID (end_snap) for the report. To generate a report, we need at least two samples:
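A quick illustration of the difference, assuming a database character set such as ZHS16GBK in which a Chinese character occupies two bytes (the literal below is purely illustrative):

SQL> SELECT SUBSTR('中文AB', 1, 3) s, SUBSTRB('中文AB', 1, 3) b FROM dual;

SUBSTR returns the first 3 characters ('中文A', which occupy 5 bytes here), while SUBSTRB returns at most 3 bytes. Since the Statspack column is defined in bytes (31 of them), only substrb guarantees the inserted value fits.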

SQL> @spreport

   DB Id    DB Name      Instance Inst Num
----------- ------------ -------- --------
 2749170756 RES          RES             1

Completed Snapshots

                                                  Snap
Instance     DB Name      Snap Id   Snap Started  Level Comment
------------ ------------ ------- ---------------- ----- ----------
RES          RES                1 26 Jul 2003 16:36    5
                                2 26 Jul 2003 16:37    5
                                3 26 Jul 2003 17:03    5

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 2
Begin Snapshot Id specified: 2

Enter value for end_snap: 3
End Snapshot Id specified: 3

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is sp_2_3. To use this name, press <return>

to continue, otherwise enter an alternative.

Enter value for report_name: rep0726.txt

......

End of report

While spreport.sql runs to generate the Statspack report, the user is prompted for input in three places: 1. the begin snapshot ID; 2. the end snapshot ID; 3. the file name of the output report, whose default is sp_<begin_snap_id>_<end_snap_id>.

Here the begin snapshot ID entered was 2, the end snapshot ID was 3, and the file name of the output report was rep0726.txt.

Each successful run of statspack.snap generates a snapshot; you can see its snap ID and run time when generating a Statspack report. Running statspack.snap is sampling, and the Statspack report analyzes the situation between two sampling points.
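Sampling can also be automated. The spauto.sql script listed earlier schedules an hourly snapshot via dbms_job; a minimal sketch of the same idea (simplified relative to the actual script) is:

SQL> VARIABLE jobno NUMBER
SQL> BEGIN
  2    DBMS_JOB.SUBMIT(:jobno, 'statspack.snap;',
  3                    TRUNC(SYSDATE + 1/24, 'HH'),
  4                    'TRUNC(SYSDATE + 1/24, ''HH'')');
  5    COMMIT;
  6  END;
  7  /

This is why job_queue_processes had to be greater than 0 in step 1 of the installation.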

2.1.4 Deleting historical snapshot data

Each earlier successful run of statspack.snap generated a snapshot, stored in the perfstat.stats$snapshot table; that table is queried when a Statspack report is generated, for the user to choose snapshots from. As statspack.snap is run more and more often, the data in the table grows, and the historical data will affect normal operation, so the historical snapshot data needs to be cleaned up. Deleting the corresponding rows in the stats$snapshot table causes the data in the other tables to be deleted accordingly:

SQL> SELECT MAX(snap_id) FROM stats$snapshot;

MAX(SNAP_ID)
------------
         166

SQL> DELETE FROM stats$snapshot WHERE snap_id <= 166;

143 rows deleted

You can change the range of snap_id to keep the data you need. During the above deletion, you can see that all the related tables are locked:

SQL> SELECT a.object_id, a.oracle_username, b.object_name
  2  FROM v$locked_object a, dba_objects b
  3  WHERE a.object_id = b.object_id;

 OBJECT_ID ORACLE_USERNAME OBJECT_NAME
---------- --------------- ------------------------------
       156 PERFSTAT        SNAP$
     39700 PERFSTAT        STATS$LIBRARYCACHE
     39706 PERFSTAT        STATS$ROLLSTAT
     39712 PERFSTAT        STATS$SGA
     39754 PERFSTAT        STATS$PARAMETER
     39745 PERFSTAT        STATS$SQL_STATISTICS
     39739 PERFSTAT        STATS$SQL_SUMMARY
     39736 PERFSTAT        STATS$ENQUEUESTAT
     39733 PERFSTAT        STATS$WAITSTAT
     39730 PERFSTAT        STATS$BG_EVENT_SUMMARY
     39724 PERFSTAT        STATS$SYSTEM_EVENT
     39718 PERFSTAT        STATS$SYSSTAT
     39715 PERFSTAT        STATS$SGASTAT
     39709 PERFSTAT        STATS$ROWCACHE_SUMMARY
     39703 PERFSTAT        STATS$BUFFER_POOL_STATISTICS
     39697 PERFSTAT        STATS$LATCH_MISSES_SUMMARY
     39679 PERFSTAT        STATS$SNAPSHOT
     39682 PERFSTAT        STATS$FILESTATXS
     39688 PERFSTAT        STATS$LATCH
       174 PERFSTAT        JOB$

20 rows selected

Oracle also provides a script that truncates these statistics tables: sptrunc.sql (8i, 9i). Its contents are as follows, from which you can see all the system tables related to Statspack:

truncate table STATS$FILESTATXS;
truncate table STATS$LATCH;
truncate table STATS$LATCH_CHILDREN;
truncate table STATS$LATCH_MISSES_SUMMARY;
truncate table STATS$LATCH_PARENT;
truncate table STATS$LIBRARYCACHE;
truncate table STATS$BUFFER_POOL_STATISTICS;
truncate table STATS$ROLLSTAT;
truncate table STATS$ROWCACHE_SUMMARY;
truncate table STATS$SGA;
truncate table STATS$SGASTAT;
truncate table STATS$SYSSTAT;
truncate table STATS$SESSTAT;
truncate table STATS$SYSTEM_EVENT;
truncate table STATS$SESSION_EVENT;
truncate table STATS$BG_EVENT_SUMMARY;
truncate table STATS$WAITSTAT;
truncate table STATS$ENQUEUESTAT;
truncate table STATS$SQL_SUMMARY;
truncate table STATS$SQL_STATISTICS;
truncate table STATS$SQLTEXT;
truncate table STATS$PARAMETER;
delete from STATS$SNAPSHOT;
delete from STATS$DATABASE_INSTANCE;
commit;

2.1.5 Some important scripts

1. Exporting and sharing data. When diagnosing system problems, you may need to provide the raw data to a specialist; in that case we can export the Statspack table data using a parameter file spuexp.par with the following contents:

file=spuexp.dmp log=spuexp.log compress=y grants=y indexes=y rows=y constraints=y owner=PERFSTAT consistent=y

We can then export as follows:

exp userid=perfstat/my_perfstat_password parfile=spuexp.par

2. Deleting data. spdrop.sql mainly calls two scripts when executed: spdtab.sql and spdusr.sql. The former drops the tables, synonyms and other objects; the latter drops the user.

3. New scripts in Oracle 9.2.

1) Upgrade scripts, for upgrading Statspack objects; these scripts need to be run with SYSDBA privileges, and you should back up the existing schema data before upgrading:
spup90.sql: used to upgrade from version 9.0 to 9.2;
spup817.sql: if you upgrade from Statspack 8.1.7, run this script;
spup816.sql: if you upgrade from Statspack 8.1.6, run this script first, then run spup817.sql.

2) sprepsql.sql: used to generate a SQL report for a given SQL hash value.

2.1.6 Adjusting Statspack's collection thresholds

Statspack has two types of collection options:

1. Level: controls the type of data Statspack collects. There are three snapshot levels; the default is 5.

a. Level 0: general performance statistics, including wait events, system events, system statistics, rollback segments, rows, SGA, sessions, locks, buffer pool statistics, and so on.
b. Level 5: adds SQL statements. Besides all the contents of level 0, the collection of SQL statements is included; the results are recorded in stats$sql_summary.
c. Level 10: adds child latch statistics. Includes all the contents of level 5, and additionally stores child latches in stats$latch_children. Be cautious when using this level; it is recommended only under the guidance of Oracle Support.

You can modify the default level setting through the Statspack package:

SQL> EXECUTE statspack.snap(i_snap_level => 0, i_modify_parameter => 'true');

With such a setting, subsequent collections will be at level 0. If you only want to change the collection level for this one snapshot, omit the i_modify_parameter parameter:

SQL> EXECUTE statspack.snap(i_snap_level => 10);

2. Snapshot thresholds: set thresholds for the collected data. The snapshot thresholds apply only to the SQL statements captured in the stats$sql_summary table. Because every snapshot collects a lot of data, with each row representing one SQL statement in the database, stats$sql_summary quickly becomes the largest table in Statspack. The thresholds are stored in the stats$statspack_parameter table. The various thresholds are:

a. executions_th: the number of times the SQL statement was executed (default 100).
b. disk_reads_th: the number of disk reads performed by the SQL statement (default 1000).
c. parse_calls_th: the number of parse calls of the SQL statement (default 1000).
d. buffer_gets_th: the number of buffer gets of the SQL statement (default 10000).

A SQL statement is captured when any one of the above thresholds is exceeded. We can change the default threshold values by calling the statspack.modify_statspack_parameter procedure, for example:

SQL> EXECUTE statspack.modify_statspack_parameter(i_buffer_gets_th => 100000, i_disk_reads_th => 100000);

2.2 Analyzing a Statspack report

From the description above, generating a Statspack report is relatively simple; reading one is not so easy. It requires a full understanding of the Oracle architecture, the memory structure, wait events, and the application system, plus constant practice; only then can you basically understand a Statspack report and find directions for tuning Oracle in it. Below we take an actual Statspack report and go through it roughly.

2.2.1 Basic information analysis

DB Name      DB Id      Instance Inst Num Release   OPS Host
------------ ---------- -------- -------- --------- --- ----------
RES          2749170756 RES             1 8.1.7.0.0 NO  RES

              Snap Id     Snap Time      Sessions
              ------- ------------------ --------
 Begin Snap:        2 26-Jul-03 16:37:08       38
   End Snap:        3 26-Jul-03 17:03:23       38
    Elapsed:               26.25 (mins)

The Statspack report first describes the basic situation of the database, such as the database name, instance name, instance number, Oracle version and so on; then come the begin and end snapshots of the report, including their snap IDs and snap times;

Cache Sizes
~~~~~~~~~~~
db_block_buffers:     61440
log_buffer:            8192
shared_pool_size:  52428800

then several important parameters of the Oracle memory structure are listed.

2.2.2 Memory information analysis

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:              4,834.87             11,116.67
              Logical reads:                405.53                932.43
              Block changes:                 60.03                138.02
             Physical reads:                138.63                318.75
            Physical writes:                 54.27                124.79
                 User calls:                 62.69                144.13
                     Parses:                 19.14                 44.00
                Hard parses:                  2.26                  5.20
                      Sorts:                  1.83                  4.20
                     Logons:                  0.21                  0.47
                   Executes:                 21.10                 48.50
               Transactions:                  0.43

  % Blocks changed per Read:   14.80    Recursive Call %:   34.45
 Rollback per transaction %:    0.00       Rows per Sort:   20.57

Redo size: the amount of redo log generated, given per second and per transaction. In a busy system, log generation may reach hundreds of KB per second or even more;

Logical reads: logical reads, in effect logical IO, equal to buffer gets. We can think of it this way: a block sits in memory, and every read of it from memory counts as one logical read.

Parses and Hard parses: parsing, and especially hard parsing, is where problems arise most easily; 80% of system problems are caused by this. Parses are divided into soft parses and hard parses. When a SQL statement comes in, Oracle looks for an identical SQL in the shared pool: if it is found, that is a soft parse; if not, a hard parse begins. A hard parse mainly checks all the objects and relations involved in the SQL and their validity, then generates an execution plan according to the rule/cost mode, and only then executes the SQL. The root cause of excessive hard parsing is basically the failure to use bind variables; not using bind variables violates the design principle of Oracle's shared pool, namely sharing, and causes the hit ratio in shared_pool_size to drop. Therefore, not using bind variables leads to CPU usage problems and a sharp drop in performance. In addition, to maintain the internal structures, latches must be used; a latch is a low-level Oracle structure for protecting memory resources, a lock with a short life cycle, and a large number of latches consumes a lot of CPU resources.
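As a quick check outside the report, the same counters can be read from v$sysstat (the statistic names are as in 8i/9i):

SQL> SELECT name, value FROM v$sysstat
  2  WHERE name IN ('parse count (total)', 'parse count (hard)');

A hard-parse count that grows almost as fast as the total parse count is a strong hint that bind variables are not being used.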

Sorts: indicates the number of sorts;

Executes: indicates the number of executions;

Transactions: Represents the number of transactions;

Rollback per transaction %: the rollback rate of transactions in the database. Unless the business itself requires it, it should usually be less than 10%; a rollback is a very resource-consuming operation.

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  99.98
               Buffer Hit %:  65.82    In-memory Sort %:  99.65
              Library Hit %:  91.32       Soft Parse %:  88.18
         Execute to Parse %:   9.28        Latch Hit %:  99.99
Parse CPU to Parse Elapsd %:  94.61    % Non-Parse CPU:  99.90

Buffer Hit %: the data buffer hit ratio, which should usually be greater than 90%;

Library Hit %: the library cache hit ratio, which should usually be greater than 98%;

In-memory Sort %: the proportion of sorts done in memory; if this ratio is too small, you can consider increasing sort_area_size so that sorting is performed in memory rather than in the TEMP tablespace;

Soft Parse %: the percentage of soft parses, which should be as large as possible, since we want to minimize hard parses. Soft parse percentage = soft / (soft + hard);
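Using the Load Profile above as a worked check: hard parses run at 2.26/s out of 19.14/s total parses, so soft parse % = (19.14 - 2.26) / 19.14 ≈ 88.2%, matching the 88.18 reported.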

Execute to Parse %: this number should also be as large as possible, the closer to 100% the better. In some reports this value is negative, which looks very strange. In fact it indicates a problem: SQL may have been aged out after being parsed but before being executed, or ALTER SYSTEM FLUSH SHARED_POOL was executed.

Shared Pool Statistics        Begin   End
                              ------  ------
             Memory Usage %:   90.63   87.19
    % SQL with executions>1:   71.53   75.39
  % Memory for SQL w/exec>1:   59.45   65.17

% SQL with executions>1: the proportion of SQL executed more than once, which should be as large as possible; a small value means that much SQL is executed only once, i.e. bind variables are not used;

2.2.3 Wait event analysis

Next, the Statspack report describes wait events, a relatively complicated subject in Oracle. Wait events are an important basis and metric for measuring the health of an Oracle system. The concept of wait events was introduced in Oracle 7.0.12, with roughly 100 wait events. In Oracle 8.0 this number increased to about 150; there are about 200 events in Oracle 8i and about 360 wait events in Oracle 9i. There are two main types of wait event, namely idle wait events and non-idle wait events.

Idle events are those where Oracle is waiting for some work to do; when diagnosing and optimizing the database we do not have to pay too much attention to this part of the events. Common idle events include:

dispatcher timer
lock element cleanup
Null event
parallel query dequeue wait
parallel query idle wait - Slaves
pipe get
PL/SQL lock timer
pmon timer
rdbms ipc message
slave wait
smon timer
SQL*Net break/reset to client
SQL*Net message from client
SQL*Net message to client
SQL*Net more data to client
virtual circuit status
client message

Non-idle wait events are specific to Oracle activity, referring to waits that occur while the database or application is running; these should be the focus of attention and study when adjusting the database. Some common non-idle wait events are:

db file scattered read
db file sequential read
buffer busy waits
free buffer waits
enqueue
latch free
log file parallel write
log file sync

Some wait events in Statspack are described below.

Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                       Wait     % Total
Event                                    Waits      Time (cs)   Wt Time
----------------------------------- ------------ ------------ -------
db file scattered read                    26,877       12,850    52.94
db file parallel write                       472        3,674    15.13
log file parallel write                      975        1,560     6.43
direct path write                          1,571        1,543     6.36
control file parallel write                  652        1,290     5.31
          -------------------------------------------------------------

db file scattered read: This wait event is very common and often appears in the Top 5. It means that more than one block was read at a time from disk, and the blocks were scattered into discontinuous buffers in memory, precisely because more than one block is read per IO. In general we can regard these as full-table-scan style reads. If this number is too large, it indicates that tables cannot find indexes, or that only limited indexes can be used: there may be too many full table scans, and you need to check whether the SQL uses indexes reasonably or whether reasonable indexes need to be created. When data read by full scans is placed in memory, the blocks rarely end up in contiguous buffers but are scattered throughout the buffer cache. Although full table scans may be more efficient under specific conditions, it is best to check whether each full scan is necessary and whether the large-scale reads it causes can be reduced by building suitable indexes. For small tables that are used often, try to keep them cached in memory and avoid unnecessary aging out and re-reading.
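To see which sessions are currently suffering from this event, here is a hedged sketch against v$session_event (a standard dynamic view; on 8i the times are in centiseconds when timed_statistics is true):

SQL> SELECT sid, total_waits, time_waited
  2  FROM v$session_event
  3  WHERE event = 'db file scattered read'
  4  ORDER BY time_waited DESC;

The sessions at the top of this list can then be traced individually, as described at the start of section 2.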

db file sequential read: A single-block read (usually an index read), meaning the block read from disk is placed into one contiguous buffer. In most cases it represents single-block reads, which can be taken to mean IO through an index. With a full scan, several blocks are read in one IO and the chance of placing them into contiguous buffers is small, so large numbers of single-block reads of scattered records produce this event. Consider reading 100 records via an index: in the worst case, going back to the table for each value could theoretically mean up to 100 buffer gets; whereas with a full table scan, the 100 rows may mostly sit inside a few blocks, each read almost only once, hence the big difference. Large numbers of this wait often indicate a poor table join order or unselective indexes. For a high-transaction, well-tuned system this value is mostly normal, but in some cases it may imply a problem in the system. You should link this wait statistic to other known issues in the Statspack report (such as inefficient SQL). Check the index scans to ensure each is necessary, and check the join order of multi-table joins. db_cache_size is also a determining factor in the frequency of these waits. Problematic hash-area joins should appear in the PGA memory, but they also consume a large amount of memory and can thus cause a large number of waits during sorting; they may also appear in the form of direct path read/write waits.

free buffer waits: Waiting for a buffer to be freed. This wait indicates that the system is waiting for a buffer in memory because no buffer is available. If all SQL has been tuned, this wait may indicate that you need to increase db_buffer_cache. Free buffer waits may also indicate that unselective SQL has flooded the buffer cache with data and index blocks, leaving no buffer available for the particular statement waiting to be processed. This usually indicates a considerable amount of DML (insert/update/delete) and may mean that DBWR cannot write fast enough; the buffer cache may be filled with multiple versions of the same buffer, resulting in very low efficiency. To solve this problem, you may consider adding checkpoints, using more DBWR processes, or increasing the number of physical disks.

buffer busy waits: A buffer is busy. This wait event means a buffer is being waited for in unshareable mode, or is currently being read into the buffer cache; that is, when a process wants to get or operate on a block, it finds the block already in use and has to wait. In general, buffer busy waits should not be greater than 1%. Check the buffer wait statistics section (or v$waitstat) to see what class the waits are on. If they are on segment headers, consider increasing the freelists (for Oracle 8i DMT) or the freelist groups. The syntax to modify this is:

SQL> ALTER TABLE sp_item STORAGE (FREELISTS 2);

Table altered.

For Oracle 8i, increasing the freelists parameter can significantly relieve such waits. If you use LMT, i.e. locally managed tablespaces, segment management is relatively simple; you can also consider modifying the pctused/pctfree values of the data blocks, for example increasing pctfree to spread the data distribution and reduce competition for hot blocks to a certain extent. If the wait is on undo headers, the buffer wait can be resolved by adding rollback segments. If the wait is on undo blocks, we may need to check the related applications and appropriately reduce large-scale consistent reads, reduce the data density in the tables read consistently, or increase db_cache_size. If the wait is on data blocks, you can consider moving frequently accessed tables or data to other data blocks or into a wider distribution (increase pctfree, expand the data distribution, reduce competition) to avoid such "hotspot" data blocks, or consider adding freelists on the table, or using locally managed tablespaces. If the wait is on index blocks, you should consider rebuilding the index, partitioning the index, or using a reverse key index; a sketch follows below. In many cases a reverse key index can greatly relieve the competition; its principle is somewhat similar to the effect of hash partitioning. A reverse key index is often built on columns with monotonically increasing values, such as values generated by a sequence.
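A minimal sketch of the reverse key index just mentioned (the table, column and index names are hypothetical):

SQL> CREATE INDEX idx_order_id ON orders (order_id) REVERSE;

Because the key bytes are stored reversed, consecutive sequence values land in different leaf blocks, spreading insert activity that would otherwise pile onto the rightmost block of the index.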

To prevent buffer busy waits related to data blocks, a smaller block size can also be used: a single block then holds fewer rows, so the block is not so "busy"; or pctfree can be increased so that the data expands physically, reducing hot spots among the records. When executing DML (insert/update/delete), Oracle writes transaction information into the data block; for tables with many concurrent transactions, competition and waits on the ITL may occur. To reduce this wait, increase initrans to create more ITL slots. The following is the buffer wait information from v$waitstat on a production system:

SQL> SELECT * FROM v$waitstat WHERE count <> 0 OR time <> 0;

CLASS                   COUNT       TIME
------------------ ---------- ----------
data block                453       6686
undo header               391       1126
undo block                172          3

latch free: A latch is a low-level queuing mechanism used to protect shared memory structures in the SGA. A latch is like a memory lock that is acquired and released very quickly; it prevents a shared memory structure from being accessed by multiple users at the same time. If a latch is not available, a latch free miss is recorded. There are two latch-related types: willing-to-wait and immediate. If a process attempts to acquire a latch in immediate mode while the latch is already held by another process, and the latch cannot be obtained immediately, then the process does not wait for the latch; it continues with another operation. Most latch problems are related to the following: failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache competition issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chains). Usually we say that if you want to design a failing system, not using bind variables is a sufficient condition; the consequences of not binding variables are extremely severe for high-concurrency OLTP systems. There are also some latch waits related to bugs; pay attention to Metalink's announcements of related bugs and the release of patches. This issue should be studied when the latch miss ratio is greater than 0.5%.

Oracle's latch mechanism is competitive; its handling is similar to CSMA/CD in networking: all user processes compete for the latch. For a willing-to-wait latch, if a process does not get the latch on the first attempt, it spins and tries again; if it still has not obtained the latch after _spin_count attempts, the process goes to sleep for a specified short duration, then wakes up and repeats the previous steps in order. In 8i/9i the default is _spin_count = 2000.

If the SQL statements cannot be adjusted, Oracle provides a new initialization parameter from version 8.1.6: cursor_sharing. Setting cursor_sharing = force can force bind variables on the server side. Setting this parameter may bring certain side effects; for Java programs there are related bugs, so specific applications should check Metalink's bug announcements.
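A hedged sketch of checking latch contention directly (v$latch is a standard dynamic view; sleeps indicate processes that had to stop spinning):

SQL> SELECT name, gets, misses, sleeps
  2  FROM v$latch
  3  WHERE misses > 0
  4  ORDER BY sleeps DESC;

Latches such as library cache or cache buffers chains at the top of this list point back to the causes listed above.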

enqueue: An enqueue is a locking mechanism that protects shared resources, for example the data in a record, preventing two people from updating the same data at the same time. An enqueue includes a queuing mechanism, namely a FIFO (first in, first out) queue. Common enqueue waits include ST, HW, TX, TM, etc.

The ST enqueue is used for space management and allocation in dictionary-managed tablespaces (DMT). For versions that support LMT, consider using locally managed tablespaces; for Oracle 8i, because of related bugs, do not set the temporary tablespace to LMT; or consider pre-allocating a certain number of extents.

The HW enqueue relates to the high water mark of a segment; manually allocating appropriate extents can avoid this wait.

TX is the most common enqueue wait. A TX enqueue wait is usually the result of one of three issues. The first is duplicate values in a unique index: a commit/rollback must be performed to release the enqueue. The second is multiple updates to the same bitmap index fragment: since a single bitmap fragment may contain multiple row addresses (rowids), the wait appears when multiple users attempt to update the same fragment; the enqueue is released when one user commits or rolls back. The third, and most likely, issue is multiple users updating the same block at the same time: if there is no free ITL slot, a block-level lock occurs; this is easily avoided by increasing initrans and/or maxtrans to allow multiple ITL slots (see the sketch below), or by increasing the pctfree value on the table.

The TM enqueue is generated during DML to prevent DDL on the affected object. If you have foreign keys, be sure to index them to avoid this common locking problem.

log buffer space: This happens when log buffer entries are generated faster than LGWR can write them out, or when a log switch is too slow. To solve this problem, you can increase the size of the log files, or increase the size of the log buffer. Another possible reason is that the disk I/O is a bottleneck; you can consider using disks with faster write speed.
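A minimal sketch of the ITL remedy mentioned for TX enqueue waits (the table name is hypothetical; new blocks get 4 ITL slots):

SQL> ALTER TABLE orders INITRANS 4;

Note that ALTER TABLE ... INITRANS only affects newly formatted blocks; existing blocks keep their old setting unless the table is rebuilt (for example with ALTER TABLE ... MOVE).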

Log file switch (archiving needed): this wait event usually appears because a log file cannot be reused until its archiving has completed; the wait may also be an I/O problem on the archive destination. Workarounds: consider adding log files or log groups, moving the archive destination to fast disks, and adjusting log_archive_max_processes.

Log file switch (checkpoint incomplete): when LGWR has filled the last log group and tries to wrap back to the first log file, but the database has not yet finished writing out the dirty blocks protected by that file (i.e., its checkpoint is not complete), this wait event appears. It indicates that you have too few log groups or that the log files are too small; add log groups or increase the log file size (see the sketch below).
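A sketch of adding a redo log group; the group number, member paths and size are illustrative assumptions:

SQL> ALTER DATABASE ADD LOGFILE GROUP 4
  2  ('/u01/oradata/orcl/redo04a.log',
  3   '/u02/oradata/orcl/redo04b.log') SIZE 200M;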

Log file switch: all commit requests must wait for "log file switch (archiving needed)" or "log file switch (checkpoint incomplete)" to resolve. Make sure the archive disk is not full and is not too slow. DBWR may become very slow because of I/O; you may need to add more or larger redo logs, and if DBWR is the problem, you may need to add database writer processes.

Log file sync: when a user commits or rolls back, LGWR writes the session's redo from the redo buffer to the redo log file; the session must wait for this to succeed (Oracle guarantees that committed data is not lost by first writing its redo to the log file). If this event is too frequent, batching commits maximizes LGWR efficiency, whereas over-frequent commits activate LGWR constantly and inflate its overhead. To reduce this wait, try to commit more records at a time, place the redo logs on faster disks, or alternate the redo logs across different physical disks to reduce the impact of archiving on LGWR. For software RAID, generally avoid RAID 5, which imposes a large penalty on write-intensive systems; bypassing the file system buffer, for example with raw devices, can improve write performance. Log file single write: this event relates only to writing the log file header, usually when adding a new group member or advancing sequence numbers. The header is written separately because some of its information, such as the file number, differs per file. Header updates are done in the background; this event is rare and generally needs no attention.

Log file parallel write: writing redo from the log buffer to the redo log files; this mainly covers the routine background writes (as opposed to log file sync). If a log group has multiple members, the write is parallel when the log buffer is flushed, and this wait event may appear. Although the write is parallel, it does not finish until every I/O completes (even if your disks support asynchronous I/O or use I/O slaves, waiting on a single slow redo log member is still possible). Comparing this event with the log file sync time measures the cost of log file writes; this is often called the synchronization cost rate.
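To make that comparison instance-wide, a simple sketch against v$system_event:

SQL> SELECT event, total_waits, time_waited, average_wait
  2  FROM v$system_event
  3  WHERE event IN ('log file sync', 'log file parallel write');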

Control file parallel write: this event may appear when the server process updates all control files. If the wait is very short, it can be ignored. If it is longer, check whether the physical disks storing the control files have an I/O bottleneck. Multiple control files are identical copies, mirrored to improve safety. For production systems, the control files should be stored on different disks; three are generally sufficient, and with only two physical drives, two control files are acceptable. Keeping multiple control files on the same disk brings little benefit. To reduce this wait, consider: reducing the number of control files (while preserving safety); using asynchronous I/O if the system supports it; moving the control files to physical disks with a light I/O load.

Control file sequential read / control file single write: both events appear when there is an I/O problem against a single control file. If the wait is significant, check that control file to see whether its location has an I/O bottleneck. Use this query to see control file access status:
SELECT p1 FROM v$session_wait
WHERE event LIKE 'control file%' AND state = 'WAITING';
Solutions: move the control files to fast disks; enable asynchronous I/O if the system supports it.

Direct path write: this wait occurs while confirming that all outstanding asynchronous I/Os have been written to disk. You should find the data files with frequent I/O and tune their performance. There may also be heavy disk sorting with frequent temporary tablespace activity; consider using locally managed tablespaces split into multiple small files written to different disks or raw devices. SQL*Net message from dblink: this wait usually refers to distributed processing (SELECTs from other databases). It is generated when other databases are accessed online through database links. If the remote data is fairly static, consider moving it into a local table via snapshots or materialized views and accessing it locally as needed; performance will improve greatly.

Slave wait: a slave I/O process waiting for requests. An idle event; it generally does not indicate a problem.

2.2.4 High-load SQL analysis For a specific application or system, the best way to tune its performance is to examine the program code and the SQL statements it issues. If you take snapshots at level 5, the report shows the high-load SQL statements in the system; the details can be found in the stats$sql_summary table. By default, the snapshot level is level 5. The report divides the SQL statements into several sections, ordered descending by buffer gets, physical reads, executions, memory usage and version count.
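To dig beyond the report, the underlying table can be queried directly; a hedged sketch only, since the column layout of stats$sql_summary varies across statspack versions (in some releases the full statement text lives in stats$sqltext), and the snap_id value here is an assumption:

SQL> SELECT sql_text, buffer_gets, executions
  2  FROM stats$sql_summary
  3  WHERE snap_id = 100
  4  ORDER BY buffer_gets DESC;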

2.2.5 Other parts of the statspack report The remaining sections of the report cover instance activity stats, tablespace IO stats, buffer pool statistics, buffer wait statistics, rollback segment stats, latch activity, dictionary cache stats, library cache activity, SGA breakdown difference and init.ora parameters. They are not discussed in detail here; please refer to other, more detailed documents.

2.3 TRACE session

2.4 Cost-based optimizer internals Oracle's cost-based optimizer (CBO) is a very complex part of Oracle that determines the execution path of every SQL statement. Evaluating SQL statements and producing the best execution plan is challenging work, which makes the CBO one of Oracle's most complex software components. As everyone knows, SQL execution plans are among the most important aspects of Oracle performance tuning; to learn how to tune an Oracle database you must learn how to tune SQL, and for that you need to dig into the CBO. The CBO's choice of execution path depends on external factors, internal Oracle statistics, and how the data is distributed. We will discuss the following topics. CBO parameters: we start from the basic optimizer parameters, then learn how each one affects the optimizer's choices.

CBO statistics: here we discuss how important collecting correct statistics with ANALYZE or DBMS_STATS is to the Oracle optimizer. We will also learn how to copy optimizer statistics from one system to another, which ensures that SQL execution paths do not change between the development environment and the production database. Below we begin with the CBO optimization modes and the Oracle parameters that affect the CBO.

2.4.1 CBO parameters The CBO is governed by some important parameters; modifying them can produce dramatic changes in CBO behavior. We start with the optimizer_mode parameter, then discuss the settings of other important parameters.

In Oracle 9i the optimizer_mode parameter has four main values, which determine four optimization modes: rule, choose, all_rows and first_rows. Rule and choose belong to the now-outdated rule-based optimizer (Rule-Based Optimizer, RBO), so we focus on the two CBO modes.

The optimization mode can be set at the system level, at the session level, or for an individual SQL statement. The corresponding statements are as follows:
ALTER SYSTEM SET optimizer_mode = first_rows_10;
ALTER SESSION SET optimizer_goal = all_rows;
SELECT /*+ first_rows(100) */ * FROM student;

First we need to ask: what is the best execution plan for a SQL statement? Is it the plan that returns results fastest, or the plan that consumes the least system resources? Obviously, the answer depends on the database's workload.

For a simple example, take the following SQL statement:
SELECT customer_name
FROM customer
WHERE region = 'south'
ORDER BY customer_name;

If the best execution plan is the fastest one, indexes on the region column and the customer_name column should be used to read the matching rows from the customer table quickly, regardless of the heavy I/O caused by physically reading many non-contiguous data blocks.

Suppose this execution plan takes 0.0001 seconds from start to first result and generates 10,000 db block gets. But what if your goal is to minimize computing resources? If this SQL statement runs in a batch program, the speed of returning results may be less important, and another execution plan may cost fewer system resources: a parallel full-table scan, which does not need to re-read data blocks in index order, uses fewer system resources and less I/O. Of course, since the rows are not retrieved pre-sorted during execution, the time to return results is longer while resource consumption is lower. Suppose this execution plan takes 10 seconds from start to result and generates 5,000 db block gets.

Oracle provides several optimizer_mode settings that let you obtain whichever kind of best execution plan you want.

optimizer_mode = first_rows With this CBO mode set, SQL statements return rows as fast as possible, regardless of total elapsed time or system resources consumed. Since indexes speed up retrieval of the first rows, the first_rows mode favors index access over full-table scans. This mode generally suits OLTP systems, meeting the needs of users who want to see a small query result set in a short time. optimizer_mode = all_rows With this CBO mode set, the goal is minimal overall resource consumption, even though no rows are returned until the query completes. The all_rows mode leans toward full-table scans rather than index scans and index-assisted sorting, so it suits systems that are not sensitive to response time for the first rows: data warehouses, decision support systems and batch-oriented databases.

optimizer_mode = first_rows_n Oracle 9i enhances optimization for SQL statements whose expected result set is small, adding four parameter values: first_rows_1, first_rows_10, first_rows_100 and first_rows_1000. The CBO takes the n in first_rows_n as the expected number of rows to return; when we only need part of the result set, the CBO uses this n to decide whether to use an index scan.

optimizer_mode = rule The rule-based optimizer mode. RBO is the optimization mode of early Oracle versions. Since 1994 RBO has not supported new features such as bitmap indexes, table partitions and function-based indexes, it is no longer updated in current Oracle versions, and users are advised not to use it.

As the discussion above shows, the optimizer_mode setting is very important to the CBO: it determines the CBO's basic mode. Some other parameters also have a great impact on the CBO. Because of the CBO's importance, Oracle provides system-level parameters to adjust its global behavior, including the choice between index scans and full scans and the choice of table join method. These are briefly discussed below.

optimizer_index_cost_adj This parameter adjusts the cost algorithm for index access paths; the smaller the value, the lower the estimated cost of index access.

optimizer_index_caching This parameter tells the CBO what proportion of index blocks to assume are already in the memory buffer. Its setting affects whether the CBO chooses an index for a table join (nested loops) or a full-table scan.

db_file_multiblock_read_count When this parameter is set high, the CBO treats scattered multi-block reads as cheaper than single-block reads, making the CBO more inclined toward full-table scans.

parallel_automatic_tuning When this parameter is set to ON, full-table scans can be performed in parallel, so the CBO regards index access as relatively more expensive and leans further toward full-table scans.

hash_area_size This parameter takes effect if the pga_aggregate_target parameter is not used. Its size determines whether the CBO leans toward hash joins rather than nested loops and sort-merge joins that use indexes.

sort_area_size This parameter takes effect if the pga_aggregate_target parameter is not used. Its size affects the CBO's choice between index access and sorting the result set: the larger the value, the more likely the sort fits in memory and the more the CBO favors sorting. Since changes to these parameter values affect the execution plans of thousands of SQL statements in the system, Oracle does not recommend changing their default values.
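Because of that warning, such experiments are better confined to a session; a minimal sketch with purely illustrative values:

SQL> ALTER SESSION SET optimizer_index_cost_adj = 20;  -- make index paths look cheaper
SQL> ALTER SESSION SET optimizer_index_caching = 90;   -- assume 90% of index blocks cached
SQL> -- re-run the problem query here and compare its execution plan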

After this rough look at the CBO's parameters, we next discuss how the data provided to the CBO helps it produce good execution plans.

2.4.2 CBO statistics For the CBO, the most important thing is to define and manage your statistics. For the CBO to generate the best execution plan for your SQL statements, there must be statistics on the tables and indexes the statements touch. Only when the CBO knows related information, such as table size and the distribution, cardinality and value ranges of columns, can it judge the SQL statement correctly and arrive at the best execution plan.

Below we discuss how to obtain high-quality CBO statistics and how to create an appropriate CBO environment for your database system.

The CBO's ability to generate good execution plans comes from the validity of its statistics. The older ways to gather statistics are ANALYZE TABLE and DBMS_UTILITY; both carry some hazard for SQL performance, because we know the CBO uses object statistics to select the best execution plan for every SQL statement. The DBMS_STATS package is the preferred method for generating statistics, especially for large partitioned tables. Here is an example of using dbms_stats:
begin
  dbms_stats.gather_schema_stats(
    ownname          => 'SCOTT',
    options          => 'GATHER AUTO',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size repeat',
    degree           => 34);
end;
/

Several values of the options parameter in the example above deserve explanation. gather re-analyzes the entire schema;

gather empty analyzes only objects that currently have no statistics;

gather stale re-analyzes only objects with more than 10% changes (changes may be inserts, updates or deletes);

gather auto re-analyzes only objects with no statistics plus objects with more than 10% changes; it is equivalent to gather empty and gather stale combined.

Both gather auto and gather stale require monitoring. If you execute the ALTER TABLE xxx MONITORING command, Oracle tracks changes to the table through the DBA_TAB_MODIFICATIONS view, which records the number of INSERT, UPDATE and DELETE operations since the most recent statistics analysis.
SQL> desc dba_tab_modifications;
Name                Type
------------------- -------------
TABLE_OWNER         VARCHAR2(30)
TABLE_NAME          VARCHAR2(30)
PARTITION_NAME      VARCHAR2(30)
SUBPARTITION_NAME   VARCHAR2(30)
INSERTS             NUMBER
UPDATES             NUMBER
DELETES             NUMBER
TIMESTAMP           DATE
TRUNCATED           VARCHAR2(3)
The most interesting option is gather stale. In a frequently updated OLTP system almost all statistics go stale very quickly, and remember that gather stale re-analyzes a table only after more than 10% of it has changed. On such a system nearly every table except the read-only ones would be re-analyzed on each run, so the gather stale option is mainly useful for systems that are largely read-only.
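A sketch of turning monitoring on and inspecting the tracked changes; the table name is hypothetical, and note that change counts are flushed to the view only periodically (from 9i they can be flushed on demand with dbms_stats.flush_database_monitoring_info):

SQL> ALTER TABLE scott.emp MONITORING;
SQL> SELECT table_name, inserts, updates, deletes, timestamp
  2  FROM dba_tab_modifications
  3  WHERE table_owner = 'SCOTT';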

In the dbms_stats example above we saw the parameter estimate_percent set to dbms_stats.auto_sample_size, available from Oracle 9i onward; this value greatly simplifies statistics gathering. We know that the higher the quality of the statistics, the better the CBO's execution plans, but because of the cost of sampling, a complete statistical analysis of a large database system takes a very long time; the best approach is a balance between statistics quality and sampling cost. In early Oracle versions, the DBA had to guess the best sampling percentage. From Oracle 9i, you can pass dbms_stats.auto_sample_size as the estimate_percent value to get automatic sampling, and you can then verify the automatically chosen sample sizes through the sample_size column of the following data dictionary views:
DBA_ALL_TABLES, DBA_INDEXES, DBA_IND_PARTITIONS, DBA_IND_SUBPARTITIONS, DBA_OBJECT_TABLES, DBA_PART_COL_STATISTICS, DBA_SUBPART_COL_STATISTICS, DBA_TABLES, DBA_TAB_COLS, DBA_TAB_COLUMNS, DBA_TAB_COL_STATISTICS, DBA_TAB_PARTITIONS, DBA_TAB_SUBPARTITIONS
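A sketch for verifying the sample sizes Oracle chose, using the dba_tables view listed above (the schema name is an assumption):

SQL> SELECT table_name, num_rows, sample_size,
  2         ROUND(sample_size / num_rows * 100, 1) pct_sampled
  3  FROM dba_tables
  4  WHERE owner = 'SCOTT' AND num_rows > 0;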

With automatic sampling, Oracle chooses a value between 5% and 20% depending on the size of the table and the distribution of column values. Remember: the higher the quality of your statistics, the better the CBO's decisions.

Now that we have some understanding of CBO statistics, let us look at how to manage them in a successful Oracle system.

2.4.3 The correct environment for the CBO The key to using the CBO successfully is stability. Below are some basic practices for successful CBO use.

● Re-analyze statistics only when necessary. One of the most common mistakes Oracle DBAs make is re-analyzing the system's statistics on a regular schedule. Remember: the only purpose of re-gathering statistics is to change the execution plans of SQL statements, and if a plan is not broken, do not fix it. If you are satisfied with current SQL performance, re-analyzing may introduce major performance problems and create work for the development team. In practice, only a very small number of Oracle systems benefit from regularly re-analyzed statistics. Generally speaking, the basic architecture of a database application does not change easily; tables with large data volumes stay large, and the distribution, cardinality and value ranges of indexed columns rarely change. Only in the following cases might the statistics of the entire system need frequent re-gathering: 1. Databases used for data analysis: some scientific or test systems frequently replace the whole data set; in this case, after the database reloads a new set of data, the statistics should be re-analyzed. 2. Highly volatile databases: a very rare case in which table or index sizes swing wildly, for example a table holding 100 rows one week and 100,000 rows the next. In this case, periodic statistics analysis can also be considered.

● Force developers to tune their SQL. Many developers mistakenly believe that their task is merely to write SQL statements that fetch correct data from the database. But writing correct SQL is only half of the developer's job: a successful Oracle application requires that developers' SQL access the database in an optimal way, and that SQL execution plans remain portable between environments. Surprisingly, many Oracle shops give little consideration to the execution plans of specific SQL statements, assuming the CBO is so intelligent that it will provide the best plan no matter how the SQL is written. The same query can be expressed in different ways in SQL, and each form may get a different execution plan. Observe the following example: each query returns the same result, but the execution plans are far apart.

-- Using an (inefficient) NOT IN subquery
SELECT book_title
FROM book
WHERE book_key NOT IN (SELECT book_key FROM sales);

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=64)
1    0   FILTER
2    1     TABLE ACCESS (FULL) OF 'BOOK' (Cost=1 Card=1 Bytes=64)
3    1     TABLE ACCESS (FULL) OF 'SALES' (Cost=1 Card=5 Bytes=25)

-- Using an outer join between the two tables
SELECT book_title
FROM book b, sales s
WHERE b.book_key = s.book_key(+)
AND quantity IS NULL;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=100 Bytes=8200)
1    0   FILTER
2    1     FILTER
3    2       HASH JOIN
4    3         TABLE ACCESS (FULL) OF 'BOOK' (Cost=1 Card=20 Bytes=1280)
5    3         TABLE ACCESS (FULL) OF 'SALES' (Cost=1 Card=100 Bytes=1800)

-- Using a correct subquery
SELECT book_title
FROM book
WHERE book_title NOT IN (SELECT DISTINCT book_title
                         FROM book, sales
                         WHERE book.book_key = sales.book_key
                         AND quantity > 0);

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=59)
1    0   FILTER
2    1     TABLE ACCESS (FULL) OF 'BOOK' (Cost=1 Card=1 Bytes=59)
3    1     FILTER
4    3       NESTED LOOPS (Cost=6 Card=1 Bytes=82)
5    4         TABLE ACCESS (FULL) OF 'SALES' (Cost=1 Card=5 Bytes=90)
6    4         TABLE ACCESS (BY INDEX ROWID) OF 'BOOK' (Cost=1 Card=1)
7    6           INDEX (UNIQUE SCAN) OF 'PK_BOOK' (UNIQUE)

We see how much difference the way SQL is written makes. Wise developers know how to write SQL that produces the best execution plan, and a sensible Oracle shop actively trains its developers to write the most effective SQL.

Here are some techniques for helping developers optimize their SQL statements: 1. Use AUTOTRACE and TKPROF to analyze SQL execution plans (a sketch follows below); 2. Ensure that all SQL headed for production has been optimized in the test environment; 3. Establish a performance optimization standard, rather than just asking developers to write the fastest SQL they can; measured against such a standard, good developers should be able to write the most effective SQL statements.
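A minimal AUTOTRACE sketch for point 1, reusing the book/sales example above (a PLAN_TABLE must exist, and the PLUSTRACE role is needed for the statistics options):

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SELECT book_title FROM book
  2  WHERE book_key NOT IN (SELECT book_key FROM sales);
SQL> SET AUTOTRACE OFF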

● Manage CBO statistics carefully. A successful Oracle site manages its CBO statistics carefully, ensuring the CBO works the same way in the test and production environments. A smart DBA, having obtained high-quality CBO statistics, migrates them into the test environment so that SQL execution plans are the same in test and in production.

For the DBA, an important job is to collect and publish CBO statistics and keep the set that best matches the current operating environment. In some cases there may be more than one set of optimal statistics; for example, the statistics that are best for OLTP running may not be best for the data warehouse workload. In this case, the DBA maintains two sets of statistics and imports the appropriate set as operating conditions change. The export_system_stats procedure in the dbms_stats package exports CBO statistics. In the following example, we export the current CBO statistics to a table named stats_table_oltp:
dbms_stats.export_system_stats('stats_table_oltp');

After the export, we can copy this table to another instance. When the system's operating mode changes, the import_system_stats procedure in the dbms_stats package imports the CBO statistics:
dbms_stats.import_system_stats('stats_table_oltp');
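The statistics table itself must be created before the export; a minimal sketch of the full round trip, assuming the SCOTT schema owns the table:

SQL> exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'STATS_TABLE_OLTP');
SQL> exec dbms_stats.export_system_stats(stattab => 'STATS_TABLE_OLTP', statown => 'SCOTT');
SQL> -- copy the table to the target instance (e.g. via export/import), then:
SQL> exec dbms_stats.import_system_stats(stattab => 'STATS_TABLE_OLTP', statown => 'SCOTT');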

● Do not change CBO parameter values lightly. Changing CBO-related parameters is very dangerous, because one small change may have a large negative impact on the performance of the whole system; change these values only after strict system testing. The parameters with potentially great impact include optimizer_mode, optimizer_index_cost_adj and optimizer_index_caching. Others, such as hash_area_size and sort_area_size, are less dangerous to change and can be adjusted at the session level to help the CBO optimize a query.

● Ensure stable execution plans. A successful CBO application locks down SQL execution plans by keeping the CBO statistics stable, by using stored outlines (optimizer plan stability), or by adding hints to specific SQL statements. Remember: re-gathering the system's statistics may cause thousands of SQL statements to change their execution plans. Many Oracle shops require all SQL statements to be verified in the test environment, ensuring the plans are consistent between test and production.

2.4.4 Final thoughts on the CBO Although we have covered many details of the CBO, Oracle's continual new releases keep making the CBO more powerful and more complex, and there is still much about it to learn. Below are some suggestions for CBO tuning, as a review for DBAs.

● The DBA can control the CBO through some Oracle parameters, but should change them only under limited, well-tested circumstances;

● The CBO relies on statistics to generate optimized execution plans for SQL statements; the statistics should be gathered with the dbms_stats package;

● An important DBA task is to collect and manage CBO statistics, which can be gathered, stored and migrated between related instances to keep execution plans consistent;

● Export the existing statistics with the export_system_stats procedure before re-analyzing the system's statistics, because the re-analysis may change the plans of thousands of SQL statements and, without a backup, you cannot restore the original SQL performance. Re-analyze the entire system's statistics only when its data has changed enormously.

