Interpreting STATSPACK Report Data

xiaoxiao 2021-03-06

This article comes from the Oracle Chinese User Group (www.racle.com.cn), and I found it very helpful for learning performance tuning:

Original link: http://www.cnoug.org/viewthread.php?tid=25353

Interpreting STATSPACK report data. Contents: 1. Report header information; 2. Load profile; 3. Instance efficiency percentages; 4. Wait events; 5. Latch waits; 6. Top SQL; 7. Instance activity; 8. File I/O; 9. Memory allocation; 10. Buffer waits.

1. Report header information: database instance details, including database name, ID, version, host, and so on.

Quote:

STATSPACK report for

DB Name      DB Id       Instance  Inst Num  Release     Cluster  Host
-----------  ----------  --------  --------  ----------  -------  ------------
PORMALS      3874352951  PORMALS   1         9.2.0.4.0   NO       njlt-server1

             Snap Id   Snap Time           Sessions  Curs/Sess  Comment
             -------   ------------------  --------  ---------  -------
Begin Snap:       36   18-Jul-04 20:41:02        29       19.2
  End Snap:       37   19-Jul-04 08:18:27        24       15.7
   Elapsed:            697.42 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
    Std Block Size:   8K
  Shared Pool Size:  96M
        Log Buffer: 512K

2. Load profile. This section provides per-second and per-transaction statistics and is an important part of monitoring system throughput and load changes.
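The per-second and per-transaction columns are simple arithmetic over the snapshot interval: each statistic's delta divided by the elapsed seconds, and divided by the transaction count. A minimal Python sketch of that arithmetic — the counter names and values below are invented for illustration, not taken from this report:

```python
# Sketch: deriving Load Profile rates from two snapshots' counter values.
# All numbers here are illustrative assumptions, not taken from the report.

def load_profile(begin, end, elapsed_seconds):
    """Return (per_second, per_transaction) rates for each counter delta."""
    deltas = {k: end[k] - begin[k] for k in begin}
    # Transactions in the interval = commits + rollbacks.
    tx = deltas["user commits"] + deltas["user rollbacks"]
    per_second = {k: v / elapsed_seconds for k, v in deltas.items()}
    per_tx = {k: v / tx for k, v in deltas.items()}
    return per_second, per_tx

begin = {"redo size": 1_000_000, "user commits": 100, "user rollbacks": 0}
end   = {"redo size": 1_890_000, "user commits": 160, "user rollbacks": 0}
per_sec, per_tx = load_profile(begin, end, elapsed_seconds=600)
print(round(per_sec["redo size"], 2))  # redo bytes generated per second
print(round(per_tx["redo size"], 2))   # redo bytes per transaction
```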

Quote:

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:                148.46              3,702.15
              Logical reads:              1,267.94             31,619.12
              Block changes:                  1.01                 25.31
             Physical reads:                  4.04                100.66
            Physical writes:                  4.04                100.71
                 User calls:                 13.95                347.77
                     Parses:                  4.98                124.15
                Hard parses:                  0.02                  0.54
                      Sorts:                  1.33                 33.25
                     Logons:                  0.00                  0.02
                   Executes:                  2.46                 61.37
               Transactions:                  0.04

  % Blocks changed per Read:    0.08    Recursive Call %:     30.38
 Rollback per transaction %:    0.42       Rows per Sort:    698.23

Description:
- Redo size: redo generated per second (in bytes); indicates how heavily the database is modifying data.
- Logical reads: logical reads per second, in blocks.
- Block changes: blocks changed per second.
- Physical reads: blocks read from disk per second.
- Physical writes: blocks written to disk per second.
- User calls: user calls per second.
- Parses: parses per second, roughly the number of statements executed per second. More than 300 soft parses per second suggests the application is inefficient and not making good use of soft parsing; consider adjusting session_cached_cursors.
- Hard parses: hard parses per second.
- Sorts: sorts per second.
- Executes: executions per second.
- Transactions: transactions per second; reflects the database workload.

3. Instance efficiency percentages. This section can reveal potential Oracle performance problems in advance.

Quote:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   99.96      Library Hit  %:   99.53
               Soft Parse %:   99.57   Execute to Parse %: -102.31
                Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   81.47     % Non-Parse CPU:   96.46

Description:
- Buffer Nowait %: percentage of buffer gets that did not have to wait.
- Redo NoWait %: percentage of redo buffer allocations that did not have to wait.
- Buffer Hit %: data block hit ratio in the buffer cache; usually should be above 90%, otherwise consider increasing the buffer cache.
- In-memory Sort %: percentage of sorts done entirely in memory.
- Library Hit %: hit ratio of SQL in the shared pool; usually above 95%. If not, consider increasing the shared pool, binding variables, or adjusting parameters such as cursor_sharing.
- Soft Parse %: roughly the hit ratio of SQL in the shared area. Below 95%, consider bind variables; below 80%, SQL is probably hardly being reused.
- Execute to Parse %: ratio of executions to parses; if too low, consider setting the session_cached_cursors parameter.
- Parse CPU to Parse Elapsd %: parse CPU time divided by parse elapsed time (parse CPU plus time spent waiting for resources); the higher the better.
- % Non-Parse CPU: CPU time not spent parsing divided by total CPU time; too low indicates excessive parse time.
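The usual formulas behind several of these ratios can be sketched as follows. The counter values are invented for demonstration; the sketch also shows how Execute to Parse goes negative, as in the -102.31 above, whenever parses exceed executes:

```python
# Illustrative formulas behind the Instance Efficiency Percentages.
# Counter values are invented for demonstration, not from the report.

def pct(x):
    return round(100 * x, 2)

physical_reads, db_block_gets, consistent_gets = 40, 30_000, 70_000
parses, hard_parses, executes = 1_000, 5, 495

# Buffer hit: fraction of block gets satisfied without a disk read.
buffer_hit = pct(1 - physical_reads / (db_block_gets + consistent_gets))
# Soft parse: fraction of parses that were not hard parses.
soft_parse = pct((parses - hard_parses) / parses)
# Execute to parse: negative when statements are parsed more often than executed.
execute_to_parse = pct(1 - parses / executes)

print(buffer_hit)        # 99.96
print(soft_parse)        # 99.5
print(execute_to_parse)  # negative, like the report's -102.31
```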

Quote:

Shared Pool Statistics          Begin    End
                                ------   ------
               Memory Usage %:   33.79    57.02
    % SQL with executions > 1:   62.62    73.24
  % Memory for SQL w/exec > 1:   64.55    78.72

Shared pool statistics:
- Memory Usage %: shared pool memory usage; it should be stable between 75% and 90%. Too low wastes memory; too high means the pool may be undersized.
- % SQL with executions > 1: percentage of SQL statements executed more than once; if too low, bind variables are probably not being used.
- % Memory for SQL w/exec > 1: memory consumed by SQL executed more than once, as a fraction of all memory consumed by SQL.

4. Main wait events

Oracle wait events are an important basis for measuring Oracle performance. They fall into idle wait events and non-idle wait events. Idle wait events mean Oracle is waiting for some kind of work to do; do not pay much attention to them. Non-idle wait events are specific to Oracle activity — waits that occur while the database or the application runs — and they are what we focus on when diagnosing and tuning the database.

Common wait events:

db file scattered read: usually related to full table scans. Because the blocks of a full table scan cannot, in general, be read into contiguous buffers, they are scattered across the buffer cache. A high count may indicate missing indexes or restricted index use; it may also be normal, since a full table scan can sometimes be more efficient than an index scan. When the system shows this wait, determine whether the full table scans are necessary. Small, frequently read tables can be placed in the KEEP buffer pool to avoid reading them repeatedly.

db file sequential read: a large number of waits on single data blocks. A value that is too high usually means the table join order is very poor or non-selective indexes are used. Correlate this wait with other parts of the Statspack report (such as the resource-intensive SQL), check that the index scans are necessary, and make sure the multi-table join order is correct.

buffer busy waits: occurs when a buffer is held in an incompatible (non-shared) mode or is being read into the cache. This value should not exceed 1%; if it does, determine whether hot blocks are the cause (a reverse-key index, or a smaller block size, can help).

latch free: a latch is a low-level serialization mechanism (a more accurate name would be a mutual-exclusion mechanism) used to protect the shared memory structures of the system global area (SGA). Latches prevent concurrent access to a memory structure; if a latch is not available, a latch miss is recorded. Most latch problems are related to failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention (cache buffers LRU chain), and hot blocks in the cache (cache buffers chains). Investigate when the latch miss ratio is above 0.5%.

log buffer space: redo is generated faster than LGWR can write it to the redo files. Increase the size of the log files, increase the log buffer, or write the redo to faster disks.

log file switch: usually because archiving is not fast enough; add redo log files.

log file sync: when a user commits or rolls back, LGWR must flush the redo from the log buffer to the log file, and the user's process must wait for this to finish. To reduce this wait, commit more rows per transaction, or place the redo log files on separate, faster physical disks.
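Since idle events should be set aside before ranking waits, the filtering step can be sketched like this. The idle-event list here is abbreviated and assumed, and the wait times are invented:

```python
# Rank non-idle wait events by time waited, ignoring idle events.
# The idle-event list is abbreviated; event times are invented.

IDLE_EVENTS = {"SQL*Net message from client", "rdbms ipc message",
               "pmon timer", "smon timer", "dispatcher timer"}

waits = [
    ("db file sequential read", 2820),
    ("SQL*Net message from client", 90_000),  # idle: client think time
    ("db file scattered read", 1183),
    ("rdbms ipc message", 50_000),            # idle: background process sleep
    ("buffer busy waits", 1042),
]

# Keep only non-idle events, sorted by time waited, descending.
top = sorted((w for w in waits if w[0] not in IDLE_EVENTS),
             key=lambda w: w[1], reverse=True)
for name, secs in top:
    print(name, secs)
```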

Supplementary material:

Adjusting Statspack's collection thresholds

Statspack has two kinds of collection options: the level, which controls the type of data collected, and thresholds, which filter the data that is collected.

1. Levels. Statspack has three snapshot levels; the default is 5.
a. Level 0: general performance statistics, including wait events, system events, system statistics, rollback segments, row cache, SGA, sessions, locks, buffer pool statistics, and so on.
b. Level 5: adds SQL statements. In addition to everything in level 0, SQL statement collection is included, and the results are recorded in STATS$SQL_SUMMARY.
c. Level 10: adds child-latch statistics. Includes everything in level 5; the additional child latches are stored in STATS$LATCH_CHILDREN. Be cautious with this level; it is recommended only under the guidance of Oracle Support.

You can change the default level through the Statspack package:

SQL> execute statspack.snap(i_snap_level => 0, i_modify_parameter => 'true');

With this setting, subsequent snapshots are also collected at level 0. If you want to change the level only for this one snapshot, omit the i_modify_parameter parameter:

SQL> execute statspack.snap(i_snap_level => 10);

2. Snapshot thresholds. The thresholds apply only to the SQL statements captured in the STATS$SQL_SUMMARY table. Because each snapshot collects a lot of data — one row per SQL statement in the database — STATS$SQL_SUMMARY quickly becomes the largest table in Statspack. The thresholds are stored in the STATS$STATSPACK_PARAMETER table:
a. executions_th: the number of executions of the SQL statement (default 100).
b. disk_reads_th: the number of disk reads performed by the SQL statement (default 1000).
c. parse_calls_th: the number of parse calls for the SQL statement (default 1000).
d. buffer_gets_th: the number of buffer gets by the SQL statement (default 10000).

A statement is captured when any one of these thresholds is exceeded. The defaults can be changed by calling statspack.modify_statspack_parameter, for example:

SQL> execute statspack.modify_statspack_parameter(i_buffer_gets_th => 100000, i_disk_reads_th => 100000);
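The capture rule — a statement is recorded in STATS$SQL_SUMMARY if any one threshold is exceeded — can be sketched like this, using the default thresholds quoted above (the sample statements are invented):

```python
# Statspack SQL capture: a statement qualifies if ANY threshold is exceeded.
# Defaults as documented: executions 100, disk reads 1000,
# parse calls 1000, buffer gets 10000. Sample rows are invented.

DEFAULTS = {"executions": 100, "disk_reads": 1000,
            "parse_calls": 1000, "buffer_gets": 10_000}

def captured(stmt, thresholds=DEFAULTS):
    """True if the statement exceeds at least one threshold (OR semantics)."""
    return any(stmt[k] > t for k, t in thresholds.items())

busy  = {"executions": 5, "disk_reads": 40, "parse_calls": 5, "buffer_gets": 50_000}
quiet = {"executions": 5, "disk_reads": 40, "parse_calls": 5, "buffer_gets": 500}
print(captured(busy))   # True  (buffer_gets exceeds 10000)
print(captured(quiet))  # False (no threshold exceeded)
```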

Threshold reference (metric, alert condition, meaning):

Instance Efficiency Percentages
- Data Buffer Hit Ratio (alert if < 90): buffer cache hit ratio; usually should be above 90%, otherwise consider increasing db_block_buffers (db_cache_size in 9i).
- Buffer Nowait Ratio (alert if < 99): ratio of buffer gets that did not wait.
- Library Hit Ratio (alert if < 98): SQL hit ratio in the library cache; usually above 98%.
- In Memory Sort Ratio (no threshold set): if many sorts are done in the temporary tablespace, try increasing sort_area_size.
- Redo Nowait Ratio (alert if < 98): log writes without waiting; if too low, adjust log_buffer (increase) and _log_io_size (decrease; the default is 1/3 * log_buffer / log_block_size — set _log_io_size to a sensible value, such as 128K / log_block_size).
- Soft Parse Ratio (alert if < 90): roughly the hit ratio of SQL in the shared area; consistently high values indicate bind variables are in use. If too low, adjust the application to use bind variables, or consider cursor_sharing = force (9i adds similar options).
- Latch Hit Ratio (alert if < 99): hit ratio on internal-structure latches; should stay above 99%. A low value is usually because shared_pool_size is too large or bind variables are not used, causing many hard parses; see also the _spin_count parameter.
- Percent Non-Parse CPU (alert if < 95): CPU time not spent parsing divided by total CPU time; too low indicates excessive parse time.
- Percent Parse CPU to Parse Elapsed (alert if < 90): parse CPU time divided by parse elapsed time (parse CPU plus time waiting for resources); the higher the better.
- Execute to Parse Percent (alert if < 10): the higher this value, the more often parsed statements are re-executed; if low, consider setting session_cached_cursors > 0.
- Memory Usage Percent (alert if < 75): shared pool usage; should be stable between 75% and 90%. Too low wastes memory; too high means the pool is undersized.
- SQLs with Execution > 1 (alert if < 40): percentage of SQL executed more than once (too low may mean bind variables are not used).
- Percent of Memory for SQL with Execution > 1 (no threshold set): memory consumed by SQL executed more than once, divided by all memory consumed by SQL.

Instance Load Profile
- Redo Size/Sec (alert if > 100,000): redo generated per second (in bytes); indicates how heavy the database workload is.
- Redo Size/TX: average redo generated per transaction.
- Logical Reads/Sec: average logical reads per second, in blocks.
- Logical Reads/TX: average logical reads per transaction, in blocks.
- Block Changes/Sec (alert if > 100): blocks changed per second.
- Block Changes/TX: average blocks changed per transaction.
- Physical Reads/Sec (alert if > 100): average blocks read from disk per second.
- Physical Reads/TX: average blocks read from disk per transaction.
- Physical Writes/Sec (alert if > 50): average blocks written to disk per second.
- Physical Writes/TX: average blocks written per transaction.
- User Calls/Sec: user calls per second.
- User Calls/TX: user calls per transaction.
- Parses/Sec (alert if > 100): parses per second, roughly the number of statements executed per second.
- Parses/TX: parses per transaction.
- Hard Parses/Sec (alert if > 10): hard parses per second.
- Hard Parses/TX: hard parses per transaction.
- Sorts/Sec (alert if > 20): sorts per second.
- Sorts/TX (alert if > 5): sorts per transaction.
- Transactions/Sec: transactions per second.
- Rows/Sort: average rows per sort.
- Percent of Blocks Changed/Read: changes divided by reads; changed blocks require undo data from the rollback segments.
- Recursive Call Percent: recursive operations as a share of all operations.
- Rollback/TX Percent (alert if > 5): transaction rollback rate; rollback is very expensive.
- Executes/Sec: executions per second.
- Executes/TX: executions per transaction.
- Logons/Sec; Logons/TX.

I/O Statistics
- Tablespace I/O: shows the I/O distribution across tablespaces. If it is seriously unbalanced, reconsider the storage planning of the objects and the disk layout of the tablespaces.
- Datafile I/O: the I/O distribution across data files; if unbalanced, reconsider the storage planning of the objects.
- Table I/O: for tables with very heavy I/O, consider placing them on fast disks and spreading their I/O as much as possible.

Top SQL
- Top SQL with high buffer gets: for this class of SQL, check whether indexes are used and whether reasonable indexes exist. For a large table that must be scanned, consider the RECYCLE buffer pool; for a small table that is frequently full-scanned, consider the KEEP buffer pool. Also note that an indexed access retrieving a large proportion of a table's data (say 20%, as an example) can itself cause high buffer gets.
- Top SQL with high physical reads: this class of SQL fetches a lot of data from disk, possibly because the data buffer is too small or there are too many full table scans; check whether the indexes are reasonable and whether they are used.
- Top SQL with high execution count: this class of SQL needs focused attention. Each execution may not consume much time or space by itself, but because these statements run so frequently, any optimization of them is amplified across the system. There is also the case where programs query the DUAL table heavily to get information such as time calculations; convert such SQL into application-side functions where possible, and eliminate queries that are simply unnecessary.
- Top SQL with high shared memory: from a design perspective, this SQL is not necessarily executing now, but it may push frequently executed SQL out of the shared pool, which causes many other problems, so it also requires attention.
- Top SQL with high version count (alert if > 20): indicates SQL that is textually identical across users but not shared — for example the same text executed while sort_area_size changes — producing many child cursors.

Wait Events
- alter system set mts_dispatcher: a session that issued ALTER SYSTEM SET MTS_DISPATCHERS = ... is waiting for the dispatchers to start.

- bfile check if exists: checking whether an external BFILE exists.
- bfile check if open: checking whether an external BFILE is open.
- bfile closure: waiting to close an external BFILE.
- bfile get length: getting the size of an external BFILE.
- bfile get name object: getting the name of an external BFILE.
- bfile get path object: getting the path of an external BFILE.
- bfile internal seek
- bfile open: waiting for an external BFILE to be opened.
- bfile read: waiting for a read from an external BFILE to complete.

- buffer busy due to global cache
- buffer busy waits: a block is being read into the buffer cache, or the buffer is held by another session in an incompatible mode. This can usually be addressed in several ways: increase the data buffer, add freelists, lower PCTUSED, add rollback segments, increase INITRANS, or consider locally managed tablespaces (LMT).
- buffer deadlock: a deadlock on buffers produced because the system is running slowly.
- buffer latch: the session is waiting on a buffer hash chain latch.
- buffer latch / buffer read retry: under OPS, the buffer changed while it was being read, so the read is retried.
- cache simulator heap
- checkpoint completed: waiting for a checkpoint to complete; this usually appears when I/O problems are serious. Checkpoint-related parameters that can be adjusted include log_checkpoint_interval, log_checkpoint_timeout, db_block_max_dirty_target, and fast_start_io_target; indirectly, the log file size and the number of log groups can also be increased.

- contacting scn server or scn lock master
- control file parallel write: waiting to write to all control files; spread the control files across different disks.
- control file sequential read: reading the control file; occurs when backing up the control file, under OPS, and so on.
- control file single write: under OPS, only one session at a time is allowed to write the shared information to disk.
- conversion file read
- db file parallel read: reading blocks from the data files in parallel during recovery.
- db file parallel write: when multiple I/Os can proceed at once (multiple disks), DBWR writes in parallel and waits for the last I/O to complete.
- db file scattered read: the blocks of one multiblock read are scattered into non-contiguous buffers; usually indicates excessive full table scans. Check that the application uses indexes sensibly and that the database has reasonable indexes.
- db file sequential read: usually suggests that single-block access is fetching too large a share of the data — for example, an index scan retrieving too high a percentage of a table, a badly ordered multi-table join, or a hash join whose hash table does not fit in hash_area_size.
- db file single write: waiting while updating a data file header.
- debugger command
- DFS db file lock: under OPS, each instance holds a shared global lock on each data file; when taking a data file offline, the instance waits for the other instances to synchronize on the file.

- DFS lock handle: the session is waiting on a global lock request.
- direct path read: usually occurs with temporary tablespace sorts and parallel query.
- direct path read (lob)
- direct path write: direct-path loads (SQL*Loader direct mode, CTAS), parallel DML, and temporary tablespace sorts.
- direct path write (lob)
- dispatcher listen timer
- dispatcher shutdown
- dispatcher timer
- DLM generic wait event
- dupl. cluster key
- enqueue: enqueues are a queued (FIFO) mechanism for protecting shared resources. ST enqueue waits indicate space allocation or deallocation problems, which locally managed tablespaces avoid; TX enqueue waits arise mainly from duplicate values on a unique index, frequently updated bitmap indexes, INITRANS too small, or PCTFREE too small.
- file identify
- file open
- free buffer waits: waiting for a free buffer in the cache; the data buffer may be too small, checkpoints may be taking too long, or DBWR may be the bottleneck.
- free global transaction table entry: in a distributed database, the session waits for a global transaction slot.
- free process state object

- global cache bg acks
- global cache cr request
- global cache freelist wait
- global cache lock busy: the session waits to convert a buffer from current-shared to current-exclusive mode.
- global cache lock cleanup
- global cache lock null to s
- global cache lock null to x
- global cache lock open s
- global cache lock open x
- global cache lock s to x
- global cache multiple locks
- global cache pending ast
- global cache pending asts
- global cache retry prepare
- global cache retry request
- imm op
- inactive session
- inactive transaction branch

- index block split: while looking up a key, the session found the index block being split and waits for the split to complete.
- io done: the session is waiting for an I/O to complete.
- ksim gds request cancel
- latch activity
- latch free: a latch is a lock that protects memory structures; it has no queuing mechanism and is acquired and released quickly. Causes include programs not using bind variables, shared_pool_size set too large (such as 1 GB), LRU contention, and overheated blocks (blocks accessed too frequently).
- lgwr wait for redo copy: waiting on the redo allocation and redo copy latches; _log_simultaneous_copies can be increased, but this can easily introduce redo allocation latch contention, so be cautious.
- library cache load lock
- library cache lock
- library cache pin
- listen endpoint status
- lmon wait for lmd to inherit communication channels
- local write wait
- lock manager wait for dlmd to shutdown
- lock manager wait for remote message
- log buffer space: redo is generated faster than LGWR can empty the log buffer into the redo files; increase log_buffer in the init parameter file and place the redo log files on fast disks.
- log file parallel write: waits while LGWR writes the log files; this wait usually drives the log file sync event. Place the redo log files on fast disks.
- log file sequential read
- log file single write
- log file switch (archiving needed): at a log switch, the log groups have wrapped around but archiving of the group to be reused is not finished; usually I/O is under serious pressure. Increase the log file size, add log groups, and adjust log_archive_max_processes.
- log file switch (checkpoint incomplete): at a log switch, the groups have wrapped but the checkpoint on the group to be reused has not completed; usually a serious I/O problem. Increase the log file size and add log groups.
- log file switch (clearing log file)
- log file switch completion
- log file sync: on commit or rollback, the user process notifies LGWR to write the redo, but LGWR is busy. Possible causes are commits that are too frequent, or LGWR writes that take too long (possibly because _log_io_size is too large); _log_io_size can be tuned in conjunction with log_buffer, and the redo log files should be placed on fast disks.
- write complete waits: the user waits for a buffer to finish being written; suggests waits during writes. Adjust the rollback segments — a reasonable number and size — and place them on fast disks.

Appendix:

Statspack analysis example:

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                  Redo size:             22,007.09              2,921.10  -- a very important number: how frequently data is changed
              Logical reads:             22,890.62              3,038.38
              Block changes:                 95.88                 12.73
             Physical reads:              5,413.37                718.54
            Physical writes:                  5.67                  0.75
                 User calls:                750.85                 99.66
                     Parses:                183.20                 24.32  -- more than 300 soft parses per second suggests the application is inefficient; adjust session_cached_cursors
                Hard parses:                 20.41                  2.71  -- more than 100 per second may indicate bind variables are not used
                      Sorts:                  5.17                  0.69
                     Logons:                  0.03                  0.00
                   Executes:                185.17                 24.58
               Transactions:                  7.53

  % Blocks changed per Read:   0.42    Recursive Call %:  21.95  -- will be relatively high if there is a lot of PL/SQL
 Rollback per transaction %:   0.01       Rows per Sort: 159.13  -- watch whether the rollback rate is high; rollback is very expensive

Instance Efficiency Percentages (Target 100%)  -- this section can reveal potential Oracle performance problems in advance (important)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.71       Redo NoWait %:  100.00  -- Buffer Nowait < 99% suggests hot blocks (check X$BH and v$latch_children for cache buffers chains)
            Buffer  Hit   %:   76.54    In-memory Sort %:  100.00  -- Buffer Hit < 95% is important; consider increasing db_cache_size, though many non-selective indexes can also depress it (a lot of db file sequential read)
            Library Hit   %:   97.07        Soft Parse %:   88.86  -- Library Hit < 95%: consider a larger shared pool, bind variables, or cursor_sharing
         Execute to Parse %:    1.06         Latch Hit %:   99.76  -- Soft Parse < 95%: consider bind variables; below 80%, SQL is barely being reused
Parse CPU to Parse Elapsd %:   89.28     % Non-Parse CPU:   91.37  -- Latch Hit should stay above 99%, otherwise there is a serious performance problem, such as missing bind variables

Note: if an index on a regularly accessed column is dropped, the buffer hit ratio may drop sharply; adding an index may raise it markedly, but if the index leads Oracle away from the correct driving order of the tables it can hurt instead. If your hit ratio swings widely, review the SQL patterns.

Shared Pool Statistics          Begin    End
                                ------   ------
               Memory Usage %:   89.18    85.56  -- shared memory usage in the 70%-98% range is normal
    % SQL with executions > 1:   36.31    36.10
  % Memory for SQL w/exec > 1:   38.86    38.33

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                              % Total
Event                                    Waits    Time (s)   Ela Time
----------------------------------- ----------- ---------- ----------
CPU time                                              2,913      32.01
db file sequential read               2,142,279       2,820      30.99
db file scattered read                1,724,832       1,183      13.00
buffer busy waits                       198,624       1,042      11.44
log file sync                            22,857         915      10.06

With timed_statistics = true, events are sorted by wait time; with timed_statistics = false, by the number of waits.

Common events:
- log file sync: occurs on every commit; if this wait event affects database performance, reduce the application's commit frequency.
- db file sequential read: waits on single data blocks; a value that is too high usually means the join order is poor or non-selective indexes are used. db_cache_size also influences its frequency.
- db file scattered read: full table scan waits. A full scan reads the table's data into non-contiguous cache buffers; this usually suggests missing indexes or restricted index use (or consider adjusting optimizer_index_cost_adj). If full scans are frequent and the tables are small, place the tables in the KEEP pool; large tables being fully scanned is more characteristic of an OLAP system than OLTP.
- buffer busy waits: a buffer is held in an incompatible mode or is being read into the cache; the value should not exceed 1%. If hot blocks are the cause, consider reverse-key indexes or smaller blocks.
- latch free: often follows an application's failure to use bind variables well.
- enqueue: most likely multiple users changing the same block at the same time, or no free ITL slots; these are database block-level locks.
- log file switch: usually archiving is not fast enough; add redo logs.
- log buffer space: redo is generated faster than LGWR writes it; increase the log file size.

Tuning the top 25 buffer-get statements and the top 25 disk-read statements can yield anywhere from a 5% to a 5000% gain in system performance.

Instance Activity Stats for DB: CRMTEMP  Instance: CRMTEMP  Snaps:
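As a sanity check on the Top 5 Timed Events table above, the % Total Ela Time column can be reproduced from any one row, since each percentage is that event's time divided by the same total of timed activity:

```python
# The "% Total Ela Time" column is each event's time divided by the total
# timed activity. The total is not printed in the Top 5 list, but it can be
# recovered from any row: time / (percentage / 100).
events = [("CPU time", 2913, 32.01),
          ("db file sequential read", 2820, 30.99),
          ("db file scattered read", 1183, 13.00),
          ("buffer busy waits", 1042, 11.44),
          ("log file sync", 915, 10.06)]

total = events[0][1] / (events[0][2] / 100)   # total timed activity, seconds
for name, secs, pct in events:
    print(name, round(100 * secs / total, 2))  # matches the column, +/- rounding
```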

Please credit the original source when reposting: https://www.9cbs.com/read-114775.html
