Oracle Optimizer: Migrating to the cost-based optimizer ----- Series 1.2
3.2 Cost-based optimizer (CBO)
The CBO works by computing a cost for every candidate execution plan and choosing the plan with the lowest cost. In an execution plan, a higher cost means more resources are consumed; the lower the cost, the more efficient the query. The CBO uses the statistics and histograms stored in the data dictionary, together with any hints the user supplies and the relevant parameter settings, to arrive at each cost. It generates the permutations of all possible access paths and then selects the most suitable one. The number of permutations depends on the number of tables that appear in the query and can sometimes reach about 80,000 or more; refer to the parameters part (part two) of this series for the relevant parameter settings. The CBO may also carry out operations such as query transformation, view merging, or adding JOIN predicates. These rewrite the original statement or add new predicates, with the aim that the plan produced for the transformed statement is better than the plan for the original. Note that such transformations do not affect the data returned, only the execution path; again, refer to the related material in the parameters part of the series.
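The cost the CBO assigns to each step of a plan can be seen with EXPLAIN PLAN. The statements below are a minimal sketch: SCOTT.EMP is a hypothetical sample table, and PLAN_TABLE is assumed to have been created with the standard utlxplan.sql script; the COST column is populated only when the statement is optimized by the CBO.

EXPLAIN PLAN SET STATEMENT_ID = 'cbo_demo' FOR
  SELECT * FROM scott.emp WHERE deptno = 10;

-- COST holds the optimizer's estimated cost for each step of the plan
SELECT id, operation, options, object_name, cost
  FROM plan_table
 WHERE statement_id = 'cbo_demo'
 ORDER BY id;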
3.2.1 Statistics
Statistics provide the accurate input the CBO needs in order to work properly. They are gathered per object and stored in the data dictionary, and include information such as the number of rows, the number of distinct values in a column, and the number of blocks. The more accurate the statistics, the better the plans produced by the optimizer. A later section of this series covers how to generate this information and how best to maintain it.

Statistics can be exact or estimated. With the COMPUTE clause, all of the data in the object is analyzed, which gives the optimizer the best chance of arriving at the optimal execution plan. With the ESTIMATE clause, only a sample of the object's data is analyzed to generate the statistics. The sample size can be specified as a number of rows, as a percentage of randomly chosen rows, or optionally as a sample of blocks; for very large tables this saves a great deal of time (see the example statements at the end of this section). How good the resulting execution plan is depends on how close the estimated values are to the exact ones, so it is worth testing different sample sizes, or using different estimation levels for different kinds of tables, to get estimates that come close to exact statistics at a fraction of the cost.

Statistics are stored in the data dictionary, owned by the SYS user. The following views show the statistics collected for tables, columns, and indexes.

Tables: DBA_TABLES
NUM_ROWS - number of rows
BLOCKS - number of used blocks
EMPTY_BLOCKS - number of unused (empty) blocks
AVG_SPACE - average free space, in bytes, in the blocks allocated to the table, taking all empty and free blocks into account
CHAIN_CNT - number of chained or migrated rows
AVG_ROW_LEN - average row length, in bytes
LAST_ANALYZED - date the table was last analyzed
SAMPLE_SIZE - sample size used for ESTIMATE statistics; for COMPUTE it equals NUM_ROWS
GLOBAL_STATS - for partitioned tables: YES means the statistics were gathered for the table as a whole, NO means they were estimated from the partition statistics
USER_STATS - whether the statistics were set directly by the user
Statistics for individual partitions of a table can be found in DBA_TAB_PARTITIONS, and cluster statistics in DBA_CLUSTERS.

Columns: DBA_TAB_COLUMNS
NUM_DISTINCT - number of distinct values
LOW_VALUE - minimum value
HIGH_VALUE - maximum value
DENSITY - density of the column
NUM_NULLS - number of rows with a NULL in the column
NUM_BUCKETS - number of histogram buckets for the column; see the histogram section
SAMPLE_SIZE - sample size used for ESTIMATE statistics; for COMPUTE it equals the total number of rows
LAST_ANALYZED - date the column was last analyzed
DBA_TAB_COL_STATISTICS shows similar data; column statistics for partitions can be found in DBA_PART_COL_STATISTICS and DBA_SUBPART_COL_STATISTICS.

Indexes: DBA_INDEXES
BLEVEL - depth of the index from the root block to the leaf blocks
LEAF_BLOCKS - number of leaf blocks
DISTINCT_KEYS - number of distinct key values
AVG_LEAF_BLOCKS_PER_KEY - average number of leaf blocks per key value; for a unique index this should be 1
AVG_DATA_BLOCKS_PER_KEY - average number of data blocks per key value
CLUSTERING_FACTOR - indicates how well the table is ordered with respect to the index. If the value is close to the number of blocks in the table, the rows are stored roughly in index order, i.e. the entries in a leaf block point to rows in the same data blocks. If the value is close to the number of rows, the rows are ordered randomly with respect to the index, i.e. the entries in a leaf block point to rows scattered across many different blocks.
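As an example, exact and estimated statistics can be gathered with the ANALYZE statement. The statements below are a minimal sketch against a hypothetical SCOTT.EMP table; the sample sizes are purely illustrative.

-- exact statistics: every row in the table is read
ANALYZE TABLE scott.emp COMPUTE STATISTICS;

-- estimated statistics: sample size given as a row count or a percentage
ANALYZE TABLE scott.emp ESTIMATE STATISTICS SAMPLE 1000 ROWS;
ANALYZE TABLE scott.emp ESTIMATE STATISTICS SAMPLE 10 PERCENT;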
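The dictionary views described above can then be queried to check what was gathered. Again a small sketch, assuming the same hypothetical SCOTT.EMP table:

-- table statistics
SELECT num_rows, blocks, empty_blocks, chain_cnt, avg_row_len,
       sample_size, last_analyzed, global_stats, user_stats
  FROM dba_tables
 WHERE owner = 'SCOTT' AND table_name = 'EMP';

-- column statistics
SELECT column_name, num_distinct, low_value, high_value, density,
       num_nulls, num_buckets, last_analyzed
  FROM dba_tab_columns
 WHERE owner = 'SCOTT' AND table_name = 'EMP';

-- index statistics
SELECT index_name, blevel, leaf_blocks, distinct_keys,
       avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor
  FROM dba_indexes
 WHERE owner = 'SCOTT' AND table_name = 'EMP';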