Row Migration and Row Chaining Problems in Oracle Database


1. An Introduction to Row Migration / Row Chaining

In practical work we often run into Oracle database performance problems. The causes are of course many-sided, and some of them can be avoided through correct design and diagnosis. Row migration and row chaining are among the potential causes of poor Oracle performance that we can try to avoid, and by properly diagnosing them we can improve the performance of the database.

So what exactly are row migration and row chaining? Let us start from Oracle's block.

The minimum unit of read and write for the operating system is the operating system block, so when creating an Oracle database we should set the database block size to an integer multiple of the operating system block size. The Oracle block is the minimum unit of read and write inside the Oracle database; in versions before Oracle9i the block size could be set only once, when the database was created. To choose a reasonable Oracle block size before creating the database, we need to consider factors such as the size of the database itself and the number of concurrent transactions; a suitable block size is very important for database tuning. An Oracle block consists of three parts: the block header, free space, and actual data.

Block header: contains basic information about the block, such as its address, the type of segment it belongs to (table, index, and so on), and a directory of the rows it actually holds.

Free space: the space that can be allocated to future UPDATE and INSERT operations; its size is affected by both the PCTFREE and PCTUSED parameters.

Actual data: the row data actually stored in the block.

When creating or altering any table or index, Oracle uses two storage parameters to control space:

PCTFREE: the percentage of space reserved in the block for future updates to existing rows.

PCTUSED: the minimum percentage of used space below which new rows may again be inserted into the block. This value determines the availability status of the block: inserts can be performed on an available block, while an unavailable block only accepts deletes and updates; blocks in the available state are placed on the freelist.
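As a quick way to see what a table currently uses, the settings can be read from the data dictionary; a minimal sketch, using the TEST table created later in this article as an example:

-- current block-space settings for a table (values are percentages)
SELECT table_name, pct_free, pct_used
  FROM user_tables
 WHERE table_name = 'TEST';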

When a row cannot fit into a single data block, one of two things happens: row chaining or row migration.

Row chaining occurs when a row is too large to fit into any single block at insert time. In this case Oracle stores the row as a chain of pieces across blocks reserved within the segment. Row chaining tends to occur with very wide rows, such as rows containing LONG, LONG RAW, or LOB columns; in those cases chaining can be unavoidable.

Row migration occurs when a row that initially fit into one block grows through updates until the block is completely full. Oracle then migrates the entire row into a new block (assuming one block can hold the whole row), and keeps a pointer in the original block that points to the block now holding the row. This means the ROWID of a migrated row does not change.

When row migration or row chaining occurs, access to that row becomes slower, because Oracle must scan more blocks to retrieve it.

The following example illustrates how row migration and row chaining arise.

Create a test table with PCTFREE 20 and PCTUSED 50:

CREATE TABLE test (
    col1 CHAR(20),
    col2 NUMBER)
  PCTFREE 20
  PCTUSED 50;

When a record is inserted, Oracle looks for a free block on the freelist and inserts the data into it. Whether a block appears on the freelist is determined by the PCTFREE value: an initially empty block stays on the freelist until its free space shrinks to the PCTFREE threshold, at which point it is removed from the freelist; when the used space in the block later drops below PCTUSED, the block is put back on the freelist.

Oracle's freelist mechanism greatly improves performance: for each insert, Oracle only needs to search the freelist instead of scanning all blocks for free space.
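To see how many blocks of a segment are currently on the freelist, the DBMS_SPACE package can be used; a minimal sketch, assuming the TEST table above and sufficient privileges on it:

SET SERVEROUTPUT ON
DECLARE
    free_blks NUMBER;
BEGIN
    -- counts the blocks of TEST on freelist group 0
    DBMS_SPACE.FREE_BLOCKS(USER, 'TEST', 'TABLE', 0, free_blks);
    DBMS_OUTPUT.PUT_LINE('Blocks on the freelist: ' || free_blks);
END;
/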

Next, let us look in detail at how row chaining and row migration are produced and how they appear in the data file.

First check the data file number of the ALLAN tablespace; to simplify the test, it has only one data file.

SQL> SELECT file_id FROM dba_data_files WHERE tablespace_name = 'ALLAN';

   FILE_ID
----------
        23

Create a test table Test:

SQL> CREATE TABLE test (x INT PRIMARY KEY, a CHAR(2000), b CHAR(2000),
     c CHAR(2000), d CHAR(2000), e CHAR(2000)) TABLESPACE allan;

Table created.

Because my database's DB_BLOCK_SIZE is 8K, and the table I created has five CHAR(2000) fields, a fully populated row takes roughly 10K, more than one block can hold.
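To confirm the block size of your own database before reproducing this, it can be read from the parameter settings; a minimal sketch, assuming access to V$PARAMETER:

-- block size in bytes; 8192 in this example
SELECT value FROM v$parameter WHERE name = 'db_block_size';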

Then insert a row with only one field populated:

SQL> INSERT INTO test (x) VALUES (1);

1 row created.

SQL> commit;

Commit complete.

Find the block containing this row, and dump it:

SQL> SELECT dbms_rowid.rowid_block_number(rowid) FROM test;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
                                  34
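As an aside, the relative file number can be derived from the rowid in the same way, so both arguments of the dump command below can come from a single query; a sketch using the same DBMS_ROWID package:

-- file# and block# of each row, usable directly in ALTER SYSTEM DUMP
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file#,
       dbms_rowid.rowid_block_number(rowid) AS block#
  FROM test;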

SQL> ALTER SYSTEM DUMP DATAFILE 23 BLOCK 34;

System altered.

The contents of the trace file in the UDUMP directory are as follows:

Start dump data blocks tsn: 34 file#: 23 minblk 34 maxblk 34
buffer tsn: 34 rdba: 0x05c00022 (23/34)
scn: 0x0000.013943f3 seq: 0x01 flg: 0x02 tail: 0x43f30601
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Block header dump: 0x05c00022
Object id on Block? Y
seg/obj: 0x3ccd  csc: 0x00.13943ef  itc: 2  flg: O  typ: 1 - DATA
fsl: 0  fnx: 0x0 ver: 0x01

Itl    Xid                  Uba                 Flag Lck  Scn/Fsc
0x01   0x000a.02e.00000ad7  0x00800036.03de.18  --U-   1  fsc 0x0000.013943f3
0x02   0x0000.000.00000000  0x00000000.0000.00  ----   0  fsc 0x0000.00000000

data_block_dump, data header at 0xadb505c
================
tsiz: 0x1fa0
hsiz: 0x14
pbl: 0x0adb505c
bdba: 0x05c00022
    76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f9a
avsp=0x1f83
tosp=0x1f83
0xe: pti[0] nrow=1 offs=0
0x12: pri[0] offs=0x1f9a
block_row_dump:
tab 0, row 0, @0x1f9a
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col 0: [ 2]  c1 02
end_of_block_dump
End dump data blocks tsn: 34 file#: 23 minblk 34 maxblk 34

A few notes on this information:

FB: H marks the head of a row piece, F the first piece of the row, and L the last piece of the row.

CC: the number of columns in the row piece.

NRID: for a chained or migrated row, the rowid of the next row piece.

We can see that the TEST table currently has neither row chaining nor row migration.

Then update the TEST table and dump the block again:

SQL> UPDATE test SET a = 'test', b = 'test', c = 'test', d = 'test', e = 'test' WHERE x = 1;

1 row updated.

SQL> commit;

Commit complete.

At this point, row migration or row chaining should have occurred.

SQL> ALTER SYSTEM DUMP DATAFILE 23 BLOCK 34;

System altered.

The contents of the trace file in the UDUMP directory are now as follows:

Start dump data blocks tsn: 34 file#: 23 minblk 34 maxblk 34
buffer tsn: 34 rdba: 0x05c00022 (23/34)
scn: 0x0000.0139442b seq: 0x01 flg: 0x02 tail: 0x442b0601
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Block header dump: 0x05c00022
Object id on Block? Y
seg/obj: 0x3ccd  csc: 0x00.1394429  itc: 2  flg: -  typ: 1 - DATA
fsl: 0  fnx: 0x0 ver: 0x01

Itl    Xid                  Uba                 Flag Lck  Scn/Fsc
0x01   0x000a.02e.00000ad7  0x00800036.03de.18  C---   0  scn 0x0000.013943f3
0x02   0x0004.002.00000ae0  0x0080003b.0441.11  --U-   1  fsc 0x0000.0139442b

data_block_dump, data header at 0xadb505c
================
tsiz: 0x1fa0
hsiz: 0x14
pbl: 0x0adb505c
bdba: 0x05c00022
    76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x178a
avsp=0x177c
tosp=0x177c
0xe: pti[0] nrow=1 offs=0
0x12: pri[0] offs=0x178a
block_row_dump:
tab 0, row 0, @0x178a
tl: 2064 fb: --H-F--N lb: 0x2  cc: 3
nrid: 0x05c00023.0
col 0: [ 2]  c1 02
col 1: [2000]
 74 65 73 74 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 ...
col 2: [48]
 74 65 73 74 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 ...
end_of_block_dump
End dump data blocks tsn: 34 file#: 23 minblk 34 maxblk 34

It is easy to see that NRID now has a value, pointing to the rowid of the next row piece, which proves that the update just performed caused this row to become chained or migrated.

2. Detecting Row Migration / Row Chaining

From the introduction above we know that row chaining is mainly caused by the database's DB_BLOCK_SIZE being too small to hold a large row in one block. Apart from using a bigger DB_BLOCK_SIZE there is not much that can be done about row chaining, and DB_BLOCK_SIZE cannot be changed after the database is created (before 9i; from 9i onward we can specify a different block size for different tablespaces). The generation of row chaining is therefore almost inevitable, and there is not much room for adjustment. Row migration, on the other hand, is mainly caused by updates: the table's PCTFREE parameter is set too small, so the block does not have enough free space to hold the updated row, and the row is migrated. Dealing with row migration is well worth the effort, because it can be adjusted and controlled.
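For reference, from 9i a tablespace with a non-default block size can be created to relieve chaining of wide rows; a minimal sketch (the file name and sizes here are hypothetical, and an spfile is assumed for the ALTER SYSTEM):

-- a buffer cache for the non-default block size must exist before the tablespace
ALTER SYSTEM SET db_16k_cache_size = 16M;

CREATE TABLESPACE ts_16k
    DATAFILE '/u01/oradata/ts_16k_01.dbf' SIZE 100M
    BLOCKSIZE 16K;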

How do we detect row migration and row chaining in the database? We can use the utlchain.sql script shipped with Oracle (in the $ORACLE_HOME/rdbms/admin directory) to create a CHAINED_ROWS table, and then run the command ANALYZE TABLE table_name LIST CHAINED ROWS INTO CHAINED_ROWS against each table, which stores the analysis results in the CHAINED_ROWS table. The table definition in utlchain.sql also works for partitioned tables and cluster tables. We can then spool a generated script covering the tables we need and execute it to load the analysis data into CHAINED_ROWS. For example, the following generates the analysis script for all tables in the current schema:

spool list_migration_rows.sql
set echo off
set heading off
select 'analyze table ' || table_name || ' list chained rows into chained_rows;' from user_tables;
spool off
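For reference, the CHAINED_ROWS table that utlchain.sql creates looks roughly like this (based on the 9i version of the script; exact columns can vary by release):

create table CHAINED_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  cluster_name       varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         rowid,
  analyze_timestamp  date
);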

Then query the CHAINED_ROWS table to see exactly how many chained and migrated rows each table has:

SELECT table_name, count(*) FROM chained_rows GROUP BY table_name;

Alternatively, query the 'table fetch continued row' statistic in the V$SYSSTAT view to get the current count of fetches affected by row chaining and row migration:

SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
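Because that statistic is cumulative since instance startup, it is most meaningful relative to the total number of row fetches; a rough sketch comparing it with the rowid-fetch statistic:

-- fraction of row fetches that had to follow a chained/migrated row piece
SELECT a.value AS continued_fetches,
       b.value AS rowid_fetches,
       ROUND(a.value / GREATEST(b.value, 1) * 100, 2) AS pct
  FROM v$sysstat a, v$sysstat b
 WHERE a.name = 'table fetch continued row'
   AND b.name = 'table fetch by rowid';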

The following script finds the tables with chained and migrated rows directly, automating all of the analysis and statistics:

accept owner prompt "Enter the schema name to check for Row Chaining (RETURN for All): "
prompt
accept table prompt "Enter the table name to check (RETURN for All tables owned by &owner): "
prompt
set head off serverout on term on feed off veri off echo off
clear

declare
    v_owner        varchar2(30);
    v_table        varchar2(30);
    v_chains       number;
    v_rows         number;
    v_count        number := 0;
    sql_stmt       varchar2(100);
    dynamicCursor  integer;
    dummy          integer;
    cursor chains is
        select count(*) from chained_rows;
    cursor analyze is
        select owner, table_name
          from sys.dba_tables
         where owner like upper('%&owner%')
           and table_name like upper('%&table%')
         order by table_name;
begin
    dbms_output.enable(64000);
    open analyze;
    fetch analyze into v_owner, v_table;
    while analyze%FOUND loop
        dynamicCursor := dbms_sql.open_cursor;
        sql_stmt := 'analyze table ' || v_owner || '.' || v_table ||
                    ' list chained rows into chained_rows';
        dbms_sql.parse(dynamicCursor, sql_stmt, dbms_sql.native);
        dummy := dbms_sql.execute(dynamicCursor);
        dbms_sql.close_cursor(dynamicCursor);
        open chains;
        fetch chains into v_chains;
        if (v_chains != 0) then
            if (v_count = 0) then
                dbms_output.put_line(chr(9) || chr(9) || chr(9) ||
                                     '<<<<< CHAINED ROWS FOUND >>>>>');
                v_count := 1;
            end if;
            dynamicCursor := dbms_sql.open_cursor;
            sql_stmt := 'select count(*) v_rows from ' || v_owner || '.' || v_table;
            dbms_sql.parse(dynamicCursor, sql_stmt, dbms_sql.native);
            dbms_sql.define_column(dynamicCursor, 1, v_rows);
            dummy := dbms_sql.execute(dynamicCursor);
            dummy := dbms_sql.fetch_rows(dynamicCursor);
            dbms_sql.column_value(dynamicCursor, 1, v_rows);
            dbms_sql.close_cursor(dynamicCursor);
            dbms_output.put_line(v_owner || '.' || v_table);
            dbms_output.put_line(chr(9) || '---> Has ' || v_chains ||
                                 ' chained rows and ' || v_rows || ' num_rows in it!');
            dynamicCursor := dbms_sql.open_cursor;
            sql_stmt := 'truncate table chained_rows';
            dbms_sql.parse(dynamicCursor, sql_stmt, dbms_sql.native);
            dummy := dbms_sql.execute(dynamicCursor);
            dbms_sql.close_cursor(dynamicCursor);
            v_chains := 0;
        end if;
        close chains;
        fetch analyze into v_owner, v_table;
    end loop;
    if (v_count = 0) then
        dbms_output.put_line('No Chained Rows found in the ' || v_owner || ' owned Tables!');
    end if;
    close analyze;
end;
/
set feed on head on

3. Clearing Row Migration / Row Chaining

Since the only remedy for row chaining is DB_BLOCK_SIZE, and DB_BLOCK_SIZE cannot be changed after the database is created, there is not much to say about clearing row chaining; the discussion below focuses mainly on how to remove row migration from an actual production system.

Clearing row migration generally takes two steps: first, control the growth of row migration so that it stops increasing; second, clear the row migration that already exists.

As is well known, the main cause of row migration is a PCTFREE parameter that is set too small, so the first step, controlling the growth of row migration, requires setting a correct and suitable PCTFREE; otherwise, even after the current migrated rows are cleared, many new ones will soon appear. Of course, bigger is not better: if PCTFREE is too large, block space utilization will be low and much space will be wasted, so a reasonable PCTFREE must be chosen. There are generally two ways to determine a reasonable PCTFREE.

The first is a quantitative method, which uses a formula to size PCTFREE. First run ANALYZE TABLE table_name ESTIMATE STATISTICS to analyze the table, then read the AVG_ROW_LEN column of USER_TABLES to get a first average row length, avg_row_len1. After a period of heavy activity against the table, analyze it again and get the second average row length, avg_row_len2. Then the formula 100 * (avg_row_len2 - avg_row_len1) / (avg_row_len2 - avg_row_len1 + original avg_row_len) gives a suitable PCTFREE by quantitative calculation. Because this is an estimate it is not necessarily accurate, and because it involves analyzing the table it is not very suitable for systems whose execution plans rely on the RBO. For example, with avg_row_len1 = 60 and avg_row_len2 = 70, the average growth is 10, and PCTFREE would be set to 100 * 10 / (10 + 60), about 14.3%. The second is an incremental fine-tuning method. First query the table's current PCTFREE, then monitor and adjust it, raising PCTFREE by no more than 5 percentage points at a time, and after each change use ANALYZE TABLE table_name LIST CHAINED ROWS INTO chained_rows to measure the growth of chained and migrated rows. Raise PCTFREE faster for tables whose migration grows quickly and more slowly for those that grow slowly, until the table's row migration stops growing. Be careful not to push PCTFREE too high; it should normally stay below 40%, otherwise space will be badly wasted and the database's I/O will increase.
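A sketch of the quantitative method on the TEST table from earlier (the final value of 14 is just the example's result, 14.3%, rounded; plug in your own measured lengths):

-- step 1: analyze and record the first average row length
ANALYZE TABLE test ESTIMATE STATISTICS;
SELECT avg_row_len FROM user_tables WHERE table_name = 'TEST';

-- ... run a period of representative application updates against the table ...

-- step 2: analyze again and record the second average row length
ANALYZE TABLE test ESTIMATE STATISTICS;
SELECT avg_row_len FROM user_tables WHERE table_name = 'TEST';

-- step 3: apply 100 * (len2 - len1) / (len2 - len1 + len1) and set the result
ALTER TABLE test PCTFREE 14;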

Once the growth of row migration in the current tables has been stopped with the methods above, we can start to clear the row migration that existed before. Whether it is cleared has a direct bearing on how much the system's performance can improve, so clearing previously existing migrated rows is very necessary. There are many ways to do so, but not every method fits every situation: factors such as how many records the table holds, how many other tables reference it, and how many migrated rows it has all determine which clearing method can be used. We should therefore choose the method according to the characteristics of the table and the concrete situation. Below I introduce several methods for clearing row migration and the situations each applies to.

Method 1: the traditional way of clearing row migration

Specific steps are as follows:

1. Run the utlchain.sql script in the $ORACLE_HOME/rdbms/admin directory to create the CHAINED_ROWS table.

@$ORACLE_HOME/rdbms/admin/utlchain.sql

2. Analyze the table that has migrated rows (replace table_name with the actual name), placing the results into the CHAINED_ROWS table.

ANALYZE TABLE table_name LIST CHAINED ROWS INTO chained_rows;

3. Copy the migrated rows of the table into a temporary table.

CREATE TABLE table_name_temp AS
SELECT * FROM table_name
WHERE rowid IN
      (SELECT head_rowid FROM chained_rows
        WHERE table_name = 'TABLE_NAME');

4. Delete the migrated rows from the original table.

DELETE FROM table_name
WHERE rowid IN
      (SELECT head_rowid
         FROM chained_rows
        WHERE table_name = 'TABLE_NAME');

5. Reinsert the deleted rows from the temporary table into the original table, then drop the temporary table.

INSERT INTO table_name SELECT * FROM table_name_temp;

DROP TABLE table_name_temp;

The advantage of this method of clearing row migration is that the procedure is simple and easy to carry out. Its defect is that it does not take referential constraints into account: in most databases many tables are referenced by other tables through foreign keys, which prevents the DELETE in step 4 from removing the migrated rows. The range of applicable tables is therefore limited: it can only be used on tables that no foreign keys reference. In addition, because the method does not disable or drop indexes while deleting and inserting the data, most of the time is spent maintaining the balance of the index trees; for a small number of records that time is fairly short, but for a large number of records it becomes unacceptable. Clearly, this method is not advisable when a large amount of data must be processed.

The following is an example of clearing row migration on a production database; the table's PCTFREE parameter had already been adjusted to a suitable value beforehand:

SQL> @$ORACLE_HOME/rdbms/admin/utlchain.sql

Table created.

SQL> ANALYZE TABLE customer LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT table_name, count(*) FROM chained_rows GROUP BY table_name;

TABLE_NAME                       COUNT(*)
------------------------------ ----------
CUSTOMER                            21306

1 row selected.

Check the constraints on the CUSTOMER table:

SQL> SELECT constraint_name, constraint_type, table_name
     FROM user_constraints WHERE table_name = 'CUSTOMER';

CONSTRAINT_NAME                C TABLE_NAME
------------------------------ - ------------------------------
PK_CUSTOMER1                   P CUSTOMER

SQL> SELECT constraint_name, constraint_type, table_name
     FROM user_constraints WHERE r_constraint_name = 'PK_CUSTOMER1';

no rows selected

SQL> CREATE TABLE customer_temp AS
     SELECT * FROM customer WHERE rowid IN
     (SELECT head_rowid FROM chained_rows
       WHERE table_name = 'CUSTOMER');

Table created.

SQL> SELECT count(*) FROM customer;

  COUNT(*)
----------
    338299

SQL> DELETE FROM customer WHERE rowid IN
     (SELECT head_rowid
        FROM chained_rows
       WHERE table_name = 'CUSTOMER');

21306 rows deleted.

SQL> INSERT INTO customer SELECT * FROM customer_temp;

21306 rows created.

SQL> DROP TABLE customer_temp;

Table dropped.

SQL> commit;

Commit complete.

SQL> SELECT count(*) FROM customer;

  COUNT(*)
----------
    338299

SQL> TRUNCATE TABLE chained_rows;

Table truncated.

SQL> ANALYZE TABLE customer LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
         0

Clearing these twenty-odd thousand migrated rows took about three minutes in total and was done entirely online, with essentially no impact on the business. The only restriction is that the table being cleared must not be referenced by any foreign keys, otherwise this method cannot be used.

Method 2: an improved version of the traditional method

1. Run the utlchain.sql script in the $ORACLE_HOME/rdbms/admin directory to create the CHAINED_ROWS table.

2. Disable all foreign key constraints on other tables that reference this table.

3. Copy the migrated rows of the table into a temporary table.

4. Delete the migrated rows from the original table.

5. Reinsert the deleted rows from the temporary table into the original table, then drop the temporary table.

6. Re-enable all the foreign key constraints disabled in step 2.

This algorithm improves on the traditional one: it takes the relationships between tables into account and can flexibly use table dependency information generated by a tool such as Toad, so it suits more cases of clearing row migration. However, because indexes must be maintained or rebuilt, it is not well suited to tables with very large record counts, such as tens of millions of rows: rebuilding an index takes time that grows with the table size, the rebuild locks the table so new records cannot be inserted, and an overly long rebuild can seriously affect the application and even lose data, which is an important factor to weigh before using this method. From 8i onward the index can be rebuilt ONLINE, which avoids the lock but adds extra overhead and takes longer. Moreover, since deleting and reinserting the records maintains the indexes row by row, a table with many migrated rows consumes a lot of time, so the method only suits tables with relatively few migrated rows. In general, it is not very applicable to tables with too many records or too many migrated rows, and fits best where both counts are modest. The following is an example of clearing row migration on a production database; the table's PCTFREE parameter had already been adjusted to a suitable value beforehand:

SQL> SELECT index_name, index_type, table_name FROM user_indexes
     WHERE table_name = 'TERMINAL';

INDEX_NAME                     INDEX_TYPE TABLE_NAME
------------------------------ ---------- ------------------------------
INDEX_TERMINAL_TERMINALCODE    NORMAL     TERMINAL
I_TERMINAL_ID_TYPE             NORMAL     TERMINAL
I_TERMINAL_OT_OID              NORMAL     TERMINAL
PK_TERMINAL_ID                 NORMAL     TERMINAL
UI_TERMINAL_GOODIS_SSN         NORMAL     TERMINAL

SQL> SELECT constraint_name, constraint_type, table_name FROM user_constraints
     WHERE r_constraint_name = 'PK_TERMINAL_ID';

CONSTRAINT_NAME                C TABLE_NAME
------------------------------ - ------------------------------
SYS_C003200                    R CONN

SQL> ALTER TABLE conn DISABLE CONSTRAINT sys_c003200;

Table altered.

SQL> CREATE TABLE terminal_temp AS
     SELECT * FROM terminal
     WHERE rowid IN
     (SELECT head_rowid FROM chained_rows
       WHERE table_name = 'TERMINAL');

Table created.

SQL> SELECT count(*) FROM terminal_temp;

  COUNT(*)
----------
      8302

SQL> DELETE FROM terminal
     WHERE rowid IN
     (SELECT head_rowid
        FROM chained_rows
       WHERE table_name = 'TERMINAL');

8302 rows deleted.

SQL> INSERT INTO terminal SELECT * FROM terminal_temp;

8302 rows created.

SQL> ALTER TABLE conn ENABLE CONSTRAINT sys_c003200;

Table altered.

SQL> SELECT count(*) FROM terminal;

  COUNT(*)
----------
    647799

SQL> TRUNCATE TABLE chained_rows;

Table truncated.

SQL> ANALYZE TABLE terminal LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
         0

As the above shows, clearing the migrated rows of the TERMINAL table took less than five minutes in total, which is fairly fast. In my experience of clearing row migration in production databases, this method is suitable for most tables that have migrated rows.

Method 3: clearing row migration using scripts generated by the Toad tool

1. Back up the table whose row migration is to be cleared.

RENAME table_name TO table_name_temp;

2. Drop all foreign key constraints on other tables that reference table_name.

SELECT constraint_name, constraint_type, table_name FROM user_constraints
 WHERE r_constraint_name IN
       (SELECT constraint_name FROM user_constraints
         WHERE table_name = 'TABLE_NAME' AND constraint_type = 'P');

ALTER TABLE child_table_name DROP CONSTRAINT xxxx;   -- child_table_name and xxxx come from the query above

3. Rebuild the structure of the table renamed in step 1.

CREATE TABLE table_name AS SELECT * FROM table_name_temp WHERE 0 = 1;

4. Reinsert the original data into the table.

INSERT /*+ APPEND */ INTO table_name SELECT * FROM table_name_temp;

5. Drop the indexes on table_name_temp and its foreign keys referencing other tables.

6. Recreate on table_name the original indexes, primary key, and all foreign key constraints.

7. Recompile the related stored procedures, functions, and packages.

8. Drop the table table_name_temp.

With this method, all of the code can be generated by the Toad tool. Since it takes the table's relationships into account, it is a relatively thorough clearing method, and because the table and its indexes are rebuilt during the process, both the storage and the performance of the database improve. Because the method renames the original table as a temporary table and then rebuilds a new one, it needs roughly double the table's space, so before starting you must check that the free space in the tablespace is sufficient. The drawback is that indexes and constraints are rebuilt after the original data is reinserted into the new table, which costs a good deal of time and disk space, and the foreground application may be interrupted for a while; that interruption is mostly spent rebuilding indexes and constraints, and its length depends on how many indexes and constraints must be rebuilt and how many records the table holds. For systems that must run 7x24 this method is therefore not very appropriate, since it stops the application for some time; if the system's availability requirements are high, it does not apply.

Method 4: clearing row migration with the exp/imp utilities

1. Use exp to export the table that has migrated rows.

2. Truncate the original table.

3. Use imp to import the table back in.

4. Rebuild all indexes on the table. (Optional)

With this method, skipping the index rebuild saves that time, but the indexes will not be very efficient afterwards, so it is best to rebuild them ONLINE one by one later, which requires no service interruption. Bear in mind, however, that imp is slow and generates heavy I/O, so it should be run when the application is not busy, otherwise it will noticeably affect normal operation. The method also has a serious limitation: the table must be kept read-only for the whole duration, with no inserts or updates against it, otherwise data will be lost.
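One way to keep the table from being modified during the exp/truncate/imp window, assuming the application works through a dedicated account (app_user here is hypothetical), is to temporarily revoke its DML privileges:

-- before the export: block application writes
REVOKE INSERT, UPDATE, DELETE ON test FROM app_user;

-- ... run exp, TRUNCATE TABLE test, and imp ...

-- after the import: restore application writes
GRANT INSERT, UPDATE, DELETE ON test TO app_user;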

SQL> SELECT count(*) FROM test;

  COUNT(*)
----------
    169344

SQL> TRUNCATE TABLE chained_rows;

Table truncated.

SQL> ANALYZE TABLE test LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
      3294

$ exp allan/allan file=test.dmp tables=test

Export: Release 9.2.0.3.0 - Production on Sun Jun 6 13:50:08 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                           TEST     169344 rows exported
Export terminated successfully without warnings.

$ sqlplus allan/allan

SQL*Plus: Release 9.2.0.3.0 - Production on Sun Jun 6 13:50:43 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

SQL> TRUNCATE TABLE test;

Table truncated.

SQL> EXIT

Disconnected from Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

$ imp allan/allan file=test.dmp full=y ignore=y buffer=5000000

Import: Release 9.2.0.3.0 - Production on Sun Jun 6 13:51:24 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
. importing ALLAN's objects into ALLAN
. . importing table                         "TEST"     169344 rows imported
Import terminated successfully without warnings.

$ sqlplus allan/allan

SQL*Plus: Release 9.2.0.3.0 - Production on Sun Jun 6 13:52:53 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

SQL> SELECT count(*) FROM test;

  COUNT(*)
----------
    169344

SQL> SELECT index_name FROM user_indexes WHERE table_name = 'TEST';

INDEX_NAME
------------------------------
OBJ_INDEX

SQL> ALTER INDEX obj_index REBUILD ONLINE;

Index altered.

SQL> TRUNCATE TABLE chained_rows;

Table truncated.

SQL> ANALYZE TABLE test LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
         0

Method 5: clearing row migration with the MOVE command

1. Check the tablespace of the table whose row migration is to be cleared.

SELECT table_name, tablespace_name FROM user_tables WHERE table_name = 'TABLE_NAME';

2. Check the indexes on the table whose row migration is to be cleared.

SELECT index_name, table_name FROM user_indexes WHERE table_name = 'TABLE_NAME';

3. MOVE the table into the specified tablespace.

ALTER TABLE table_name MOVE TABLESPACE tablespace_name;

4. Rebuild all indexes on the table (a script-generation sketch follows below).

ALTER INDEX index_name REBUILD;
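When the table has many indexes, the rebuild statements can be generated from the data dictionary instead of being typed one by one; a minimal sketch (the spool file name is arbitrary):

SET heading OFF feedback OFF
SPOOL rebuild_indexes.sql
SELECT 'ALTER INDEX ' || index_name || ' REBUILD;'
  FROM user_indexes
 WHERE table_name = 'TABLE_NAME';
SPOOL OFF

@rebuild_indexes.sql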

This method suits database versions 8i and above. It uses the database's MOVE command to clear row migration; MOVE is essentially an INSERT ... SELECT process, so while the table is being moved, twice its original space is needed, because the old table is kept during the operation and its space is only freed after the new table is built and the old one dropped. When moving, pay attention to the tablespace parameter: you must first know which tablespace the table lives in. And because MOVE requires the indexes to be rebuilt, you must also identify all the indexes on the table.
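Since MOVE temporarily needs roughly double the table's space, it is worth comparing the segment size against the free space of the target tablespace beforehand; a minimal sketch:

-- current size of the table's segment
SELECT bytes / 1024 / 1024 AS size_mb
  FROM user_segments
 WHERE segment_name = 'TABLE_NAME';

-- free space per tablespace visible to the current user
SELECT tablespace_name, SUM(bytes) / 1024 / 1024 AS free_mb
  FROM user_free_space
 GROUP BY tablespace_name;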

This method is not suitable for tables with very many records or very many indexes: the MOVE itself will be slow, and MOVE locks the table, which for that period blocks other operations against it and can cause inserts to fail and data to be lost. The indexes must also be rebuilt after the MOVE, and the longer the rebuild takes, the greater the impact; until the rebuild finishes, the indexes are unusable, and statements that depend on them must wait or be re-executed.
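After a MOVE and before the rebuilds finish, the table's indexes are left in the UNUSABLE state; they can be spotted with a dictionary query such as:

-- indexes invalidated by the MOVE, pending REBUILD
SELECT index_name, status
  FROM user_indexes
 WHERE table_name = 'TABLE_NAME'
   AND status = 'UNUSABLE';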

The following is an example of clearing row migration on a production database; the table's PCTFREE parameter had already been adjusted to a suitable value beforehand:

SQL> ANALYZE TABLE service LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
      9145

SQL> SELECT table_name, tablespace_name FROM user_tables
     WHERE table_name = 'SERVICE';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
SERVICE                        DATA

SQL> SELECT index_name, table_name FROM user_indexes
     WHERE table_name = 'SERVICE';

INDEX_NAME                     TABLE_NAME
------------------------------ ------------------------------
I_SERVICE_ACCOUNTNUM           SERVICE
I_SERVICE_DATEACTIVATED        SERVICE
I_SERVICE_SC_S                 SERVICE
I_SERVICE_SERVICECODE          SERVICE
PK_SERVICE_SID                 SERVICE

SQL> SELECT count(*) FROM service;

  COUNT(*)
----------
    518718

SQL> ALTER TABLE service MOVE TABLESPACE data;

Table altered.

SQL> ALTER INDEX i_service_accountnum REBUILD;

Index altered.

SQL> ALTER INDEX i_service_dateactivated REBUILD;

Index altered.

SQL> ALTER INDEX i_service_sc_s REBUILD;

Index altered.

SQL> ALTER INDEX i_service_servicecode REBUILD;

Index altered.

SQL> ALTER INDEX pk_service_sid REBUILD;

Index altered.

SQL> TRUNCATE TABLE chained_rows;

Table truncated.

SQL> ANALYZE TABLE service LIST CHAINED ROWS INTO chained_rows;

Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
         0

Clearing row migration with the MOVE command uses fairly simple statements. In this example, clearing the migrated rows of the SERVICE table took about five minutes in all, of which the MOVE itself took less than two minutes; that is, the table was locked for roughly two minutes. For most applications this is generally not a problem, and executed while the system is not busy it has little impact on the application.

Method 6: a clearing method for very large tables with a huge number of migrated rows

1. Use the Toad tool or another method to obtain the DDL of the large table that has many migrated rows and many records, and save it as a script.

2. Use the RENAME command to rename the original table as a backup table, then drop the constraints on other tables that reference the original table, as well as the foreign keys and indexes on the original table itself.

3. Use the script generated in step 1 to rebuild the original table, together with its constraints, foreign keys, indexes, and other objects.

4. Export the backup table in table mode and import it into a temporary staging database; because the table name has changed, rename it there back to the original name, export it again, and finally import it into the original database.

This method is mainly for tables whose data volume and number of migrated rows are both enormous. Clearing row migration from such big tables normally requires stopping the application for a long time, which is a headache; for 7x24 applications, the longer the downtime, the greater the loss, so the downtime must be minimized. Because the table itself is so large, any operation on it costs time and resources, but if during some period the activity against the table is mainly inserts, with few updates and deletes, the following approach can be considered. First rename the table, then immediately rebuild an identical empty table so that the application can keep inserting data normally; rebuilding an empty table structure takes only seconds, so the application hardly stops at all. Then export the renamed original table in table mode; because the table name has changed, a temporary database is needed to import that data, rename it back to the original name there, and export it again; finally import it back into the original database. This is somewhat troublesome to carry out, but it is a very effective and fast approach, and the final import needs no extra work because the table structure already exists. Most importantly, this method requires the shortest downtime.

SQL> ALTER TABLE user.pay RENAME TO pay_x;

Then export the PAY_X table:

$ exp user/user file=pay_x.dmp tables=pay_x

SQL> ALTER TABLE user.batchpaymentdetail DROP CONSTRAINT fk_batchpaymentail_opayid;

SQL> ALTER TABLE user.depositclassify DROP CONSTRAINT fk_depositclassify2;

SQL> ALTER TABLE user.depositcreditlog DROP CONSTRAINT fk_depositcreditlog2;

SQL> ALTER TABLE user.deposit DROP CONSTRAINT sys_c003423;

SQL> ALTER TABLE user.pay_x DROP CONSTRAINT sys_c003549;

SQL> DROP INDEX user.i_pay_staffid;

SQL> CREATE TABLE user.pay
(
    payid              NUMBER(9),
    accountnum         NUMBER(9),
    total              NUMBER(12,2),
    prevpay            NUMBER(12,2),
    pay                NUMBER(12,2),
    staffid            NUMBER(9),
    processdate        DATE,
    payno              CHAR(12),
    type               CHAR(2) DEFAULT '0',
    paymentmethod      CHAR(1) DEFAULT '0',
    paymentmethodid    VARCHAR2(20),
    bankaccount        VARCHAR2(32),
    paymentid          NUMBER(9),
    status             CHAR(1) DEFAULT '0',
    memo               VARCHAR2(255),
    serviceid          NUMBER(9),
    currentdepositid   NUMBER(9),
    shouldprocessdate  DATE DEFAULT sysdate,
    originalexpiredate DATE,
    originalcanceldate DATE,
    expiredate         DATE,
    canceldate         DATE,
    deposittype        CHAR(1)
)
TABLESPACE user
PCTUSED 95
PCTFREE 5
INITRANS 1
MAXTRANS 255
STORAGE (
    INITIAL 7312K
    NEXT 80K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
)
NOLOGGING
NOCACHE
NOPARALLEL;

SQL> CREATE INDEX user.i_pay_staffid ON user.pay
(staffid)
NOLOGGING
TABLESPACE user
PCTFREE 5
INITRANS 2
MAXTRANS 255
STORAGE (
    INITIAL 1936K
    NEXT 80K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
)
NOPARALLEL;

SQL> CREATE UNIQUE INDEX user.pk_pay_id ON user.pay
(payid)
NOLOGGING
TABLESPACE user
PCTFREE 5
INITRANS 2
MAXTRANS 255
STORAGE (
    INITIAL 1120K
    NEXT 80K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
)
NOPARALLEL;

SQL> ALTER TABLE user.pay ADD (
       FOREIGN KEY (staffid)
       REFERENCES user.staff (staffid));

SQL> ALTER TABLE user.depositclassify ADD
       CONSTRAINT fk_depositclassify2
       FOREIGN KEY (payid)
       REFERENCES user.pay (payid);

SQL> ALTER TABLE user.depositcreditlog ADD
       CONSTRAINT fk_depositcreditlog2
       FOREIGN KEY (payid)
       REFERENCES user.pay (payid);

SQL> ALTER FUNCTION "User". "Generatepayno" Compile;

SQL> ALTER Procedure "User". "EngenderPRVPAY" Compile

SQL> ALTER Procedure "User". "Isap_engennderprvpay" Compile;

SQL> ALTER Procedure "User". "Spaddcreditdeposit" compile;

SQL> ALTER Procedure "User". "SPADDDEPOSITWITHOUTCARD" Compile;

SQL> ALTER Procedure "User". "SPADJUSTLWDEPOTI" Compile;

......

The dump file of the exported PAY_X table is then imported into a temporary database, the table is renamed there, and it is exported again in table mode:

$ imp user/user file=pay_x.dmp tables=pay_x ignore=y

SQL> RENAME pay_x TO pay;

$ exp user/user file=pay.dmp tables=pay

Finally, this dump file is imported into the production database.

With the above procedure, the application returns to normal as soon as the PAY table is rebuilt; the time between renaming the old table and finishing the rebuild is very short, a few minutes in my tests, after which new data can be inserted into the table. The remaining work is to import the old data back into the database; that work is not time-critical, because the application is already running normally, and it can generally be completed at night when the business is quiet, importing the whole table's data in one pass.

Each of the six row-migration clearing methods above has its own advantages and disadvantages and suits different situations; together they can clear essentially all of the row migration existing in a system. Of course, concrete problems in a concrete production environment still need specific analysis: for different types of systems and tables, choose different clearing methods, and try to minimize database downtime so that the application keeps running without interruption.

Original source: https://www.9cbs.com/read-90367.html
