Oracle 10g Data Pump (Part I)


Author: Fenng. Source: http://www.DBAnotes.net

Oracle 10g Data Pump technology moves data and metadata between databases at high speed. It is built on two data movement utilities: Data Pump Export and Data Pump Import.

Oracle's Data Pump is implemented by a PL/SQL package, DBMS_DATAPUMP (also called the Data Pump API). Data Pump moves data using direct path loads and the external table mechanism, and it uses the DBMS_METADATA PL/SQL package to handle all of the database objects involved in the ETL process. Data Pump is also the foundation of several other key features, such as Streams replication and logical standby.
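As a minimal sketch of how the API is called (the schema name SCOTT and the directory object DATA_PUMP_DIR are assumptions made for this example), a schema-mode export can be driven directly from PL/SQL:

DECLARE
  h NUMBER;
BEGIN
  -- Create a schema-mode export job; its name becomes the master table name.
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- Write the dump file through an existing directory object (assumed here).
  DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');
  -- Limit the job to one schema (assumed example schema).
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/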

The Data Pump feature is integrated into Oracle Database 10g, but in the 10g Standard Edition the degree of parallelism cannot exceed 1.

Key Concepts

Master Table

The master table (MT) is the core of Data Pump technology. The MT holds the detailed information about the whole operation (in effect, the job's metadata); it is the MT that makes restartable export and import jobs possible. The details it records include:

the state of the job, needed in order to restart it, and the location of every object in the dump file set.

The master table is created in the schema of the user performing the current export or import operation, so that user must have sufficient space. The MT has the same name as the job that created it; as a consequence, you cannot give a Data Pump job the same name as an existing table or view.

During an export, the master table is built up and written to the dump file set when the job completes. When an import starts, the MT is loaded from the dump file set into the database and is used to control the sequence of operations; it can also supply some parameters for initializing the operation. Note that the master table cannot span files, so the specified dump file must be at least large enough to hold the MT.

The master table is retained or deleted according to the following rules:

If the job completes successfully, the MT is deleted. If the job is stopped with the STOP_JOB interactive command, the MT is retained so that the job can be restarted. If the job is killed with the KILL_JOB interactive command, the MT is deleted and the job cannot be restarted. If a job aborts unexpectedly, the MT is always retained.
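These interactive commands have API equivalents. A minimal sketch (the job name SYS_EXPORT_SCHEMA_01 is an assumption; it is the kind of name Oracle generates for a first schema-mode export):

DECLARE
  h NUMBER;
BEGIN
  -- Reattach to a running job by name (owner defaults to the current user).
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_01');
  -- Stop the job but keep the master table so it can be restarted later.
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 0, keep_master => 1);
END;
/

-- Later: reattach and resume the job where it left off.
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_01');
  DBMS_DATAPUMP.START_JOB(h);
END;
/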

Master Process

Each export or import job has one master process. The master process controls the whole job, including communicating with the clients, creating and controlling the worker processes, and logging.

Worker Process

Worker processes do the actual work of unloading and loading metadata and table data. The number of worker processes equals the value specified by the PARALLEL parameter and can be adjusted over the life of the job. (As mentioned above, in Standard Edition the degree cannot exceed 1.)

Network Export and Import

When an import operation runs over the network, the source is another database system, not a dump file set.
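In the DBMS_DATAPUMP API, network mode is requested by passing a database link (a sketch; the link name SRC_LINK and the schema are assumptions):

DECLARE
  h NUMBER;
BEGIN
  -- remote_link makes a live database the source instead of a dump file set.
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA',
                          remote_link => 'SRC_LINK');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/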

Server-Based Operation

Unlike the original Export/Import utilities, Data Pump Export and Import are essentially server-based. This means: most file reads and writes happen on the server, so path specifications and database directory objects are resolved on the server rather than the client; files are accessed with the operating-system privileges of the database server, which can raise data security issues; and relying on default directory objects can lead to file name conflicts.

Default Locations of Dump, Log, and SQL Files

The default locations of dump, log, and SQL files are server-based. Before you run Data Pump Export or Data Pump Import, a directory object must be created by a user with the CREATE ANY DIRECTORY privilege. Once the object exists, other users need to be granted READ/WRITE object privileges on it, for example:

GRANT READ ON DIRECTORY data_pump_dir TO foo;

Note that the directory object is usable only if the database server has operating-system read permission on the underlying directory.
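Putting the sequence together (a sketch; the OS path /u01/dpump is an assumption, and it must be readable and writable by the database server processes):

-- As a user with the CREATE ANY DIRECTORY privilege:
CREATE DIRECTORY data_pump_dir AS '/u01/dpump';
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO foo;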

Data Pump determines the location of a file in the following order:

1. The directory object specified for that individual file. 2. If no directory object is specified for a file, the directory object named by the DIRECTORY parameter is used by default.

Privileged and Non-Privileged Users

Data Pump distinguishes two types of users: privileged and non-privileged. Privileged users hold the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles; non-privileged users do not. Privileged users can do the following:

Export and import database objects owned by other users. Export and import non-schema-based objects (such as tablespace and schema definitions). Perform some operations on behalf of other users.
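A user becomes privileged in this sense simply by being granted the two roles, e.g.:

-- Grant the roles that make a user "privileged" for Data Pump purposes.
GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO foo;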

How Is Data Accessed?

Data Pump accesses data in two ways: direct path (DP) and external tables (ET). External tables use the ORACLE_DATAPUMP access driver and write the same internal data stream format as the direct path API. Data Pump automatically selects the fastest method for each table.
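The external table mechanism is also exposed directly in SQL. For example (a sketch reusing the assumed data_pump_dir directory object and an assumed source table emp), a table's rows can be unloaded into a Data Pump format file like this:

-- Unload rows into a Data Pump file via the ORACLE_DATAPUMP driver.
CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY data_pump_dir
    LOCATION ('emp_unload.dmp')
  )
AS SELECT * FROM emp;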

File Allocation

A Data Pump job manages three types of files:

dump files, which hold the data and metadata; log files, which record the messages of an operation; and SQL files, which record the output of a SQLFILE operation.

Specifying Files and Adding Additional Files

For export operations, you specify the dump files up front; if you find during the operation that file space is running short, you can add files with the ADD_FILE command in interactive mode. For import operations, all dump files must be specified when the job is defined. Log files and SQL files overwrite existing files; dump files never overwrite existing files.
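Adding a file can also be done through the API by reattaching to the job (a sketch; the job name is the same assumed one as above):

DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_01');
  -- Give the running export one more dump file to grow into.
  DBMS_DATAPUMP.ADD_FILE(h, 'scott02.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.DETACH(h);
END;
/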

Setting the Degree of Parallelism

For both export and import, the degree of parallelism (specified by the PARALLEL parameter) should be less than or equal to the number of dump files. If there are too few files, performance may suffer because multiple threads contend for access to the same dump file. As mentioned earlier, Standard Edition 10g is limited to a degree of 1, so this parameter is effectively only meaningful in Enterprise Edition.

Using Substitution Variables

In export and import operations you can use the %U substitution variable in file names, so that matching file names are generated automatically; this is convenient in some situations, as the sketch below shows.
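A degree-4 export paired with a %U file name template keeps one writer per file (a sketch under the same assumptions as the earlier examples):

DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- %u in the template expands to 01, 02, ..., one file per parallel stream.
  DBMS_DATAPUMP.ADD_FILE(h, 'scott%u.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  -- Degree of parallelism; only effective in Enterprise Edition.
  DBMS_DATAPUMP.SET_PARALLEL(h, 4);
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/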

New Features of Data Pump Export and Import

Compared with the old exp/imp utilities, Data Pump has many new features, including:

1. The ability to specify the maximum number of threads a Data Pump job may use.
2. Data Pump jobs are fully restartable.
3. The ability to detach from and reattach to a long-running job without affecting the job itself.
4. Support for export and import operations over the network.
5. REMAP_DATAFILE (remapping of data file references on import).
6. Support for fine-grained object selection.
7. An interactive command-line mode for monitoring and controlling jobs.
8. Space estimation for the objects to be exported.
9. The ability to specify a database version for the objects.
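On item 7: besides the interactive client, running jobs can be watched from any SQL session via the data dictionary, for example:

-- List all Data Pump jobs with their current state and degree of parallelism.
SELECT owner_name, job_name, operation, job_mode, state, degree
  FROM dba_datapump_jobs;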

(To be continued.)

MT structure description

SQL> desc SYS_EXPORT_SCHEMA_01
 Name                   Type
 ---------------------- --------------
 PROCESS_ORDER          NUMBER
 DUPLICATE              NUMBER
 DUMP_FILEID            NUMBER
 DUMP_POSITION          NUMBER
 DUMP_LENGTH            NUMBER
 DUMP_ALLOCATION        NUMBER
 COMPLETED_ROWS         NUMBER
 ERROR_COUNT            NUMBER
 ELAPSED_TIME           NUMBER
 OBJECT_TYPE_PATH       VARCHAR2(200)
 OBJECT_PATH_SEQNO      NUMBER
 OBJECT_TYPE            VARCHAR2(30)
 IN_PROGRESS            CHAR(1)
 OBJECT_NAME            VARCHAR2(500)
 OBJECT_SCHEMA          VARCHAR2(30)
 PARTITION_NAME         VARCHAR2(30)
 FLAGS                  NUMBER
 COMPLETION_TIME        DATE
 OBJECT_TABLESPACE      VARCHAR2(30)
 SIZE_ESTIMATE          NUMBER
 OBJECT_ROW             NUMBER
 PROCESSING_STATE       CHAR(1)
 PROCESSING_STATUS      CHAR(1)
 BASE_OBJECT_TYPE       VARCHAR2(30)
 BASE_OBJECT_NAME       VARCHAR2(30)
 BASE_OBJECT_SCHEMA     VARCHAR2(30)
 PARALLELIZATION        NUMBER
 UNLOAD_METHOD          NUMBER
 GRANULES               NUMBER
 SCN                    NUMBER
 DOMAIN_INDEX           VARCHAR2(30)
 DOMAIN_INDEX_SCHEMA    VARCHAR2(30)
 GRANTOR                VARCHAR2(30)
 NAME                   VARCHAR2(30)
 VALUE_T                VARCHAR2(4000)
 VALUE_N                NUMBER
 IS_DEFAULT             NUMBER
 FILE_TYPE              NUMBER
 USER_DIRECTORY         VARCHAR2(4000)
 USER_FILE_NAME         VARCHAR2(4000)
 FILE_NAME              VARCHAR2(4000)
 EXTEND_SIZE            NUMBER
 FILE_MAX_SIZE          NUMBER
 EXTEND_ACTIVE          NUMBER
 OVERFLOW_TO            NUMBER
 PROCESS_NAME           VARCHAR2(30)
 LAST_UPDATE            DATE
 WORK_ITEM              VARCHAR2(30)
 NON_TRANSACTIONAL      CHAR(1)
 OBJECT_NUMBER          NUMBER
 COMPLETED_BYTES        NUMBER
 TOTAL_BYTES            NUMBER
 METADATA_IO            NUMBER
 DATA_IO                NUMBER
 CUMULATIVE_TIME        NUMBER
 OLD_VALUE              VARCHAR2(4000)
 SEED                   NUMBER
 LAST_FILE              NUMBER
 USER_NAME              VARCHAR2(30)
 OPERATION              VARCHAR2(30)
 JOB_MODE               VARCHAR2(30)
 VERSION                NUMBER
 DB_VERSION             VARCHAR2(30)
 STATE                  VARCHAR2(30)
 PHASE                  NUMBER
 GUID                   RAW(16)
 START_TIME             DATE
 BLOCK_SIZE             NUMBER
 METADATA_BUFFER_SIZE   NUMBER
 DATA_BUFFER_SIZE       NUMBER
 DEGREE                 NUMBER
 LANGUAGE               VARCHAR2(30)
 PLATFORM               VARCHAR2(100)
 ABORT_STEP             NUMBER
 INSTANCE               VARCHAR2(16)

Original source: http://www.dbanotes.net/oracle/10g-data-pump-parti.htm

