EXP/IMP

zhaozj 2021-02-12

Export/Import (exp/imp) are among the oldest command-line tools still surviving in Oracle. In fact, I have never considered exp/imp a good backup method; the more accurate statement is that exp/imp can only be a good dump tool. They are especially useful for dumping small databases, migrating tablespaces, extracting individual tables, and detecting logical and physical corruption. Of course, using them as a logical auxiliary backup after the physical backup of a small database is also a good idea.

As databases keep growing, especially terabyte-scale databases and ever-larger data warehouses, exp/imp becomes increasingly inadequate; at that point database backups should turn to RMAN and third-party tools. Below is a brief introduction to the use of exp/imp.

Usage:

exp parameter_name=value

or exp parameter_name=(value1,value2,...)

Simply entering exp help=y displays the full help.

How to display exp's help in a different character set:

By setting the environment variable NLS_LANG you control the language of exp's help. With NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK the help is displayed in Chinese; with NLS_LANG=AMERICAN_AMERICA.<character set> the help is displayed in English.

All exp parameters (default values in parentheses):

USERID username/password, e.g. userid=duanl/duanl

FULL export the entire database (N)

BUFFER size of the data buffer

OWNER list of owner usernames; to export a specific user's objects, use owner=username

FILE output file (EXPDAT.DMP)

TABLES list of table names; specifies the tables to export, e.g. tables=table1,table2

COMPRESS compress the extents into one initial extent (Y)

RECORDLENGTH length of the IO record

GRANTS export grants (Y)

INCTYPE type of incremental export

INDEXES export indexes (Y)

RECORD track incremental export (Y)

ROWS export data rows (Y)

PARFILE parameter file name; if exp takes many parameters, you can store them in a parameter file (see the parameter file sketch after this list)

CONSTRAINTS export constraints (Y)

CONSISTENT cross-table consistency

LOG log file of screen output

STATISTICS analyze the exported objects (ESTIMATE)

DIRECT use direct path (N)

TRIGGERS export triggers (Y)

FEEDBACK show progress every x rows (0)

FILESIZE maximum size of each dump file

QUERY select clause used to export a subset of a table

The following keywords apply only to transportable tablespaces:

TRANSPORT_TABLESPACE export transportable tablespace metadata (N)

TABLESPACES list of tablespaces to transport
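A minimal sketch of a parameter file, as mentioned under PARFILE above; the file name exp_duanl.par and the values in it are only illustrative assumptions:

userid=duanl/duanl
file=./duanl.dmp
log=./duanl.log
owner=duanl
buffer=1000000

It is then invoked with: exp parfile=exp_duanl.par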

All imp parameters (default values in parentheses):

USERID username/password

FULL import the entire file (N)

BUFFER size of the data buffer

FROMUSER list of owner usernames

FILE input file (EXPDAT.DMP)

TOUSER list of target usernames

SHOW just list the file contents (N)

TABLES list of table names

IGNORE ignore create errors (N)

RECORDLENGTH length of the IO record

GRANTS import grants (Y)

INCTYPE type of incremental import

INDEXES import indexes (Y)

COMMIT commit after each array insert (N)

ROWS import data rows (Y)

PARFILE parameter file name

LOG log file of screen output

CONSTRAINTS import constraints (Y)

DESTROY overwrite tablespace data files (N)

INDEXFILE write table/index DDL to the specified file

SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)

ANALYZE execute the ANALYZE statements in the dump file (Y)

FEEDBACK show progress every x rows (0)

TOID_NOVALIDATE skip validation of the specified type IDs

FILESIZE maximum size of each dump file

RECALCULATE_STATISTICS recalculate statistics (N)

The following keywords apply only to transportable tablespaces:

TRANSPORT_TABLESPACE import transportable tablespace metadata (N)

TABLESPACES tablespaces to be transported into the database

DATAFILES datafiles to be transported into the database

TTS_OWNERS users owning data in the transportable tablespace set

A note on the incremental parameters: the incremental export of exp/imp is not a true incremental backup in the usual sense, so it is best not to use it.

Common exp options

1. FULL: used to export the entire database; combined with ROWS=N you can export the structure of the entire database, e.g.:

exp userid=test/test file=./db_str.dmp log=./db_str.log full=y rows=n compress=y direct=y

2. OWNER and TABLES: these two options define the objects exp will dump. OWNER exports the objects of the specified user(s); TABLES specifies the names of the tables to export, for example:

exp userid=test/test file=./db_str.dmp log=./db_str.log owner=duanl

exp userid=test/test file=./db_str.dmp log=./db_str.log tables=nc_data,fi_arap

3. BUFFER and FEEDBACK: when exporting a relatively large amount of data, I consider setting these two parameters, e.g.:

exp userid=test/test file=yw97_2003.dmp log=yw97_2003_3.log feedback=10000 buffer=100000000 tables=wo4,ok_yt

4. FILE and LOG: these two parameters specify the names of the dump file and of the log file, including file name and directory; see the examples above.

5. COMPRESS: this parameter does not compress the contents of the exported data; it controls how the STORAGE clause of the exported objects is generated. The default is Y; with the default, the INITIAL extent of each object is set equal to the sum of the object's current extents. COMPRESS=N is recommended, as in the sketch below.
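A minimal sketch of an export that preserves the original storage definition; the user name and file names reuse the earlier examples and are only placeholders:

exp userid=test/test file=./db_str.dmp log=./db_str.log owner=duanl compress=n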

6. FILESIZE: this option is available from 8i onward. If the exported dmp file would otherwise be too large, it is best to use the FILESIZE parameter; each file should be limited to no more than 2 GB. For example:

exp userid=duanl/duanl file=f1,f2,f3,f4,f5 filesize=2g owner=scott

This creates a series of files such as f1.dmp, f2.dmp, and so on, each 2 GB in size; if the total amount exported is less than 10 GB, exp does not need to create f5.dmp.
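A minimal sketch of importing from such a multi-file export (FILESIZE is typically specified again on import; the user names are placeholders):

imp userid=duanl/duanl file=(f1,f2,f3,f4,f5) filesize=2g fromuser=scott touser=scott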

Common imp options

1. FROMUSER and TOUSER: use them to import data from one schema into another. For example, suppose the export was taken from user test, and we now want to import the objects into user test1:

imp userid=test1/test1 file=expdat.dmp fromuser=test touser=test1

2. IGNORE, GRANTS, and INDEXES: the IGNORE parameter makes imp ignore the error raised when a table already exists and continue importing. This is useful when a table's storage parameters need adjusting: you can first create the table with the desired storage and then import the data directly into it. GRANTS and INDEXES indicate whether grants and indexes are imported; if you want to rebuild the indexes with new storage parameters, or simply to speed up the import, consider setting INDEXES=N, while GRANTS is generally left at Y. For example:

imp userid=test1/test1 file=expdat.dmp fromuser=test touser=test1 indexes=n

Transportable tablespaces

Transportable tablespaces, introduced in 8i, are a new way of moving data between databases. Instead of exporting the data into a dmp file, the formatted datafiles of one database are attached to another database. This is sometimes very useful, because moving data with transportable tablespaces can be as fast as copying the files.

There are some rules governing transportable tablespaces, namely:

· The source database and the target database must run on the same hardware platform.

· The source database and the target database must use the same character set.

· The source database and the target database must have the same data block size.

· The target database must not already contain a tablespace with the same name as the tablespace being migrated.

· Objects owned by SYS cannot be migrated.

· The set of objects being transported must be self-contained.

· Some objects, such as materialized views and function-based indexes, cannot be transported.

You can use the following method to check whether a tablespace or a set of tablespaces satisfies the transport criteria:

exec sys.dbms_tts.transport_set_check('tablespace_name', true);

select * from sys.transport_set_violations;

If no rows are returned, the tablespace contains only table data and is self-contained. Some tablespaces that are not self-contained on their own, such as a data tablespace and its corresponding index tablespace, can be transported together.

The following are brief usage steps; for details, refer to the Oracle online documentation.

1. Set the tablespaces to read only (assuming the tablespace names are app_data and app_index):

alter tablespace app_data read only;

alter tablespace app_index read only;

2. Issue the exp command:

SQL> host exp userid="""sys/password as sysdba"""

transport_tablespace=y tablespaces=(app_data,app_index)

Points to note:

· To run exp this way, userid must be wrapped in three pairs of double quotes; on UNIX you must also take care to escape the "/".

· In 8.1.6 and later you must connect as SYSDBA to perform this operation.

· This command must be placed on a single line in SQL*Plus (it is shown on two lines here only because of display width).

3. Copy the datafiles to the other host, i.e. to the target database.

You can use cp (UNIX) or copy (Windows), or transfer the files via FTP (it must be in binary mode).
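A minimal sketch of the copy, assuming UNIX and hypothetical datafile paths:

cp /u01/oradata/src/app_data01.dbf /u01/oradata/dest/
cp /u01/oradata/src/app_index01.dbf /u01/oradata/dest/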

4. Set the local tablespaces back to read write, as shown in the sketch below.
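A minimal sketch of this step, mirroring the commands of step 1:

alter tablespace app_data read write;
alter tablespace app_index read write;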

5. Attach the datafiles to the target database:

imp file=expdat.dmp userid="""sys/password as sysdba"""

transport_tablespace=y

datafiles=(c:\temp\app_data,c:\temp\app_index)

6. Set the tablespaces in the target database to read write:

alter tablespace app_data read write;

alter tablespace app_index read write;

Optimizing exp/imp:

When the amount of data to exp/imp is relatively large, the process takes a long time. We can use some of the following methods to speed up exp/imp.

exp: use the direct path, direct=y.

With direct path, Oracle bypasses the SQL statement processing engine, reads data directly from the database files, and writes it to the export file.

Whether the direct path was used can be seen in the export log; a message like the following indicates that a table was exported in the conventional path instead:

EXP-00067: Table xxx will be exported in conventional path

If the direct path cannot be used, make sure the value of the BUFFER parameter is large enough.

Some parameters are incompatible with direct=y: you cannot use direct path to export transportable tablespaces, and it cannot be combined with the QUERY parameter.

When the database that will import the data runs on a different OS, the value of the RECORDLENGTH parameter must be kept consistent.

imp: optimize in the following ways.

1. Avoid disk sorts

Set SORT_AREA_SIZE to a larger value, such as 100M (see the sketch below).
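A minimal sketch of raising the sort area, assuming an 8i/9i instance started from an init.ora pfile; the value is only illustrative:

alter system set sort_area_size = 104857600 deferred;   -- takes effect for sessions started afterwards

Alternatively, set sort_area_size = 104857600 in init.ora and restart the instance.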

2. Avoid waits on log switches

Increase the number of redo log groups and the size of the log files (see the sketch below).
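A minimal sketch of adding a larger redo log group; the group number, path, and size are illustrative assumptions:

alter database add logfile group 4 ('/u01/oradata/mydb/redo04.log') size 200M;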

3. Optimize the log buffer

For example, increase LOG_BUFFER to ten times its current size (but not beyond 5M); see the sketch below.
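LOG_BUFFER is a static parameter, so a minimal sketch (assuming an init.ora pfile) is simply to edit the file and restart the instance; the value is only illustrative:

# init.ora
log_buffer = 5242880   # about 5M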

4. Use array inserts and commits

commit=y

Note: the array mode cannot handle tables containing LOB and LONG columns; for such tables, if commit=y is used, a commit is issued after every single row is inserted.
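A minimal sketch combining array commits with a large buffer; the user names and file names are placeholders:

imp userid=test/test file=expdat.dmp log=imp.log fromuser=test touser=test commit=y buffer=10000000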

5. Use NOLOGGING mode to reduce the amount of redo generated

Specify indexes=n during the import so that only the data is imported and the indexes are skipped; after the data has been loaded, create the indexes with a script (see the sketch below).
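A minimal sketch of this approach, using the INDEXFILE parameter listed earlier to extract the index DDL; the file names are placeholders:

imp userid=test/test file=expdat.dmp fromuser=test touser=test indexes=n
imp userid=test/test file=expdat.dmp fromuser=test touser=test indexfile=create_idx.sql

The second run imports no data; it only writes the CREATE INDEX statements to create_idx.sql, which can then be edited (for example, adding NOLOGGING and new storage clauses) and executed in SQL*Plus.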

Export/Import and character sets

When exporting and importing data, we need to pay attention to character set issues. In the exp/imp process there are four character set settings to watch: the character set of the export client, the character set of the export database, the character set of the import client, and the character set of the import database.

First we need to look at these four character set settings.

Viewing the character set information of the database:

SQL> select * from nls_database_parameters;

PARAMETER                 VALUE
------------------------- ----------------------------------------
NLS_LANGUAGE              AMERICAN
NLS_TERRITORY             AMERICA
NLS_CURRENCY              $
NLS_ISO_CURRENCY          AMERICA
NLS_NUMERIC_CHARACTERS    .,
NLS_CHARACTERSET          ZHS16GBK
NLS_CALENDAR              GREGORIAN
NLS_DATE_FORMAT           DD-MON-RR
NLS_DATE_LANGUAGE         AMERICAN
NLS_SORT                  BINARY
NLS_TIME_FORMAT           HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT        HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT   DD-MON-RR HH.MI.SSXFF AM TZH:TZM
NLS_DUAL_CURRENCY         $
NLS_COMP                  BINARY
NLS_NCHAR_CHARACTERSET    ZHS16GBK
NLS_RDBMS_VERSION         8.1.7.4.1

NLS_CHARACTERSET = ZHS16GBK is the character set of the current database.

Next, let's look at the client's character set information.

The client character set is controlled by the parameter NLS_LANG = <language>_<territory>.<characterset>

Language: specifies the language used for Oracle messages and for day and month names in dates.

Territory: specifies the formats of currency and numbers, the territory, and the conventions used to calculate weeks and dates.

Characterset: controls the character set used by the client application. It is usually set to match the client's code page, or set to UTF8 for Unicode applications.

On Windows, NLS_LANG can be queried and modified in the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOMExx\

xx is the home number when there are multiple ORACLE_HOMEs on the system.

On UNIX:

$ env | grep NLS_LANG

NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK

It can be modified with:

$ export NLS_LANG=AMERICAN_AMERICA.UTF8

Usually, when exporting, the client character set should be set to be the same as the database character set. When importing the data, there are two main cases:

(1) The source database and the target database have the same character set.

In this case, simply set the NLS_LANG of both the export client and the import client equal to the database character set.

(2) The source database and the target database have different character sets.

First set the NLS_LANG of the export client to match the character set of the export database and export the data; then set the NLS_LANG of the import client to the same value as the export client and import the data. This way the character set conversion happens only on the import database side, and only once.

In this case, the data can be imported completely and correctly only if the character set of the import database is a strict superset of the character set of the export database; otherwise data may be lost or garbled. See the sketch below.
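A minimal sketch of the workflow for case (2), assuming the export database uses ZHS16GBK and the import database uses a superset such as UTF8; the user names and file names are placeholders:

# on the export side: client NLS_LANG matches the export database character set
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
exp userid=test/test file=exp_gbk.dmp log=exp_gbk.log owner=test

# on the import side: keep the same NLS_LANG as the export client,
# so the conversion to UTF8 happens once, inside the import database
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
imp userid=test/test file=exp_gbk.dmp log=imp_gbk.log fromuser=test touser=test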

Exp/imp issues between different versions

In general, importing from a lower version into a higher one is not a big problem; the trouble is importing data from a higher version into a lower one. Before Oracle9i, exp/imp between different Oracle versions could be handled with the following method:

1. Run the lower version's catexp.sql on the higher version database;

2. Use the lower version's exp to export the data from the higher version database;

3. Use the lower version's imp to import the data into the lower version database;

4. Run the higher version's catexp.sql script on the higher version database again.

But in 9i the above method no longer works. If you use a lower version exp directly, you get the following errors:

EXP-00008: ORACLE error %lu encountered

ORA-00904: invalid column name

This is a published bug that will not be fixed until Oracle 10.0; the bug number is 2261722, and you can check MetaLink for more information about it. A bug is a bug, but our work still has to get done, so until Oracle provides support we can work around it ourselves: execute the following SQL in Oracle9i to rebuild the EXU81RLS view.

create or replace view exu81rls
    (objown, objnam, policy, polown, polsch, polfun, stmts, chkopt, enabled, spolicy)
as select u.name, o.name, r.pname, r.pfschma, r.ppname, r.pfname,
          decode(bitand(r.stmt_type, 1), 0, '', 'SELECT,') ||
          decode(bitand(r.stmt_type, 2), 0, '', 'INSERT,') ||
          decode(bitand(r.stmt_type, 4), 0, '', 'UPDATE,') ||
          decode(bitand(r.stmt_type, 8), 0, '', 'DELETE,'),
          r.check_opt, r.enable_flag,
          decode(bitand(r.stmt_type, 16), 0, 0, 1)
   from user$ u, obj$ o, rls$ r
   where u.user# = o.owner#
     and r.obj# = o.obj#
     and (uid = 0 or
          uid = o.owner# or
          exists (select * from session_roles where role = 'SELECT_CATALOG_ROLE')
         )
/

grant select on sys.exu81rls to public;

You can use exp/imp across versions, but you must choose the exp and imp versions correctly:

1. Always use the imp version that matches the version of the target database; for example, to import into 817, use the 817 imp tool.

2. Always use the exp version that matches the lower of the two database versions; for example, when moving data from 9201 to 817, use the 817 version of exp (see the sketch below).
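A minimal sketch of that 9201-to-817 example; the Net8 connect strings ora92 and ora817 and the schema scott are illustrative assumptions:

# run from the 8.1.7 ORACLE_HOME, exporting over Net8 from the 9.2.0.1 database
exp userid=system/manager@ora92 file=exp_for_817.dmp log=exp_for_817.log owner=scott

# then import with the 8.1.7 imp into the 8.1.7 database
imp userid=system/manager@ora817 file=exp_for_817.dmp log=imp_817.log fromuser=scott touser=scott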

Please credit the original source when reprinting: https://www.9cbs.com/read-6947.html
