1. Use of Export / Import
The Oracle Export / Import tools are used to transfer data between databases.
EXPORT writes data from the database into a DUMP file.
IMPORT loads data from the DUMP file into the target database.
Their typical uses are:
(1) Transferring data between two databases:
- between the same version of Oracle Server
- between different versions of Oracle Server
- on the same OS
- across different OSes
(2) Backup and recovery of a database
(3) Transferring data from one SCHEMA to another SCHEMA
(4) Transferring data from one TABLESPACE to another TABLESPACE
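For instance, a minimal sketch of transferring one schema between two databases might look like this (the SCOTT account and file names here are placeholders, not part of the original text):
exp scott/tiger owner=scott file=scott.dmp log=exp_scott.log
imp scott/tiger fromuser=scott touser=scott file=scott.dmp log=imp_scott.log
The exported DUMP file is simply copied to the target machine between the two steps; since it is binary, use binary mode if transferring it by FTP.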
2. The DUMP File
EXPORT produces a binary-format file that must not be edited by hand; otherwise the data will be corrupted.
This file has the same format on every platform Oracle supports, so it can be moved freely between platforms.
DUMP files are upward compatible on import; that is, an Oracle7 DUMP file can be imported into Oracle8.
There may still be problems when the two versions differ greatly.
3. The Export / Import Process
An Export DUMP file contains two basic types of data:
- DDL (Data Definition Language)
- Data
The DUMP file contains all the DDL statements needed to recreate the exported objects, in a format that is basically readable.
Note, however, that you must not edit it with a text editor; Oracle does not support this.
The Oracle objects included in the DUMP file depend on the export mode: Table, User, or Full.
Some objects are exported only in Full mode (such as Public Synonyms, Users, Roles, Rollback Segments, etc.):
Table Mode              User Mode               Full Database Mode
----------------------  ----------------------  ----------------------
Table Definitions       Table Definitions       Table Definitions
Table Data              Table Data              Table Data
Owner's Table Grants    Owner's Grants          Grants
Owner's Table Indexes   Owner's Indexes         Indexes
Table Constraints       Table Constraints       Table Constraints
Table Triggers          Table Triggers          All Triggers
                        Clusters                Clusters
                        Database Links          Database Links
                        Job Queues              Job Queues
                        Refresh Groups          Refresh Groups
                        Sequences               Sequences
                        Snapshots               Snapshots
                        Snapshot Logs           Snapshot Logs
                        Stored Procedures       Stored Procedures
                        Private Synonyms        All Synonyms
                        Views                   Views
                                                Profiles
                                                Replication Catalog
                                                Resource Costs
                                                Roles
                                                Rollback Segments
                                                System Audit Options
                                                System Privileges
                                                Tablespace Definitions
                                                Tablespace Quotas
                                                User Definitions
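As an illustration, the three modes correspond to command lines like the following (user names and file names are placeholders):
Table mode: exp scott/tiger tables=(emp,dept) file=tables.dmp
User mode: exp scott/tiger owner=scott file=user.dmp
Full database mode: exp system/manager full=y file=full.dmp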
4. Order in Which Objects Are Imported
When importing data, Oracle follows a specific order. It may vary with the database version, but it is currently as follows:
1. Tablespaces
2. Profiles
3. Users
4. Roles
5. System Privilege Grants
6. Role Grants
7. Default Roles
8. Tablespace Quotas
9. Resource Costs
10. Rollback Segments
11. Database Links
12. Sequences
13. Snapshots
14. Snapshot Logs
15. Job Queues
16. Refresh Groups
17. Cluster Definitions
18. Tables (also Grants, Comments, Indexes, Constraints, Auditing)
19. Referential Integrity
20. POST-TABLES actions
21. Synonyms
22. Views
23. Stored Procedures
24. Triggers, Defaults, and Auditing
This order is mainly intended to avoid problems that could otherwise arise between objects. Triggers are imported last so that they do not fire while rows are being inserted. After an import, some procedures may be left in an INVALID state: the import can affect database objects that a procedure depends on, and IMPORT does not recompile procedures, which is what produces this situation. Recompiling the affected procedures resolves the issue.
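A hedged sketch of finding and recompiling such procedures after an import (the procedure name is a placeholder):
SQL> SELECT object_name FROM user_objects
     WHERE object_type = 'PROCEDURE' AND status = 'INVALID';
SQL> ALTER PROCEDURE my_proc COMPILE;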
5. Compatibility Issues
The Import tool can handle DUMP files produced by Export version 5.1.22 and later, so Oracle7's Import can process an Oracle6 DUMP file, and so on. However, the further apart the two versions are, the more likely problems become. For specifics, refer to the corresponding documentation, for example the related parameter settings (such as the COMPATIBLE parameter).
6. Views Required by Export
The views required by Export are created by catexp.sql; Export uses them to organize the data format of the DUMP file.
Most of these views are used to collect the DDL for the CREATE statements; the others are mainly used by Oracle's own developers.
These views may differ between Oracle versions, since each version may add new features, so processing an old DUMP file under a new version can produce errors. Running catexp.sql generally resolves this. The general steps for resolving such backward-compatibility problems are as follows:
When the exporting database is older than the target database:
- Run the old catexp.sql in the target database that will receive the import
- Export the DUMP file using the old Export
- Import into the database using the old Import
- Run the new catexp.sql in the database to restore that version's Export views
When the exporting database is newer than the target database:
- Run the new catexp.sql in the target database that will receive the import
- Export the DUMP file using the new Export
- Import into the database using the new Import
- Run the old catexp.sql in the database to restore that version's Export views
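For reference, catexp.sql is normally run as a DBA user from SQL*Plus; on a typical installation it lives under the Oracle home (the path below is the usual location, so verify it on your system):
SQL> @$ORACLE_HOME/rdbms/admin/catexp.sql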
7. Defragmentation
EXPORT / IMPORT is a very useful application for defragmenting tables: if during import you recreate the table and reload its data, the entire table is stored contiguously. However, by default EXPORT "compresses" the table when generating the DUMP file, and this compression is widely misunderstood. What COMPRESS actually does is change the value of the INITIAL storage parameter. For example:
Create Table .... Storage (Initial 10K Next 10K ..)
Suppose the data has since grown to 100 extents. If COMPRESS=Y is used to export the data, the generated statement will contain Storage (Initial 1000K Next 10K).
Notice that the NEXT value is unchanged, while INITIAL has become the sum of all the extents. This leads to situations like the following: Table A has four 100M extents; you execute DELETE FROM A and then export with COMPRESS=Y. The resulting CREATE TABLE statement will ask for a 400M initial extent, even though the table holds no data!! So although the DUMP file is small, IMPORT will create a huge table.
In addition, the initial extent may exceed the size of the data files. For example, suppose there are four 50M data files and Table A has fifteen 10M extents: with COMPRESS=Y you get INITIAL=150M. At import time that 150M extent cannot be allocated, because a single extent cannot span multiple data files.
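A minimal sketch of defragmenting one table while avoiding this pitfall, by exporting with COMPRESS=N (the table, account, and file names are placeholders):
SQL> SELECT segment_name, extents FROM user_segments WHERE segment_name = 'A';
exp scott/tiger tables=A compress=n file=a.dmp log=exp_a.log
SQL> DROP TABLE a;
imp scott/tiger tables=A file=a.dmp log=imp_a.log
The reloaded table is stored contiguously while keeping its original storage parameters.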
8. Transferring Data Between Users and Between Tablespaces
Under normal circumstances, exported data is restored to where it came from. If the SCOTT user's tables were exported in Table or User mode and the SCOTT user does not exist at IMPORT time, an error is reported!
Data exported in Full mode carries the CREATE USER information, so the users will be created to hold their data.
Of course, you can use the FROMUSER and TOUSER parameters of IMPORT to choose the user to import into, but note that the TOUSER user must already exist.
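For example, a sketch of moving SCOTT's objects into another existing schema (the BLAKE account and file names here are placeholders):
imp system/manager fromuser=scott touser=blake file=scott.dmp log=imp_blake.log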
9. Effects of Export / Import on Sequences
Export / Import can affect sequences in two cases.
(1) If users are drawing values from a sequence while the Export is running, the exported sequence may end up inconsistent with the table data.
(2) If the sequence uses a CACHE, the values held in the cache are ignored during Export; only the current value recorded in the data dictionary is exported.
If a column in a table is populated from the sequence, either case can leave the sequence out of step with the data.
The export path also matters. With the regular (conventional) path, each row is handled by an INSERT statement, so consistency checks and INSERT triggers apply. With direct mode, some constraints and triggers may not fire; if a trigger uses sequence.nextval, this will affect the sequence.
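A hedged sketch of checking whether a sequence has drifted from the column it populates (the sequence, table, and column names are placeholders):
SQL> SELECT last_number FROM user_sequences WHERE sequence_name = 'EMP_SEQ';
SQL> SELECT MAX(empno) FROM emp;
If the maximum column value exceeds the sequence's last_number, the sequence must be advanced before further inserts.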
Appendix: parameter descriptions
E:\> exp help=y
You can let EXP prompt you for the parameters by entering the EXP command followed by your username/password:
Example: EXP SCOTT/TIGER
Alternatively, you can control how Export runs by entering the EXP command followed by various arguments.
To specify parameters, you use keywords:
Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Keyword        Description (Default)
---------------------------------------------------
USERID         username/password
FULL           export entire file (N)
BUFFER         size of data buffer
OWNER          list of owner usernames
FILE           output file (EXPDAT.DMP)
TABLES         list of table names
COMPRESS       import into one extent (Y)
RECORDLENGTH   length of IO record
GRANTS         export grants (Y)
INCTYPE        incremental export type
INDEXES        export indexes (Y)
RECORD         track incremental export (Y)
ROWS           export data rows (Y)
PARFILE        parameter file name
CONSTRAINTS    export constraints (Y)
CONSISTENT     cross-table consistency
LOG            log file of screen output
STATISTICS     analyze objects (ESTIMATE)
DIRECT         direct path (N)
TRIGGERS       export triggers (Y)
FEEDBACK       show progress every x rows (0)
FILESIZE       maximum size of each dump file
QUERY          SELECT clause used to export a subset of a table
The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE   export transportable tablespace metadata (N)
TABLESPACES            list of tablespaces to transport
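As an example of one of the less obvious parameters, QUERY exports only the rows matching a WHERE clause. Because command-line quoting rules vary by shell, it is easiest to put it in a parameter file (the file and table names below are placeholders):
exp scott/tiger parfile=emp10.par
where emp10.par contains:
tables=emp
query="where deptno=10"
file=emp10.dmp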
E:\> imp help=y
You can let IMP prompt you for the parameters by entering the IMP command followed by your username/password:
Example: IMP SCOTT/TIGER
Alternatively, you can control how Import runs by entering the IMP command followed by various arguments.
To specify parameters, you use keywords:
Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Keyword        Description (Default)
----------------------------------------------
USERID         username/password
FULL           import entire file (N)
BUFFER         size of data buffer
FROMUSER       list of owner usernames
FILE           input file (EXPDAT.DMP)
TOUSER         list of usernames
SHOW           just list file contents (N)
TABLES         list of table names
IGNORE         ignore create errors (N)
RECORDLENGTH   length of IO record
GRANTS         import grants (Y)
INCTYPE        incremental import type
INDEXES        import indexes (Y)
COMMIT         commit array insert (N)
ROWS           import data rows (Y)
PARFILE        parameter file name
LOG            log file of screen output
CONSTRAINTS    import constraints (Y)
DESTROY        overwrite tablespace data file (N)
INDEXFILE      write table/index info to the specified file
SKIP_UNUSABLE_INDEXES   skip maintenance of unusable indexes (N)
ANALYZE        execute ANALYZE statements in the dump file (Y)
FEEDBACK       show progress every x rows (0)
TOID_NOVALIDATE         skip validation of specified type IDs
FILESIZE       maximum size of each dump file
RECALCULATE_STATISTICS  recalculate statistics (N)
The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE   import transportable tablespace metadata (N)
TABLESPACES            tablespaces to be transported into the database
DATAFILES              data files to be transported into the database
TTS_OWNERS             users that own data in the transportable tablespace set
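One handy use of these parameters: INDEXFILE makes IMP write the CREATE statements to a script without actually importing anything, which is a common way to inspect a DUMP file (the file names are placeholders):
imp scott/tiger file=scott.dmp indexfile=scott.sql full=y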
Supplement:
1 Table mode
1) Back up some of one user's tables
exp icdmain/icd rows=y indexes=n compress=n buffer=65536 feedback=100000 volsize=0 file=exp_icdmain_table_yyyymmdd.dmp log=exp_icdmain_table_yyyymmdd.log tables=icdmain.commoninformation,icdmain.serviceinfo,icdmain.dealinfo
2) Recover all tables from the backup
imp icdmain/icd fromuser=icdmain touser=icdmain rows=y indexes=n commit=y buffer=65536 feedback=100000 ignore=y volsize=0 file=exp_icdmain_table_yyyymmdd.dmp log=imp_icdmain_table_yyyymmdd.log
3) Recover some of the backed-up tables
imp icdmain/icd fromuser=icdmain touser=icdmain rows=y indexes=n commit=y buffer=65536 feedback=100000 ignore=y volsize=0 file=exp_icdmain_table_yyyymmdd.dmp log=imp_icdmain_table_yyyymmdd.log tables=commoninformation,serviceinfo
2 User mode
1) Back up all of one user's objects
exp icdmain/icd rows=y indexes=n compress=n buffer=65536 feedback=100000 volsize=0 owner=icdmain file=exp_icdmain_user_yyyymmdd.dmp log=exp_icdmain_user_yyyymmdd.log
2) Recover all of the user's objects
imp icdmain/icd fromuser=icdmain touser=icdmain rows=y indexes=n commit=y buffer=65536 feedback=100000 ignore=y volsize=0 file=exp_icdmain_user_yyyymmdd.dmp log=imp_icdmain_user_yyyymmdd.log
3) Recover some tables of the user's objects
imp icdmain/icd fromuser=icdmain touser=icdmain rows=y indexes=n commit=y buffer=65536 feedback=100000 ignore=y volsize=0 file=exp_icdmain_user_yyyymmdd.dmp log=imp_icdmain_user_yyyymmdd.log tables=commoninformation,serviceinfo
3 Full mode
1) Complete backup of the entire database
exp system/manager rows=y indexes=n compress=n buffer=65536 feedback=100000 volsize=0 full=y inctype=complete file=exp_fulldb_yyyymmdd.dmp log=exp_fulldb_yyyymmdd.log
2) Incremental backup of the entire database
exp system/manager rows=y indexes=n compress=n buffer=65536 feedback=100000 volsize=0 full=y inctype=incremental file=exp_fulldb_zl_yyyymmdd.dmp log=exp_fulldb_zl_yyyymmdd.log
3) Restore all data from the complete backup
imp system/manager rows=y indexes=n commit=y buffer=65536 feedback=100000 ignore=y volsize=0 full=y file=exp_fulldb_yyyymmdd.dmp log=imp_fulldb_yyyymmdd.log
4) Restore all data from the incremental backup
imp system/manager rows=y indexes=n commit=y buffer=65536 feedback=100000 ignore=y volsize=0 full=y inctype=restore file=exp_fulldb_zl_yyyymmdd.dmp log=imp_fulldb_zl_yyyymmdd.log