Oracle common command



Database system startup
1. Start the listener and the intelligent agent, and launch the console:
[oracle@owbsrv oracle]$ cd
[oracle@owbsrv /]$ cd $ORACLE_HOME/bin
[oracle@owbsrv bin]$ ls
[oracle@owbsrv bin]$ lsnrctl start
[oracle@owbsrv bin]$ agentctl start
[oracle@owbsrv bin]$ oemapp console

[oracle@owbsrv bin]$ netca
2. Ways to start and shut down the database. First connect:
SQL> conn sys/pwd@wh as sysdba
After connecting, the database can be shut down in four ways:
SQL> shutdown normal        -- waits for all users to disconnect
SQL> shutdown immediate     -- waits for users to finish their current statements
SQL> shutdown transactional -- waits for users to finish their current transactions
SQL> shutdown abort         -- shuts down at once, with no cleanup
The database is started with the startup command, in one of three ways:
(1) startup with no arguments starts the instance and opens the database for users; this is the usual case.
(2) startup nomount starts the instance without mounting the database; used when creating a new database or when only the instance is needed.
(3) startup mount starts the instance and mounts the database without opening it; used for maintenance changes such as renaming data files, after which the database can be opened for use.

Database system objects
1. Change the SYSTEM user's default tablespace and temporary tablespace:
svrmgrl
connect internal/oracle
alter user system default tablespace tools;
alter user system temporary tablespace temp;
exit

Data dictionary questions:
1. Does Oracle have system tables that store all users of the database and all tables?
2. How do I get all users of a database?
3. How do I get all tables in a database?
4. How do I get all tables owned by a user?
5. How do I get all columns of a table and their data types?
Answers:
1. Yes.
2. dba_users, all_users
3. dba_tables / dba_objects
4. all_tables / all_objects, user_tables / user_objects
5. dba_tab_columns, all_tab_columns, user_tab_columns
In general, data dictionary views beginning with dba_ cover the entire database, with an OWNER column to distinguish owners; views beginning with all_ cover everything the current user can access, including objects created by other users; views beginning with user_ cover only the objects created by the current user.

In the data dictionary views beginning with dba_ and all_, add where owner = 'USERNAME' (the username in uppercase) to get the information you want. To exclude the SYS and SYSTEM users, add where owner <> 'SYS' and owner <> 'SYSTEM'.

Even so, the result will usually show more than the users you created yourself, because the system creates a number of users of its own.

Alternatively:
SQL> select username, to_char(created, 'YYYY/MM/DD HH24:MI:SS') from all_users order by created;

Looking at the creation times makes it easy to tell them apart: the users Oracle creates for itself are all created within the first few minutes to few dozen minutes of the database's life, starting with SYS and SYSTEM.

Database performance adjustment

1. Defragmentation: examining free space
---- Since free-space fragmentation is described by several quantities, such as the number of extents and the maximum extent size, we can express it with a single value, FSFI (Free Space Fragmentation Index):
FSFI = 100 * sqrt(max(extent) / sum(extents)) * 1 / sqrt(sqrt(count(extents)))
---- The maximum possible FSFI value is 100 (an ideal single-extent tablespace). As the number of extents grows, the FSFI value falls slowly; as the maximum extent size shrinks, it falls quickly.
---- The following script computes FSFI values:
rem FSFI Value Compute
rem fsfi.sql
column FSFI format 999.99
select tablespace_name,
       sqrt(max(blocks)/sum(blocks)) * (100/sqrt(sqrt(count(blocks)))) FSFI
from dba_free_space
group by tablespace_name
order by 1;
spool fsfi.rep;
/
spool off;
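As a sanity check on the formula, the FSFI arithmetic can be reproduced outside the database. This is only a sketch; the extent-size lists below are made-up sample data, not taken from any real tablespace:

```python
import math

def fsfi(extents):
    """Free Space Fragmentation Index of a tablespace's free extents.

    FSFI = 100 * sqrt(max/sum) * 1/sqrt(sqrt(count)), the same formula
    the fsfi.sql script computes from dba_free_space.
    """
    return (100 * math.sqrt(max(extents) / sum(extents))
            / math.sqrt(math.sqrt(len(extents))))

# An ideal tablespace whose free space is one single extent scores 100.
print(fsfi([1000]))                      # 100.0

# Many small extents and a shrinking largest extent drive the score down.
print(round(fsfi([100] * 10 + [50] * 40), 2))
```

Note how the fragmented case falls well below the 30 threshold discussed below, signalling that defragmentation is needed.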

---- Compute the FSFI values for the whole database and use them as a baseline for comparison. In a tablespace with enough effective free space and an FSFI value above 30, effective free space is rarely a problem; when a tablespace approaches that threshold, it needs defragmenting. 2. Defragmenting free space ---- (1) Make the tablespace's pctincrease value non-zero. The tablespace's default storage parameters can be changed so that pctincrease is non-zero, typically 1, for example: alter tablespace temp default storage (pctincrease 1); This way SMON automatically coalesces adjacent free extents. Free extents can also be coalesced manually: alter tablespace temp coalesce; 3. Defragmenting segments ---- As we know, a segment is made up of extents. In some cases it is necessary to defragment a segment. Segment information is in the data dictionary view dba_segments, extent information in dba_extents. If a segment has too many extents, the simplest way to compress its data into a single extent is to rebuild the segment with the correct storage parameters: insert the old table's data into a new table, then delete the old table. This can be done with the Import/Export tools. ---- The Export utility has a compress flag; when set, Export records the total physical space allocated to each table it reads and writes a new initial-extent storage parameter, equal to all of the allocated space, into the dump file. If the table is then dropped, the Import utility re-creates it so that its data lands in a single new, large initial extent.
For example:
exp user/password file=exp.dmp compress=y grants=y indexes=y tables=(table1,table2)
---- If the export succeeds, drop the exported tables from the database, then bring them back in from the dump file:
imp user/password file=exp.dmp commit=y buffer=64000 full=y
---- The same method can be applied to the whole database.

Database backup and recovery
1. View the location of the data files:

SQL> select file#, status, enabled, name from v$datafile;

2. View the control files:

SQL> select * from v$controlfile;

3. View the online redo logs:

SQL> select * from v$logfile;

4. View the archive-log mode, archive destination, and log sequence information of the running database:

SQL> archive log list
Database log mode            Archive Mode
Automatic archival           Enabled
Archive destination          /ORA_ARCH/arch
Oldest online log sequence   1
Next log sequence to archive 3
Current log sequence         3

5. Put the database into archive-log mode:

SQL> alter database archivelog;

Log management (to be revised)



1. Forcing log switches:
SQL> alter system switch logfile;

2. Forcing checkpoints:
SQL> alter system checkpoint;

3. Adding online redo log groups:
SQL> alter database add logfile [group 4]
SQL> ('/disk3/log4a.rdo','/disk4/log4b.rdo') size 1m;

4. Adding online redo log members:
SQL> alter database add logfile member
SQL> '/disk3/log1b.rdo' to group 1,
SQL> '/disk4/log2b.rdo' to group 2;

5. Changing the name of an online redo log file:
SQL> alter database rename file 'c:/oracle/oradata/oradb/redo01.log'
SQL> to 'c:/oracle/oradata/redo01.log';

6. Dropping online redo log groups:
SQL> alter database drop logfile group 3;

7. Dropping online redo log members:
SQL> alter database drop logfile member 'c:/oracle/oradata/redo01.log';

8. Clearing online redo log files:
SQL> alter database clear [unarchived] logfile 'c:/oracle/log2a.rdo';

9. Using LogMiner to analyze redo log files:

a. In init.ora, specify utl_file_dir = ''
b. SQL> execute dbms_logmnr_d.build('oradb.ora', 'c:/oracle/oradb/log');
c. SQL> execute dbms_logmnr.add_logfile('c:/oracle/oradata/oradb/redo01.log',
   SQL> dbms_logmnr.new);
d. SQL> execute dbms_logmnr.add_logfile('c:/oracle/oradata/oradb/redo02.log',
   SQL> dbms_logmnr.addfile);
e. SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'c:/oracle/oradb/log/oradb.ora');
f. SQL> select * from v$logmnr_contents; (see also v$logmnr_dictionary, v$logmnr_parameters,
   SQL> v$logmnr_logs)
g. SQL> execute dbms_logmnr.end_logmnr;

Tablespace management

1. Creating tablespaces:
SQL> create tablespace tablespace_name datafile 'c:/oracle/oradata/file1.dbf' size 100m,
SQL> 'c:/oracle/oradata/file2.dbf' size 100m minimum extent 550k [logging/nologging]
SQL> default storage (initial 500k next 500k maxextents 500 pctincrease 0)
SQL> [online/offline] [permanent/temporary] [extent_management_clause]

2. Locally managed tablespaces:
SQL> create tablespace user_data datafile 'c:/oracle/oradata/user_data01.dbf'
SQL> size 500m extent management local uniform size 10m;

3. Temporary tablespaces:
SQL> create temporary tablespace temp tempfile 'c:/oracle/oradata/temp01.dbf'
SQL> size 500m extent management local uniform size 10m;

4. Changing storage settings:
SQL> alter tablespace app_data minimum extent 2m;
SQL> alter tablespace app_data default storage (initial 2m next 2m maxextents 999);

5. Taking a tablespace offline or online:
SQL> alter tablespace app_data offline;
SQL> alter tablespace app_data online;

6. Read-only tablespaces:
SQL> alter tablespace app_data read only;
SQL> alter tablespace app_data read write;

7. Dropping tablespaces:
SQL> drop tablespace app_data including contents;
8. Enabling automatic extension of data files:
SQL> alter tablespace app_data add datafile 'c:/oracle/oradata/app_data01.dbf' size 200m
SQL> autoextend on next 10m maxsize 500m;

9. Changing the size of data files manually:
SQL> alter database datafile 'c:/oracle/oradata/app_data.dbf' resize 200m;

10. Moving data files: alter tablespace
SQL> alter tablespace app_data rename datafile 'c:/oracle/oradata/app_data.dbf'
SQL> to 'c:/oracle/app_data.dbf';

11. Moving data files: alter database
SQL> alter database rename file 'c:/oracle/oradata/app_data.dbf'
SQL> to 'c:/oracle/app_data.dbf';

Table management

1. Creating a table:
SQL> create table table_name (column datatype, column datatype, ...)
SQL> tablespace tablespace_name [pctfree integer] [pctused integer]
SQL> [initrans integer] [maxtrans integer]
SQL> storage (initial 200k next 200k pctincrease 0 maxextents 50)
SQL> [logging|nologging] [cache|nocache]

2. Copying an existing table:
SQL> create table table_name [logging|nologging] as subquery

3. Creating a temporary table:
SQL> create global temporary table xay_temp as select * from xay;
-- on commit preserve rows / on commit delete rows

4. Block utilization formulas:
pctfree = (average row size - initial row size) * 100 / average row size
pctused = 100 - pctfree - (average row size * 100 / available data space)
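The pctfree/pctused arithmetic above can be sketched in a few lines; the row sizes and per-block space below are hypothetical sample values, not Oracle defaults:

```python
def pctfree(avg_row_size, initial_row_size):
    # Space reserved in each block for rows to grow from their initial
    # size toward their average size.
    return (avg_row_size - initial_row_size) * 100 / avg_row_size

def pctused(avg_row_size, available_data_space, pctfree_value):
    # Threshold below which a block goes back on the free list.
    return 100 - pctfree_value - avg_row_size * 100 / available_data_space

# Hypothetical case: rows grow from 90 to 120 bytes, with roughly
# 8000 bytes of usable data space per block.
pf = pctfree(120, 90)
pu = pctused(120, 8000, pf)
print(pf, round(pu, 1))                  # 25.0 73.5
```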

5. Changing storage and block utilization parameters:
SQL> alter table table_name pctfree 30 pctused 50 storage (next 500k
SQL> minextents 2 maxextents 100);

6. Manually allocating extents:
SQL> alter table table_name allocate extent (size 500k datafile 'c:/oracle/data.dbf');

7. Moving a table to another tablespace:
SQL> alter table employee move tablespace tablespace_name;
8. Deallocating unused space:
SQL> alter table table_name deallocate unused [keep integer];

9. Truncating a table:
SQL> truncate table table_name;

10. Dropping a table:
SQL> drop table table_name [cascade constraints];

11. Dropping a column:
SQL> alter table table_name drop column comments cascade constraints checkpoint 1000;
SQL> alter table table_name drop columns continue;

12. Marking a column as unused:
SQL> alter table table_name set unused column comments cascade constraints;
SQL> alter table table_name drop unused columns checkpoint 1000;
SQL> alter table orders drop columns continue checkpoint 1000;
Data dictionary view: dba_unused_col_tabs

Index management

1. Creating function-based indexes:
SQL> create index summit.item_quantity on summit.item (quantity - quantity_shipped);

2. Creating a B-tree index:
SQL> create [unique] index index_name on table_name (column [asc/desc], ...) tablespace
SQL> tablespace_name [pctfree integer] [initrans integer] [maxtrans integer]
SQL> [logging|nologging] [nosort] storage (initial 200k next 200k pctincrease 0
SQL> maxextents 50);

3. pctfree (index) = (maximum number of rows - initial number of rows) * 100 / maximum number of rows
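The index pctfree formula above can be checked with a quick calculation; the row counts here are hypothetical sample values:

```python
def pctfree_index(max_rows, initial_rows):
    # Fraction of each index block left free for entries that arrive
    # after the index is created.
    return (max_rows - initial_rows) * 100 / max_rows

# Hypothetical: 700 rows at index creation, 1000 expected at peak.
print(pctfree_index(1000, 700))          # 30.0
```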

4. Creating reverse-key indexes:
SQL> create unique index xay_id on xay (a) reverse pctfree 30 storage (initial 200k
SQL> next 200k pctincrease 0 maxextents 50) tablespace indx;

5. Creating a bitmap index:
SQL> create bitmap index xay_id on xay (a) pctfree 30 storage (initial 200k next 200k
SQL> pctincrease 0 maxextents 50) tablespace indx;

6. Changing storage parameters of an index:
SQL> alter index xay_id storage (next 400k maxextents 100);

7. Allocating index space:
SQL> alter index xay_id allocate extent (size 200k datafile 'c:/oracle/index.dbf');
8. Deallocating unused index space:
SQL> alter index xay_id deallocate unused;

Constraints

1. Defining constraints as immediate or deferred:
SQL> alter session set constraint[s] = immediate/deferred/default;
SQL> set constraint[s] constraint_name/all immediate/deferred;

2. SQL> drop table table_name cascade constraints;
SQL> drop tablespace tablespace_name including contents cascade constraints;

3. Defining constraints while creating a table:
SQL> create table xay (id number(7) constraint xay_id primary key deferrable
SQL> using index storage (initial 100k next 100k) tablespace indx);
-- primary key / unique / references table(column) / check

4. Enabling constraints without validating existing rows:
SQL> alter table xay enable novalidate constraint xay_id;

5. Enabling constraints with validation:
SQL> alter table xay enable validate constraint xay_id;

Loading data

1. Loading data using direct-load insert:
SQL> insert /*+ append */ into emp nologging
SQL> select * from emp_old;

2. Parallel direct-load insert:
SQL> alter session enable parallel dml;
SQL> insert /*+ parallel(emp,2) */ into emp nologging
SQL> select * from emp_old;

3. Using SQL*Loader (from the operating system shell):
$ sqlldr scott/tiger control=ulcase6.ctl log=ulcase6.log direct=true

Reorganizing data

1. Using Export:
$ exp scott/tiger tables=(dept,emp) file=c:/emp.dmp log=exp.log compress=n direct=y

2. Using Import:
$ imp scott/tiger tables=(dept,emp) file=emp.dmp log=imp.log ignore=y

3. Transporting a tablespace:
SQL> alter tablespace sales_ts read only;
$ exp sys/.. file=xay.dmp transport_tablespace=y tablespaces=sales_ts
  triggers=n constraints=n
$ copy datafile
$ imp sys/.. file=xay.dmp transport_tablespace=y
  datafiles=(/disk1/sles01.dbf,/disk2/sles02.dbf)
SQL> alter tablespace sales_ts read write;
4. Checking the transport set:
SQL> dbms_tts.transport_set_check(ts_list => 'sales_ts', incl_constraints => true);
If dbms_tts.isselfcontained is true and the view transport_set_violations is empty, the tablespace set is self-contained.

Managing password security and resources

1. Controlling account lock and password:
SQL> alter user juncky identified by oracle account unlock;

2. User-provided password verify function; required signature:
function_name(userid in varchar2(30), password in varchar2(30),
old_password in varchar2(30)) return boolean

3. Creating a profile: password settings:
SQL> create profile grace_5 limit failed_login_attempts 3
SQL> password_lock_time unlimited password_life_time 30
SQL> password_reuse_time 30 password_verify_function verify_function
SQL> password_grace_time 5;

4. Altering a profile:
SQL> alter profile default limit failed_login_attempts 3
SQL> password_life_time 60 password_grace_time 10;

5. Dropping a profile:
SQL> drop profile grace_5 [cascade];

6. Creating a profile: resource limits:
SQL> create profile developer_prof limit sessions_per_user 2
SQL> cpu_per_session 10000 idle_time 60 connect_time 480;

7. Views => resource_cost (see alter resource cost), dba_users, dba_profiles

8. Enabling resource limits:
SQL> alter system set resource_limit = true;

Managing users

1. Creating a user: database authentication:
SQL> create user juncky identified by oracle default tablespace users
SQL> temporary tablespace temp quota 10m on data password expire
SQL> [account lock|unlock] [profile profilename|default];
2. Changing a user's quota on a tablespace:
SQL> alter user juncky quota 0 on users;

3. Dropping a user:
SQL> drop user juncky [cascade];

4. Monitoring users. Views: dba_users, dba_ts_quotas

Managing privileges

1. System privileges. Views => system_privilege_map, dba_sys_privs, session_privs

2. Granting system privileges:
SQL> grant create session, create table to managers;
SQL> grant create session to scott with admin option;
With admin option allows the grantee to grant or revoke the privilege from any user or role.

3. SYSDBA and SYSOPER privileges:
sysoper: startup, shutdown, alter database open|mount, alter database backup controlfile,
alter tablespace begin/end backup, recover database,
alter database archivelog, restricted session
sysdba: sysoper privileges with admin option, create database, recover database until

4. Password file members. View => v$pwfile_users

5. The parameter O7_DICTIONARY_ACCESSIBILITY controls whether system privileges allow access to views and tables in other schemas, including the data dictionary.

6. Revoking system privileges:
SQL> revoke create table from karen;
SQL> revoke create session from scott;

7. Granting object privileges:
SQL> grant execute on dbms_pipe to public;
SQL> grant update (first_name, salary) on employee to karen with grant option;

8. Displaying object privileges. Views => dba_tab_privs, dba_col_privs

9. Revoking object privileges:
SQL> revoke execute on dbms_pipe from scott [cascade constraints];

10. Audit records. View => sys.aud$

11. Protecting the audit trail:
SQL> audit delete on sys.aud$ by access;

12. Statement auditing:
SQL> audit user;
13. Privilege auditing:
SQL> audit select any table by summit by access;

14. Schema object auditing:
SQL> audit lock on summit.employee by access whenever successful;

15. Viewing audit options. Views => all_def_audit_opts, dba_stmt_audit_opts, dba_priv_audit_opts, dba_obj_audit_opts

16. Viewing audit results. Views => dba_audit_trail, dba_audit_exists, dba_audit_object, dba_audit_session, dba_audit_statement

Managing roles

1. Creating roles:
SQL> create role sales_clerk;
SQL> create role hr_clerk identified by bonus;
SQL> create role hr_manager identified externally;

2. Modifying roles:
SQL> alter role sales_clerk identified by commission;
SQL> alter role hr_clerk identified externally;
SQL> alter role hr_manager not identified;

3. Assigning roles:
SQL> grant sales_clerk to scott;
SQL> grant hr_clerk to hr_manager;
SQL> grant hr_manager to scott with admin option;

4. Establishing default roles:
SQL> alter user scott default role hr_clerk, sales_clerk;
SQL> alter user scott default role all;
SQL> alter user scott default role all except hr_clerk;
SQL> alter user scott default role none;

5. Enabling and disabling roles:
SQL> set role hr_clerk;
SQL> set role sales_clerk identified by commission;
SQL> set role all except sales_clerk;
SQL> set role none;

6. Revoking roles from users:
SQL> revoke sales_clerk from scott;
SQL> revoke hr_manager from public;

7. Removing roles:
SQL> drop role hr_manager;

8. Displaying role information. Views => dba_roles, dba_role_privs, role_role_privs, dba_sys_privs, role_sys_privs, role_tab_privs, session_roles

Backup and recovery

1. Views: v$sga, v$instance, v$process, v$bgprocess, v$database, v$datafile, v$sgastat

2. RMAN needs dbwr_io_slaves or backup_tape_io_slaves and large_pool_size to be set.
3. Monitoring parallel rollback. Views => v$fast_start_servers, v$fast_start_transactions

4. Performing a closed database backup:
> shutdown immediate
> cp files /backup/
> startup

5. Restoring to a different location:
> connect system/manager as sysdba
> startup mount
> alter database rename file '/disk1/./user.dbf' to '/disk2/../user.dbf';
> alter database open;

6. Recover syntax:
-- recovering a mounted database
> recover database;
> recover datafile '/disk1/data/df2.dbf';
> alter database recover database;
-- recovering an opened database
> recover tablespace user_data;
> recover datafile 2;
> alter database recover datafile 2;

7. Applying redo log files automatically:
> set autorecovery on
> recover automatic datafile 4;

8. Complete recovery:
-- method 1 (mounted database)
> copy c:/backup/user.dbf c:/oradata/user.dbf
> startup mount
> recover datafile 'c:/oradata/user.dbf';
> alter database open;
-- method 2 (opened database, initially opened, not a system or rollback datafile)
> copy c:/backup/user.dbf c:/oradata/user.dbf (after alter tablespace offline)
> recover datafile 'c:/oradata/user.dbf' or
> recover tablespace user_data;
> alter database datafile 'c:/oradata/user.dbf' online or
> alter tablespace user_data online;
-- method 3 (opened database, initially closed, not a system or rollback datafile)
> startup mount
> alter database datafile 'c:/oradata/user.dbf' offline;
> alter database open;
> copy c:/backup/user.dbf d:/oradata/user.dbf
> alter database rename file 'c:/oradata/user.dbf' to 'd:/oradata/user.dbf';
> recover datafile 'd:/oradata/user.dbf' or recover tablespace user_data;
> alter tablespace user_data online;
-- method 4 (loss of a datafile with no backup, with all archived logs available)
> alter tablespace user_data offline immediate;
> alter database create datafile 'd:/oradata/user.dbf' as 'c:/oradata/user.dbf';
> recover tablespace user_data;
> alter tablespace user_data online;
5. Performing an open database backup:
> alter tablespace user_data begin backup;
> copy files /backup/
> alter database datafile '/c:/../data.dbf' end backup;
> alter system switch logfile;
6. Backing up a control file:
> alter database backup controlfile to 'control1.bkp';
> alter database backup controlfile to trace;
7. Recovery (noarchivelog mode):
> shutdown abort
> cp files
> startup
8. Recovering a file left in backup mode:
> alter database datafile 2 end backup;

9. Clearing redo log files:
> alter database clear unarchived logfile group 1;
> alter database clear unarchived logfile group 1 unrecoverable datafile;
10. Redo log recovery:
> alter database add logfile group 3 'c:/oradata/redo03.log' size 1000k;
> alter database drop logfile group 1;
> alter database open;
or
> cp c:/oradata/redo02.log c:/oradata/redo01.log
> alter database clear logfile 'c:/oradata/log01.log';

Reprint note: please cite the original source: https://www.9cbs.com/read-72081.html
