How to Use Oracle's LogMiner Tool


The LogMiner packages (dbms_logmnr and dbms_logmnr_d) can be used to analyze Oracle's redo log files. LogMiner is a tool that Oracle introduced in release 8i.

We can use LogMiner to analyze redo log files produced by another database instance, but the following requirements must be met:

1. LogMiner can only analyze redo log files produced by Oracle 8 or later.

2. LogMiner must use a dictionary file generated by the database instance being analyzed, and the character set of the database running LogMiner must be the same as that of the analyzed database.

3. The analyzed database must run on the same hardware platform as the database where LogMiner runs, and both must use the same block size.

Using LogMiner

1. Install LogMiner:

To install the LogMiner tool, you must first run the following two scripts: $ORACLE_HOME/rdbms/admin/dbmslm.sql and $ORACLE_HOME/rdbms/admin/dbmslmd.sql. Both scripts must be run as the SYS user.
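A minimal SQL*Plus session illustrating the installation (the connect step is only a placeholder; supply your own SYS credentials, and adjust the path separators for your platform):

SQL> CONNECT sys AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql
SQL> @$ORACLE_HOME/rdbms/admin/dbmslmd.sql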

2. Create a data dictionary file

First, add the parameter utl_file_dir to the init.ora initialization parameter file; it specifies the server directory in which the data dictionary file will be created. For example: UTL_FILE_DIR = (D:\Oracle\logs)

Restart the database so that the new parameter takes effect, then create the data dictionary file: SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'logmn_ora817.dat', dictionary_location => 'D:\Oracle\logs');

The data dictionary file lets LogMiner display object names whenever it references entries in the internal data dictionary, instead of the hexadecimal object IDs used internally. If the tables to be analyzed change, the database's data dictionary changes with them, and the dictionary file must be recreated.

3. Add a log file to be analyzed

LogMiner can be used to analyze both online redo log files and archived log files, but in general archived log files are recommended.

a. Add the first log file:

SQL> EXECUTE dbms_logmnr.add_logfile(logfilename => 'D:\Database\Oracle\ORADATA\ORA817\ARCHIVE\ARC01491.001', options => dbms_logmnr.new);

b. Add additional log files to the list:

SQL> EXECUTE dbms_logmnr.add_logfile(logfilename => 'D:\Database\Oracle\ARC01491.002', options => dbms_logmnr.addfile);

c. Remove a log file

SQL> EXECUTE dbms_logmnr.add_logfile(logfilename => 'D:\Database\Oracle\ORADATA\ORA817\ARCHIVE\ARC01491.002', options => dbms_logmnr.removefile);

Once the list of log files to be analyzed has been built, the analysis can begin.
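Optionally, you can verify which files are currently registered for analysis by querying v$logmnr_logs (a minimal sketch; the view contains additional columns not shown here):

SQL> SELECT log_id, filename FROM v$logmnr_logs;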

4. Log analysis

SQL> EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\Oracle\logs\logmn_ora817.dat');

The analysis can be restricted with the following conditions:

Time range: use the starttime and endtime parameters of dbms_logmnr.start_logmnr (see the sketch after this list)

SCN range: use the startscn and endscn parameters of dbms_logmnr.start_logmnr
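For example, a minimal sketch of restricting the analysis to a time window (the dictionary path follows the earlier example; the dates and date format are only illustrative):

SQL> EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\Oracle\logs\logmn_ora817.dat', starttime => to_date('2003-01-01 12:00:00', 'YYYY-MM-DD HH24:MI:SS'), endtime => to_date('2003-01-01 18:00:00', 'YYYY-MM-DD HH24:MI:SS'));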

5. View the results:

The main view to query is V$LOGMNR_CONTENTS:

SQL> DESC v$logmnr_contents

Name                           Null?    Type
------------------------------ -------- ----------------
SCN                                     NUMBER
TIMESTAMP                               DATE
THREAD#                                 NUMBER
LOG_ID                                  NUMBER
XIDUSN                                  NUMBER
XIDSLT                                  NUMBER
XIDSQN                                  NUMBER
RBASQN                                  NUMBER
RBABLK                                  NUMBER
RBABYTE                                 NUMBER
UBAFIL                                  NUMBER
UBABLK                                  NUMBER
UBAREC                                  NUMBER
UBASQN                                  NUMBER
ABS_FILE#                               NUMBER
REL_FILE#                               NUMBER
DATA_BLK#                               NUMBER
DATA_OBJ#                               NUMBER
DATA_OBJD#                              NUMBER
SEG_OWNER                               VARCHAR2(32)
SEG_NAME                                VARCHAR2(32)
SEG_TYPE                                NUMBER
SEG_TYPE_NAME                           VARCHAR2(32)
TABLE_SPACE                             VARCHAR2(32)
ROW_ID                                  VARCHAR2(19)
SESSION#                                NUMBER
SERIAL#                                 NUMBER
USERNAME                                VARCHAR2(32)
SESSION_INFO                            VARCHAR2(4000)
ROLLBACK                                NUMBER
OPERATION                               VARCHAR2(32)
SQL_REDO                                VARCHAR2(4000)
SQL_UNDO                                VARCHAR2(4000)
RS_ID                                   VARCHAR2(32)
SSN                                     NUMBER
CSF                                     NUMBER
INFO                                    VARCHAR2(32)
STATUS                                  NUMBER
PH1_NAME                                VARCHAR2(32)
PH1_REDO                                VARCHAR2(2000)
PH1_UNDO                                VARCHAR2(2000)
PH2_NAME                                VARCHAR2(32)
PH2_REDO                                VARCHAR2(2000)
PH2_UNDO                                VARCHAR2(2000)
PH3_NAME                                VARCHAR2(32)
PH3_REDO                                VARCHAR2(2000)
PH3_UNDO                                VARCHAR2(2000)
PH4_NAME                                VARCHAR2(32)
PH4_REDO                                VARCHAR2(2000)
PH4_UNDO                                VARCHAR2(2000)
PH5_NAME                                VARCHAR2(32)
PH5_REDO                                VARCHAR2(2000)
PH5_UNDO                                VARCHAR2(2000)

The SQL statements recorded in the log files can be obtained from the SQL_REDO column, and the corresponding undo (rollback) statements from the SQL_UNDO column.
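For example, a minimal sketch of listing the redo and undo SQL for a single table (the schema SCOTT and table EMP are only illustrative placeholders):

SQL> SELECT timestamp, username, operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_owner = 'SCOTT' AND seg_name = 'EMP';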

You can also use the following SQL to summarize all operations recorded in the log files:

SQL> SELECT operation, COUNT(*) FROM v$logmnr_contents GROUP BY operation;

The analysis results in V$LOGMNR_CONTENTS exist only in the session that ran dbms_logmnr.start_logmnr. This is because LogMiner keeps all of its data in PGA memory: other sessions cannot see the results, and the results are cleared when the session ends.

Finally, call the procedure dbms_logmnr.end_logmnr to terminate the log analysis; the PGA memory area is then released.
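The call takes no arguments:

SQL> EXECUTE dbms_logmnr.end_logmnr;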
