LogMiner in Oracle8i


Oracle LogMiner is a very practical analysis tool that Oracle has shipped since release 8i. With it you can easily extract specific content from Oracle redo log files (including archived log files); in particular, the tool can reconstruct every DML statement (INSERT, UPDATE, DELETE, etc.) executed against the database, and it can also generate the SQL statements needed to roll those changes back. This makes LogMiner especially suitable for debugging, auditing, and backing out a particular transaction.

The LogMiner tool itself consists of a set of PL/SQL packages and some dynamic views (part of the Oracle 8i built-in package set). It is released as part of the Oracle database and is a completely free tool for 8i products. It is, however, somewhat harder to use than other Oracle built-in tools, mainly because it provides no graphical user interface (GUI). This article describes in detail how to install and use the tool.

I. Uses of LogMiner

Redo log files hold all the data needed for database recovery and record every change made to the database, that is, all the DML statements executed against it.

Before Oracle 8i, Oracle provided no tool to help database administrators read and interpret the contents of the redo log files. When a problem appeared in the system, all an ordinary database administrator could do was package up the log files, send them to Oracle technical support, and then quietly wait for Oracle's final answer. Starting with 8i, however, Oracle provides exactly such a powerful tool: LogMiner.

LogMiner can be used online or against offline log files, and it can analyze the redo log files of its own database or those of another database.

In general, the main uses of the LogMiner tool are:

1. Tracking database changes: changes can be tracked offline without affecting the performance of the online system.

2. Backing out database changes: specific changed data can be rolled back, reducing the need to perform point-in-time recovery.

3. Tuning and capacity planning: data growth patterns can be determined by analyzing the data in the log files.

II. Installing LogMiner

To install the LogMiner tool, you must first run the following two scripts:

1. $ORACLE_HOME/rdbms/admin/dbmslm.sql

2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

Both scripts must be run as the SYS user. The first script creates the DBMS_LOGMNR package, which is used to analyze the log files; the second creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.
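For example, the two scripts can be run from SQL*Plus roughly as follows (a sketch: connect as SYS with your own credentials; in SQL*Plus the ? shorthand expands to ORACLE_HOME, so this works on both UNIX and Windows):

SQL> connect sys as sysdba
SQL> @?/rdbms/admin/dbmslm.sql
SQL> @?/rdbms/admin/dbmslmd.sql

The first @ command creates DBMS_LOGMNR, the second creates DBMS_LOGMNR_D.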

III. Using the LogMiner Tool

The following describes in detail how to use the LogMiner tool.

1. Create a data dictionary file (data-dictionary)

As mentioned earlier, the LogMiner tool actually consists of two new PL/SQL packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (the views are created when LogMiner is started with the procedure dbms_logmnr.start_logmnr). Before using LogMiner to analyze log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. The dictionary file is optional; without it, however, LogMiner renders everything that refers to the data dictionary (such as table names and column names), along with the values, in internal hexadecimal form that we cannot read directly. For example, the SQL statement

INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'Zhang San');

would be reported by LogMiner as something like the following:

insert into Object#308 (Col#1, Col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'));

The purpose of creating the data dictionary file is to let LogMiner replace internal object identifiers with their actual names instead of the system's internal hexadecimal values. The data dictionary file is a text file created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change so that the database's data dictionary changes as well, the dictionary file must be recreated. There is one other case: when analyzing the redo logs of another database, the data dictionary file must be regenerated from the database being analyzed.

First, specify the location of the data dictionary file in the init.ora initialization parameter file; that is, add the parameter UTL_FILE_DIR, whose value is the server directory in which the data dictionary file will be placed. For example:

UTL_FILE_DIR = (e:/oracle/logs)

Restart the database so that the new parameter takes effect, then create the data dictionary file:

SQL> connect sys
SQL> execute dbms_logmnr_d.build(dictionary_filename => 'v816dict.ora', dictionary_location => 'e:/oracle/logs');
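If the build fails complaining about the file location, a quick sanity check (an addition here, not part of the original procedure) is to confirm that the parameter actually took effect after the restart:

SQL> show parameter utl_file_dir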

2. Create a list of log files to be analyzed

Oracle redo logs come in two forms, online redo log files and offline archived log files; building the analysis list for each of them is discussed below.

(1) Analyzing online redo log files

A. Create a list

SQL> execute dbms_logmnr.add_logfile(logfilename => 'e:/oracle/oradata/sxf/redo01.log', options => dbms_logmnr.new);

B. Add other log files to the list

SQL> execute dbms_logmnr.add_logfile(logfilename => 'e:/oracle/oradata/sxf/redo02.log', options => dbms_logmnr.addfile);

(2) Analyzing offline archived log files

A. Create a list

SQL> execute dbms_logmnr.add_logfile(logfilename => 'e:/oracle/oradata/sxf/archive/arcarc09108.001', options => dbms_logmnr.new);

B. Add other log files to the list

SQL> execute dbms_logmnr.add_logfile(logfilename => 'e:/oracle/oradata/sxf/archive/arcarc09109.001', options => dbms_logmnr.addfile);

How many files the analysis list should contain is for you to decide, but the suggestion here is to add only one log file at a time and to add the next file after the current one has been analyzed.

Just as log files can be added to the analysis list, a file can also be removed from it by passing the option dbms_logmnr.removefile. The following example removes the file e:/oracle/oradata/sxf/redo02.log that was added above.

SQL> execute dbms_logmnr.add_logfile(logfilename => 'e:/oracle/oradata/sxf/redo02.log', options => dbms_logmnr.removefile);

Once the list of log files to be analyzed has been created, the analysis itself can begin.
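As an aside (an addition here, not in the original walkthrough), the files currently registered for analysis can be checked in the dynamic view v$logmnr_logs before the analysis is started:

SQL> select log_id, filename from v$logmnr_logs;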

3. Use LogMiner to analyze the logs

(1) Without restrictions

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'e:/oracle/logs/v816dict.ora');

(2) With restrictions

By setting several parameters of the procedure dbms_logmnr.start_logmnr (see Table 1 for their meanings), the scope of the analysis can be narrowed. For instance, we can restrict the analysis to a given period by setting the start-time and end-time parameters. In the example below, only logs from September 18, 2001 are analyzed:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'e:/oracle/logs/v816dict.ora', starttime => to_date('2001-9-18 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), endtime => to_date('2001-9-18 23:59:59', 'YYYY-MM-DD HH24:MI:SS'));

It is also possible to restrict the range of logs to analyze by setting a start SCN and an end SCN:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'e:/oracle/logs/v816dict.ora', startscn => 20, endscn => 50);

Table 1: Parameters of the DBMS_LOGMNR.START_LOGMNR procedure

Parameter | Type | Default | Meaning
STARTSCN | NUMBER | 0 | Analyze only the portion of the log files with SCN >= STARTSCN
ENDSCN | NUMBER | 0 | Analyze only the portion of the log files with SCN <= ENDSCN
STARTTIME | DATE | 1998-01-01 | Analyze only the portion of the log files with timestamp >= STARTTIME
ENDTIME | DATE | 2988-01-01 | Analyze only the portion of the log files with timestamp <= ENDTIME
DICTFILENAME | VARCHAR2 | (none) | Dictionary file containing a snapshot of the database catalog; with it the analysis results are readable text rather than internal hexadecimal values
OPTIONS | BINARY_INTEGER | 0 | System debugging parameter, of little practical use

4. Viewing the analysis results (v$logmnr_contents)

At this point, the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents contains all the information obtained by the LogMiner analysis.

SQL> select sql_redo from v$logmnr_contents;

If we only want to know what a certain user did to a certain table, the following query returns all the operations the user DB_ZGXT performed on the table SB_DJJL:

SQL> select sql_redo from v$logmnr_contents where username = 'DB_ZGXT' and seg_name = 'SB_DJJL';
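As noted at the beginning, LogMiner can also supply the statements needed to undo changes: alongside SQL_REDO the view carries a SQL_UNDO column. A sketch of retrieving the rollback statements for the same user and table:

SQL> select sql_undo from v$logmnr_contents where username = 'DB_ZGXT' and seg_name = 'SB_DJJL';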

It is important to emphasize that the analysis results in the view v$logmnr_contents exist only within the lifetime of the session in which we ran 'dbms_logmnr.start_logmnr'. This is because all of LogMiner's storage lives in PGA memory: other processes cannot see the results, and the results disappear when the analyzing process ends.
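Because the results vanish with the session, one common workaround (a sketch, not part of the original procedure; the table name logmnr_result is arbitrary) is to copy them into a permanent table before ending the analysis:

SQL> create table logmnr_result as select * from v$logmnr_contents;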

Finally, use the procedure dbms_logmnr.end_logmnr to terminate the log analysis session:

SQL> execute dbms_logmnr.end_logmnr();

At this point the PGA memory area is cleared and the analysis results no longer exist.

IV. Other Considerations

LogMiner can be used to analyze redo log files generated by other database instances, not just the redo logs of the instance on which LogMiner itself runs. When using LogMiner to analyze other database instances, note the following:

1. LogMiner must use a dictionary file generated by the database that produced the logs, not one generated by the database running LogMiner, and the character set of the LogMiner database must be the same as that of the database being analyzed (see the query after this list).

2. The platform of the database being analyzed must be the same as the platform of the database on which LogMiner runs. That is, if the files to be analyzed were generated by Oracle 8i running on a UNIX platform, LogMiner must also run on a UNIX platform, not on another platform such as Microsoft NT. The hardware of the two systems, however, need not be exactly the same.

3. LogMiner can only analyze logs produced by Oracle 8 and later; for earlier products this tool is powerless.
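Regarding point 1, the character sets of the two databases can be compared with a standard NLS query (an addition here, not in the original text); run it on both databases and check that the values match:

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';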

V. Conclusion

In short, LogMiner is a powerful and completely free analysis tool. With it, a database administrator can read the contents of the redo log files directly, track or back out specific changes without resorting to a full point-in-time recovery, and audit the operations users have performed against the database.

