In-Depth Analysis of Oracle Database Log Files


As an Oracle DBA, we sometimes need to track down mistakenly deleted data or malicious user operations. In such cases we need to identify not only the database account that performed the operation, but also the client (IP address, etc.) from which it was issued. The most effective and lowest-cost way to answer these questions is to analyze the Oracle database log files. This article discusses Oracle log analysis techniques in depth.

I. How to analyze: an introduction to LogMiner

At present, the only means Oracle provides for analyzing its own logs is LogMiner. Every change made to an Oracle database is recorded in the redo logs, but the raw log content is unreadable to us; LogMiner is the tool that lets us interpret it. In this respect it resembles TKPROF: one interprets log files, the other formats trace files. Through log analysis we can accomplish the following:

1. Trace logical changes to the database;
2. Detect and correct users' erroneous operations;
3. Audit events after the fact;
4. Perform change analysis.

The information recorded in the log includes, among other things: the history of database changes, the change type (INSERT, UPDATE, DELETE, DDL, etc.), the SCN corresponding to each change, and the user who executed the operation. When LogMiner analyzes the log, it reconstructs equivalent SQL statements and their undo statements (placed in the SQL_REDO and SQL_UNDO columns of the V$LOGMNR_CONTENTS view, respectively). Note that these are equivalent statements, not the original SQL. For example, if we originally executed "delete from a where c1 <> 'cyx';", LogMiner reconstructs one equivalent single-row delete statement per deleted row, not the original statement. This is easy to understand from the database's point of view: the log records elementary row operations, because the same statement "delete from a where c1 <> 'cyx';" may affect a different number of rows in different environments, so recording the statement itself would carry little real meaning; LogMiner simply reconstructs the elementary operations that the statement turned into under the actual conditions (a sketch of this appears after the view list below).

In addition, the Oracle redo log records internal numbers rather than original object names (each table in the database has a corresponding internal object number, and each column in a table is identified by its ordinal position: COL 1, COL 2, and so on). Therefore, to make the SQL statements reconstructed by LogMiner easy to recognize, these numbers must be converted back to the corresponding names, which requires a data dictionary (it is also possible to run without one; see the analysis process below for details). LogMiner extracts the data dictionary information with the DBMS_LOGMNR_D.BUILD() procedure.

LogMiner consists of two PL/SQL packages and several views:

1. The DBMS_LOGMNR_D package, which contains only one procedure for extracting data dictionary information, DBMS_LOGMNR_D.BUILD().
2. The DBMS_LOGMNR package, which has three procedures:
ADD_LOGFILE(name varchar2, options number) - adds or removes log files for analysis;
START_LOGMNR(start_scn number, end_scn number, start_time number, end_time number, dictfilename varchar2, options number) - starts the log analysis, optionally defining the time/SCN window of the analysis and specifying whether to use the extracted data dictionary information;
END_LOGMNR() - terminates the analysis session and releases the memory occupied by LogMiner.

The views related to LogMiner are:

1. V$LOGMNR_DICTIONARY - the data dictionary information LogMiner may use; since LogMiner can work with multiple dictionary files, this view displays them.
2. V$LOGMNR_PARAMETERS - the parameters currently set for LogMiner.
3. V$LOGMNR_LOGS - the list of logs currently being analyzed.
4. V$LOGMNR_CONTENTS - the log analysis results.
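As promised, here is a minimal sketch of the equivalence just described (the owner C, table A, column C1, and the output shown are illustrative, not real LogMiner output). If the original "delete from a where c1 <> 'cyx';" removed one row, the analysis results would show one single-row delete per affected row, each paired with its compensating undo statement:

SQL> select sql_redo, sql_undo from v$logmnr_contents
  2  where seg_owner = 'C' and seg_name = 'A';

-- SQL_REDO (one reconstructed statement per deleted row):
-- delete from "C"."A" where "C1" = 'gototop' and ROWID = 'AAABG1AAFAAABQAAAH';
-- SQL_UNDO (the statement that would reverse it):
-- insert into "C"."A"("C1") values ('gototop');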

II. LogMiner enhancements in Oracle9i

1. Support for more data and storage types: chained/migrated rows, cluster table operations, direct-path inserts, and DDL operations. The original text of a DDL operation can be seen in the SQL_REDO column of V$LOGMNR_CONTENTS (except that for CREATE USER the password appears in encrypted form, not as the original password). If the TRANSACTION_AUDITING initialization parameter is set to TRUE, the database account of every operation is also recorded.

2. Extraction and use of the data dictionary: the data dictionary can now be extracted not only to an external file but also directly into the redo log stream, which provides a snapshot of the data dictionary at that point in the log stream and thus enables offline analysis.

3. Grouping DML operations by transaction: you can set the COMMITTED_DATA_ONLY option in START_LOGMNR() to group DML operations by transaction; committed transactions are then returned in SCN order.

4. Support for schema changes: with the database open, if LogMiner's DDL_DICT_TRACKING option is used, Oracle9i's LogMiner automatically compares the initial log stream with the current system's data dictionary and returns the correct DDL statements, automatically reconciling the differences between the current data dictionary and the initial log stream; so even if a table involved in the original log stream has been altered or no longer exists, LogMiner still returns the correct DDL statements.

5. Ability to record more column information in the log: for example, an UPDATE operation records not only the updated row but can also capture more of the row's before-image.

6. Support for value-based queries: besides queries based on metadata (operation, object, etc.), Oracle9i's LogMiner supports queries based on actual data values. For example, given a salary table, we can now easily find the original UPDATE statement that raised an employee's salary from 1000 to 2000, whereas previously we could only select all UPDATE statements and sift through them (a sketch follows this list).
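A minimal sketch of such a value-based query, using Oracle9i's DBMS_LOGMNR.MINE_VALUE function against the REDO_VALUE and UNDO_VALUE columns of V$LOGMNR_CONTENTS; the schema C, table SALARY, and column SAL are hypothetical names for this illustration:

SQL> select sql_redo from v$logmnr_contents
  2  where seg_owner = 'C' and seg_name = 'SALARY' and operation = 'UPDATE'
  3  and dbms_logmnr.mine_value(undo_value, 'C.SALARY.SAL') = '1000'
  4  and dbms_logmnr.mine_value(redo_value, 'C.SALARY.SAL') = '2000';

MINE_VALUE returns VARCHAR2, hence the quoted comparison values.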

III. The Oracle8i/9i log analysis process

LogMiner can run with only an instance started. It uses a dictionary file to translate Oracle internal object identifiers into names; without a dictionary file, it displays the internal object numbers directly. For example, suppose we execute the following statement:

delete from "C"."A" where "C1" = 'gototop' and ROWID = 'AAABG1AAFAAABQAAAH';

Without a dictionary file, LogMiner's analysis result is:

Delete from "unknown". "Obj # 6197" Where "col 1" = HEXTORAW ('D6A7D4AE') And RowID

= 'AAABG1AAFAAABQAAAH'; if you want to use a dictionary file, the database should at least be in the MOUNT state. Then execute the dbms_logmnr_d.build process to extract the data dictionary information into an external file. The following is a specific analysis step: 1, confirm that the initialization parameter: utl_file_dir, and confirm that Oracle has read and write permissions to the changed directory, then start instance. The UTL_FILE_DIR parameter in the example is as follows:

SQL> show parameter utl

NAME                            TYPE        VALUE
------------------------------- ----------- ------------------------------
utl_file_dir                    string      /data6/cyx/logmnr

This directory is mainly used to store the dictionary information file generated by the DBMS_LOGMNR_D.BUILD procedure. If you do not plan to use a dictionary file, you can leave it unset and skip the next step.
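Incidentally, a minimal sketch of the corresponding init.ora entry (UTL_FILE_DIR is a static parameter, so it takes effect only after the instance is restarted; the path is the one used above):

# init.ora - directory Oracle may read/write via UTL_FILE and dbms_logmnr_d.build
utl_file_dir = /data6/cyx/logmnr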

2. Generate the dictionary information file:

SQL> exec dbms_logmnr_d.build(dictionary_filename => 'dic.ora', dictionary_location => '/data6/cyx/logmnr');

Here dictionary_location is the storage location of the dictionary information file, and it must match the value of UTL_FILE_DIR exactly. For example, if UTL_FILE_DIR=/data6/cyx/logmnr/, the statement above will fail merely because UTL_FILE_DIR has an extra trailing "/", even though many other places are not sensitive to such a "/". dictionary_filename is the name of the dictionary information file and can be chosen freely. Of course, the two parameter names can also be omitted:

exec dbms_logmnr_d.build('dic.ora', '/data6/cyx/logmnr');

If you skipped the first step and ran this step directly, Oracle reports the following error:

ERROR at line 1:
ORA-01308: initialization parameter utl_file_dir is not set
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 923
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 1938
ORA-06512: at line 1

Also note that the following error can occur in Oracle 8.1.7 for Windows:

14:26:05 SQL> execute dbms_logmnr_d.build('oradict.ora', 'c:/oracle/admin/ora/log');
BEGIN dbms_logmnr_d.build('oradict.ora', 'c:/oracle/admin/ora/log'); END;

*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1

The solution:

Edit the "$ Oracle_Home / Rdbms / Admindbmslmd.sql" file, put it

Type col_Desc_Array Is Varray (513) of col_description;

Change to:

TYPE COL_DESC_ARRAY IS VARRAY (700) of col_description; saves the file, then execute this script:

15:09:06 SQL> @c:/oracle/ora81/rdbms/admin/dbmslmd.sql

Package created.

Package body created.

No errors.

Grant succeeded.

Then recompile the DBMS_LOGMNR_D package body:

15:09:51 SQL> alter package dbms_logmnr_d compile body;

Package body altered.

Then re-execute dbms_logmnr_d.build:

15:10:06 SQL> execute dbms_logmnr_d.build('oradict.ora', 'c:/oracle/admin/ora/log');

PL/SQL procedure successfully completed.

3. Add the log files to be analyzed:

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_1_197.arc', options => dbms_logmnr.new);

The options parameter here can take three values:
new - creates a new log file list;
addfile - adds a log file to this list, as in the example below;
removefile - the opposite of addfile.

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_2_86.arc', options => dbms_logmnr.addfile);

PL/SQL procedure successfully completed.
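Incidentally, a log file added by mistake can be dropped from the list again with the removefile option; a minimal sketch, reusing the path above:

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_2_86.arc', options => dbms_logmnr.removefile);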

4. Once the log files to be analyzed have been added, we can start LogMiner:

SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora');

PL/SQL procedure successfully completed.

If you are not using a dictionary information file (in which case only the instance needs to be started), simply omit the dictfilename parameter:

SQL> exec dbms_logmnr.start_logmnr();

PL/SQL procedure successfully completed.

The dbms_logmnr.start_logmnr() procedure also has several other parameters that define the time/SCN window of the analysis: startscn/endscn define the starting/ending SCN of the analysis, and starttime/endtime define its starting/ending time. For example, the following call analyzes only the logs written between '2003-09-21 09:39:00' and '2003-09-21 09:45:00':

SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
> starttime => '2003-09-21 09:39:00', endtime => '2003-09-21 09:45:00');

PL/SQL procedure successfully completed.

The "-" at the end of the first line is the SQL*Plus line-continuation character; if you write everything on one line it is not needed. We can check the timestamps of the log records actually captured:

SQL> select distinct timestamp from v$logmnr_contents;

TIMESTAMP

-------------------

2003-09-21 09:40:02

2003-09-21 09:42:39

Note one thing here: the dates above can be written directly only because I have already set the NLS_DATE_FORMAT environment variable; if you have not set it, you need to convert the dates with the TO_DATE function.

SQL> !env | grep NLS

NLS_LANG=AMERICAN_AMERICA.ZHS16CGB231280
NLS_DATE_FORMAT=YYYY-MM-DD HH24:MI:SS
ORA_NLS33=/oracle/oracle9/app/oracle/product/9.2.0/ocommon/nls/admin/data

The format using TO_DATE is as follows:

exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
starttime => to_date('2003-09-21 09:39:00', 'YYYY-MM-DD HH24:MI:SS'), -

endtime => to_date('2003-09-21 09:45:00', 'YYYY-MM-DD HH24:MI:SS'));

The startscn and endscn parameters are used in a similar way (see the sketch below).

5. After the steps above have been executed, we can extract the information we need by querying the views associated with LogMiner. V$LOGMNR_LOGS shows the list of logs currently under analysis; if the database has two instances (i.e. OPS/RAC), V$LOGMNR_LOGS will contain two different THREAD_IDs. The real analysis results are in V$LOGMNR_CONTENTS, which holds a great deal of information; we can track whatever we are interested in as needed. Common tracking scenarios are listed separately below.

6. When everything is finished, we can execute the DBMS_LOGMNR.END_LOGMNR procedure to exit the LogMiner analysis session, or simply exit SQL*Plus, which terminates the session automatically.
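A minimal sketch tying these pieces together: an SCN-bounded analysis window (the SCN values here are made up for illustration), a check of the loaded log list, and a clean exit:

SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
> startscn => 20000, endscn => 50000);
SQL> select thread_id, filename from v$logmnr_logs;
SQL> exec dbms_logmnr.end_logmnr();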

IV. How to use LogMiner to analyze Oracle8 log files

Although LogMiner first appeared in Oracle8i, it can also be used to analyze Oracle8 log files; it is just a little more troublesome and subject to certain restrictions. The concrete approach is as follows. First copy the $ORACLE_HOME/rdbms/admin/dbmslmd.sql script from Oracle8i to the same directory on the host running the Oracle8 database; this script creates the DBMS_LOGMNR_D package (note that in Oracle9i this script also creates the DBMS_LOGMNR package); for 8.1.5 the script is named dbmslogmnrd.sql. Then run this script against the Oracle8 database and use the DBMS_LOGMNR_D.BUILD procedure to create the dictionary information file. Now we can copy the Oracle8 archived logs together with this dictionary file to the host running the Oracle8i database, and analyze the Oracle8 logs starting from step 3 of the analysis process above, except that the dictionary file used by dbms_logmnr.start_logmnr() is the one generated from Oracle8. As mentioned earlier, if no dictionary file is used, we can simply copy the Oracle8 archived logs to the Oracle8i host and analyze them directly.

This actually touches on the question of using LogMiner across databases and platforms; the author has also verified that Oracle9i can analyze Oracle8i logs. However, there are restrictions, mainly the following:

1. The dictionary file used by LogMiner must be generated by the same database that produced the logs being analyzed, and that database's character set must match that of the database running LogMiner. This is easy to understand: if it were not the same database, there would be no correspondence between internal numbers and names.

2. The hardware platform of the database that generated the logs must match that of the database running LogMiner, although the operating system versions may differ. In the author's test (readers can download the complete process from my website http://www.ncn.cn; it is too long to include here), the two databases both ran Tru64 UNIX, but one was V5.1A and the other V4.0F. If the platforms are inconsistent, the following error occurs:

ORA-01284: file /data6/CYX/LOGMNR/Arch_1_163570.arc cannot be opened

ORA-00308: cannot open archived log '/data6/CYX/LOGMNR/Arch_1_163570.arc'
ORA-27048: skgfifi: file header information is invalid
ORA-06512: at "SYS.DBMS_LOGMNR", line 63
ORA-06512: at line 1

V. Analyzing V$LOGMNR_CONTENTS

We already know that LogMiner's analysis results are placed in V$LOGMNR_CONTENTS, which contains a great deal of information; we can track whatever interests us as needed. So what do we usually want to track?

1. Tracking database structure changes, i.e. DDL operations. As mentioned earlier, only Oracle9i supports this:

SQL> select timestamp, sql_redo from v$logmnr_contents
  2  where upper(sql_redo) like '%CREATE%';

TIMESTAMP
-------------------
SQL_REDO
-------------------------
2003-09-21 10:01:55
create table t (c1 number);

2. Tracking users' erroneous or malicious operations. For example, we once had a real case like this: an employee modified business data in the billing database through a program, changing the toll type of certain calls to free, and we were asked to trace this down from the database. How do we find it? LogMiner gives us the means to analyze the log files; in particular, the SESSION_INFO column of V$LOGMNR_CONTENTS contains information like the following:

login_username=NEW_97
client_info= OS_username=oracle8 Machine_name=phoenix1
OS_terminal=ttyp3 OS_process_id=8004 OS_program name=sqlplus@phoenix1
(TNS V1-V3)

This is a lot of information, but in our billing database all the programs log in with the same login_username, so the information above alone is not enough to reach a verdict. We noticed, however, that since staff are not allowed to write programs on the company's application servers, a malicious program would normally connect to the database directly from the employee's own PC, so what we needed was a way to locate that client accurately. IP tracking was the first thing that came to mind, and it also met our practical requirements, because the company's internal IP address assignment is centrally managed and every IP address can be traced. However, the IP address cannot be seen directly in SESSION_INFO. Still, there is a way: this SESSION_INFO information is actually extracted from the V$SESSION view, so we can create a trigger in the production database to track the client IP address:

create or replace trigger on_logon_trigger
after logon on database
begin
  dbms_application_info.set_client_info(sys_context('userenv', 'ip_address'));
end;

/

Now we can see each newly connected client's IP address in the CLIENT_INFO column of the V$SESSION view.
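For example, a quick check after the trigger is in place (a sketch; the session shown is illustrative):

SQL> select sid, username, client_info from v$session where client_info is not null;

       SID USERNAME   CLIENT_INFO
---------- ---------- ---------------
        10 C          10.16.98.26

Because SESSION_INFO in V$LOGMNR_CONTENTS includes the CLIENT_INFO value, an IP address recorded this way also shows up in the mined log data.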

The problem described above can then be solved. Suppose the table that was updated is named HMLX; we can find the required information with the following SQL:

SQL> select session_info, sql_redo from v$logmnr_contents
  2  where upper(operation) = 'UPDATE' and upper(sql_redo) like '%HMLX%'
  3  /

SESSION_INFO
-----------------------------------------
SQL_REDO
-----------------------------------------
login_username=C client_info=10.16.98.26 OS_username=SZ-XJS-CHENGYX Machine_name
=GDTEL\SZ-XJS-CHENGYX

update "C"."HMLX" set "NAME" = 'free' where "NAME" = 'ncn.cn' and ROWID = 'AAABHTAA

