# Declare a temporary table
DECLARE GLOBAL TEMPORARY TABLE session.temp1
  LIKE employee
  ON COMMIT PRESERVE ROWS
  NOT LOGGED
  IN mytempspace
# Alter
Objects that can be altered: buffer pool, table, table space, view.
# Drop
Objects that can be dropped: buffer pool, event monitor, function, index, schema, stored procedure, table, table space, trigger, view.
# Generate data backup
CREATE TABLE pers LIKE staff
INSERT INTO pers
  SELECT id, name, dept, job, years, salary, comm
  FROM staff
  WHERE dept = 38
# Uniqueness constraint
CREATE TABLE books (bookid INTEGER NOT NULL PRIMARY KEY,
  bookname VARCHAR(100),
  isbn CHAR(10) NOT NULL CONSTRAINT booksisbn UNIQUE)
ALTER TABLE books ADD UNIQUE (bookid)
# Referential integrity constraint
CREATE TABLE authors (authorid INTEGER NOT NULL PRIMARY KEY,
  lname VARCHAR(100),
  fname VARCHAR(100))
CREATE TABLE books (bookid INTEGER NOT NULL PRIMARY KEY,
  bookname VARCHAR(100),
  isbn CHAR(10),
  authorid INTEGER REFERENCES authors)
# Table Check Constraint
ALTER TABLE books ADD booktype CHAR(1) CHECK (booktype IN ('f', 'n'))
# Create an index
CREATE INDEX i2bookname ON books (authorid DESC, bookname ASC)
# Set the profile registry
Changes take effect after the instance is restarted.
List all variables: db2set -lr
Set a variable: db2set variable_name=value
Reset a variable to its default: db2set variable_name=
Display all variables that are set: db2set -all
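A minimal sketch using the DB2COMM variable that the service-port section below also sets:
db2set DB2COMM=TCPIP
db2set -all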
# Set the parameters of the database manager and database
Get the current parameters:
db2 get database manager configuration
db2 get database configuration for database_name
db2 get database manager configuration show detail
Setting parameters:
db2 update database manager configuration using parameter new_value
db2 update database configuration for database_name using parameter new_value
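A concrete sketch: LOGBUFSZ is a standard database configuration parameter, while the database name and value here are only illustrative:
db2 update database configuration for sample using LOGBUFSZ 256
db2 get database configuration for sample show detail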
# Configure the database service port
db2set DB2COMM=TCPIP,NETBIOS
The services file maps the service name to a port number,
for example: db2icdb2 50000/tcp
db2 update database manager configuration using SVCENAME db2icdb2
Restart the instance.
# View connected applications
DB2 List Applications [for Database DB-Name] [Show Detail]
# Disconnect the application
DB2 Force Application (6,5)
DB2 Force Application All
# Set the client authentication method
db2 update database manager configuration using AUTHENTICATION auth_type
# Create a buffer pool
Create Bufferpool BP2 Size 25000 PageSize 8k
Connect to sccrm;
Alter Bufferpool IBMDefaultbp Immediate Node 1 Size 50000;
Alter BufferPool IBMDefaultbp Immediate Node 0 Size 50000;
Connect reset;
# Reset the status of a table space to normal
QUIESCE TABLESPACES FOR TABLE table_name RESET
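A minimal sketch using the STAFF table from the examples above: quiesce the table's table spaces exclusively, then reset them to normal:
db2 quiesce tablespaces for table staff exclusive
db2 quiesce tablespaces for table staff reset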
# Useful catalog tables
SYSCAT.COLUMNS: contains one row for each column defined in a table or view.
SYSCAT.INDEXCOLUSE: lists all the columns that participate in an index.
SYSCAT.INDEXES: contains one row for each index defined on a table or view (including inherited indexes, where applicable).
SYSCAT.TABLES: each table, view, nickname, or alias has a corresponding row. All catalog tables and views have a row in the SYSCAT.TABLES catalog view.
SYSCAT.VIEWS: each created view has one or more corresponding rows.
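A query sketch against these views; the DREWKB schema is borrowed from the event monitor example later in these notes:
SELECT tabname, type, card FROM syscat.tables WHERE tabschema = 'DREWKB'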
### Monitor DB2 activity
## Capturing snapshots
Snapshots can be captured for: database, table space, table, buffer pool, lock, database manager, and application.
# Taking a snapshot
Snapshots can be taken through the API or from the command line.
What a snapshot captures is controlled with UPDATE MONITOR SWITCHES.
# Turning the snapshot monitor switches on and off
1. Set the monitor switches at the instance level (affects all users and databases)
DFT_MON_STMT: Statement Monitor (for Dynamic SQL)
DFT_MON_TABLE: Table Monitor
DFT_MON_LOCK: Lock Monitor
DFT_MON_BUFPOOL: Buffer Pool Monitor
DFT_MON_SORT: Sort Monitor
DFT_MON_UOW: Unit of Work monitor
DFT_MON_TIMESTAMP: Track Timestamp Information
db2 get dbm cfg | grep DFT_MON
db2 update dbm cfg using monitor_switch [ON | OFF]
db2 update dbm cfg using DFT_MON_SORT ON
2. Set the monitor switches at the application level (applies only to specific applications)
Switches set this way take effect only in the current CLP session.
Bufferpool
LOCK
Sort
Statement
TABLE
Timestamp
UOW
DB2 GET MONITOR SWITCHES
DB2 Update Monitor Switches Using SwitchName [ON | OFF]
3. Reset the monitor values
db2 reset monitor [all | for database database_name] [at dbpartitionnum partition_num]
This resets the monitor values to empty or 0.
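A short session sketch combining the application-level commands above:
db2 update monitor switches using bufferpool on lock on
db2 get monitor switches
db2 reset monitor all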
# Database snapshot
Contains information about:
Connections
DB2 agents
Locks
Sorts
Buffer pool totals
SQL activity
Number of SQL statements
Log usage
Cache usage
Hash joins
The following command takes this snapshot:
DB2 Get Snapshot for Database on DatabaseName
# Database Manager snapshot
DB2 Get Snapshot for Database Manager
# Table snapshot
DB2 Get Snapshot for Tables on Drew_db
# Table space and buffer pool snapshot
DB2 Get Snapshot for TableSpaces on Drew_DB
DB2 Get Snapshot for BufferPools on Drew_DB
Calculate the buffer pool hit ratio:
hit ratio = (1 - (physical index and data reads) / (logical index and data reads)) * 100%
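The same calculation can be sketched in SQL, assuming the SNAPSHOT_BP table function and its physical/logical read columns (a sibling of the SNAPSHOT_DYN_SQL function used below; verify the column names on your release):
SELECT bp_name,
  (1 - DECIMAL(pool_data_p_reads + pool_index_p_reads) /
       NULLIF(pool_data_l_reads + pool_index_l_reads, 0)) * 100 AS hit_ratio
FROM TABLE(SNAPSHOT_BP('Drew_DB', -1)) AS bpsnap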
# Lock snapshot
DB2 Get Snapshot for Locks on Drew_db
# Dynamic SQL snapshot
# Finding slow-running SQL
The first example finds the statement with the highest total execution time:
SELECT stmt_text, total_exec_time, num_executions
FROM TABLE(SNAPSHOT_DYN_SQL('Drew_DB', -1)) AS dynsnaptab
ORDER BY total_exec_time DESC
FETCH FIRST 1 ROW ONLY
The second example finds the five SQL statements with the longest average execution time:
SELECT stmt_text,
  CASE WHEN num_executions = 0
    THEN 0
    ELSE (total_exec_time / num_executions)
  END AS avgexectime,
  num_executions
FROM TABLE(SNAPSHOT_DYN_SQL('Drew_DB', -1)) AS dynsnaptab
ORDER BY avgexectime DESC
FETCH FIRST 5 ROWS ONLY
### Event Monitor
Snapshots are used to check the state of DB2 at a precise moment and to aggregate performance information. Event monitors are different: they are used to monitor DB2 performance over a fixed period of time. An event monitor watches for transition events in the database and treats each event as an object, which allows a very detailed analysis of DB2's behavior.
The information captured by an event monitor is similar to the information captured by a snapshot. Multiple monitors can run at the same time, and each monitor can capture information about multiple event types. This lets different administrators monitor different activities independently.
An event monitor can capture information about the following:
Databases
Tables
Deadlocks
Table spaces
Buffer pools
Connections
Statements
Transactions
Event monitor information can be written to:
SQL tables
Files
Named pipes
## Create event monitor
CREATE EVENT MONITOR monitor_name FOR event_type WRITE TO [TABLE | FILE | PIPE] ...
## Turning an event monitor on and off
db2 set event monitor eventmonitorname state [0 | 1]
eventmonitorname is the name of a created event monitor; 0 turns the monitor off, and 1 turns it on.
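For instance, to activate the DrewTest monitor whose status is queried below:
db2 set event monitor drewtest state 1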
# Get the status
SELECT EVENT_MON_STATE ('DrewTest') from sysibm.sysdummy1
The SYSIBM.SYSEVENTMONITORS table and its catalog view SYSCAT.EVENTMONITORS contain information about the event monitors that have been created. You can run SQL queries against them to determine a monitor's state:
SELECT evmonname, target_type, target
FROM syscat.eventmonitors
WHERE evmonname LIKE 'Drew%'
The AUTOSTART option
The AUTOSTART option of the CREATE EVENT MONITOR command indicates that the event monitor starts automatically whenever the database starts.
# Flushing the event monitor
Event monitor data can also be flushed to disk. This is useful for information, such as database events, that would otherwise be written only when all connections terminate. Below is an example command:
DB2 Flush Event Monitor EventmonitorName
# Using event monitors
Event monitors should be used to monitor very specific events or workloads over a short period. They are designed to give you very specific information that lets you diagnose database or application problems and behavior.
Unlike snapshots, event monitors can have a significant impact on DB2 performance, because of the amount of information written for each event object. Statement monitors affect performance the most, because the database engine must do additional work for every query: instead of just executing the query, the DB2 engine must also write out all of the query's characteristics and runtime information. Writing this information to a text file reduces performance further.
# Deadlock monitoring
CREATE EVENT MONITOR deadlock_db
  FOR DEADLOCKS
  WRITE TO FILE 'deadlock_db'
  MAXFILES 1
  MAXFILESIZE NONE
  AUTOSTART
Note: the event monitor name is stored in uppercase, and the target directory must be created beforehand.
# SQL monitoring
SQL monitors are useful because they capture both dynamic and static SQL statements. If an application uses precompiled SQL statements, which cannot be captured with an SQL snapshot, this monitor becomes very important.
An event is recorded for each executed SQL statement. The attributes of each statement (such as the number of rows read, selected, and deleted) are recorded, but they are not expressed the way they are in a snapshot. The start and stop times of execution are also recorded, which allows a detailed analysis of an application's transactions and SQL.
You can also use SQL monitors to discover poorly performing code: if you write the output to a table, you can find such code by searching on the number of executions of individual statements. Because of the amount of information involved and the performance overhead of an SQL monitor, it should only be used for short tests or for problem determination, not in a production environment.
CREATE EVENT MONITOR bar
  FOR STATEMENTS
  WRITE TO TABLE
  STMT (TABLE drewkb.stmttab
    INCLUDES (ROWS_READ, ROWS_WRITTEN, STMT_TEXT))
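Once statements have been captured, a query sketch like this one, run against the drewkb.stmttab table created above using the columns named in INCLUDES, can surface the heaviest statements:
SELECT rows_read, rows_written, stmt_text
FROM drewkb.stmttab
ORDER BY rows_read DESC
FETCH FIRST 5 ROWS ONLY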
# Capture event monitor data
db2eva [db database_name] [evm event_monitor_name]
# Using EXPLAIN to analyze SQL
SQL EXPLAIN details, graphically or as text, how each part of an SQL statement is executed and when. This includes the following basic information:
The tables being accessed
The indexes being used
When data is joined
When data is sorted
EXPLAIN also captures more complex information, which is very useful when diagnosing SQL problems:
Table and index cardinality
The order of joins and index usage
The result set
Which columns each part of the query selects
The sort method
The estimated execution time of the SQL
To understand EXPLAIN information, the most important concept to master is the timeron. The timeron is the unit of measurement the DB2 optimizer uses to estimate the time and resources a query needs. Timerons combine time, CPU utilization, I/O, and other factors. Since these parameter values change constantly, the number of timerons required to execute a query is dynamic and differs each time the query is executed.
# Using EXPLAIN from the Control Center
Observe the SQL access plan, analyze the optimization parameters, and consider the optimization level.
# Using the character-based tools
DB2 SET CURRENT EXPLAIN MODE [NO | YES | EXPLAIN]
DB2EXFMT
The DB2EXPLN tool can be called from the command line to get the access plan for a query; however, it does not return optimizer information.
db2expln -database DREW_DB -statement "select * from syscat.tables" -terminal
# SQL troubleshooting
Indexing: whether an index is used, and the order in which indexes are used
Table cardinality and SELECT *
An optimization level that is too low
Setting Optimization Level: DB2 SET CURRENT Query Optimization [0 | 1 | 2 | 3 | 5 | 7 | 9]
## Using the Health Center and Memory Visualizer tools
# Select the instance and use View Memory Usage; the output can be saved to a file.
In addition to visualizing memory, the tool lets you set alerts on certain parameter values. Alert output is written to a threshold file located in the same directory as db2diag.log. That directory is in the instance's home directory, so its exact location depends on the choices made when the instance was created.
# DB2 Health Center
Whether there are sufficient resources (such as available memory, table space containers, or log storage) to complete tasks
Whether resources are being used efficiently
Whether tasks complete within an acceptable time period, and whether completing them significantly degrades performance
Whether resources or database objects are in an unavailable state
## DB2 Query Patroller and Governor
IBM provides two main tools for DB2 that let you monitor and control the execution of SQL on a database. The DB2 Governor is used to control the priority with which users and applications execute their SQL. DB2 Query Patroller provides query and resource management for decision-support systems: it accepts all queries that flow through the system and analyzes, prioritizes, and schedules them. Users are notified once their queries complete. This is extremely useful in large environments where some queries may take hours, or where different departments have different system priorities or usage patterns.
Query Patroller also performs load balancing by redirecting work to the appropriate database partition, ensuring that no single database partition is used too heavily. It can only be used with the multi-partition option of DB2 ESE in SMP or MPP environments.
In the latest revision, IBM completely restructured DB2 Query Patroller, integrating the query-control functions into the Query Patroller Center (similar to the DB2 Control Center) and providing an easy-to-use interface. Query Patroller is completely server-based and requires no client software.
# Performance monitor commands
GET MONITOR SWITCHES: returns the status of the session's monitor switches
UPDATE MONITOR SWITCHES USING ...: turns monitor switches on or off
RESET MONITOR ALL: resets the performance monitor values
GET SNAPSHOT FOR DBM: database manager snapshot
GET SNAPSHOT FOR ALL ON database_name: all snapshots for a database
GET SNAPSHOT FOR DYNAMIC SQL ON database_name: dynamic SQL snapshot
RUNSTATS ON TABLE table_name: collects table statistics
REORGCHK ON TABLE ALL: determines whether tables need to be reorganized
REORG TABLE table_name: reorganizes a table
### DB2 utilities
Three utilities: EXPORT, IMPORT, and LOAD. These utilities support the following file formats:
Non-delimited or fixed-length ASCII (ASC): as the name suggests, this file type contains fixed-length ASCII data aligned with the column data. Each ASC file is a stream of ASCII characters consisting of data values ordered by row and column. Rows in the data stream are separated by a row delimiter, which is usually a newline.
Delimited ASCII (DEL): the most common file format used by various database managers for data exchange. This format contains ASCII data and uses special character delimiters to separate column values. Rows in the data stream are separated by a row delimiter.
PC version of the Integrated Exchange Format (PC/IXF): a structured description of a database table. This file format can be used not only to import data but also to create a table that does not exist in the target database.
Worksheet Format (WSF): data stored in this format can be displayed in a spreadsheet. This format can only be used for export and import.
Cursor: a cursor declared on a query. It can only be used as input to a LOAD operation.
# Export utility
The EXPORT utility uses an SQL SELECT statement to extract data from database tables into a file. The exported file can be in DEL, IXF, or WSF format. It is recommended that exports include the MESSAGES clause to capture errors, warnings, and informational messages during the export.
EXPORT TO myfile.del OF del
  MESSAGES msg.out
  SELECT staff.name, staff.dept, org.location
  FROM org, staff
  WHERE org.deptnumb = staff.dept;
In the example above, the data is extracted into a DEL-format file. By default, column values are separated by a comma (,) and character strings are enclosed in double quotation marks ("). What if the data to be extracted already contains commas and double quotes? The IMPORT or LOAD utility would then be unable to determine which symbols are actual data and which are delimiters. To customize how EXPORT operates, you can use the MODIFIED BY clause and specify the file type modifiers you need. The EXPORT command then has the following format:
EXPORT TO file_name OF file_type
  MODIFIED BY file_type_modifier
  MESSAGES message_file
  select_statement
chardelx
Specifies x as the new single-character string delimiter. The default is the double quotation mark (").
coldelx
Specifies x as the new single-character column delimiter. The default is the comma (,).
codepage=x
Specifies x, an ASCII string, as the new code page for the output data. During the export operation, character data is converted from the application code page to this code page.
timestampformat="x"
x is the format of the timestamps in the source table.
Consider the following example:
EXPORT TO myfile.del OF del
  MODIFIED BY chardel! coldel@ codepage=1208 timestampformat="yyyy.mm.dd hh:mm tt"
  MESSAGES msg.out
SELECT * FROM schedule
This command exports the SCHEDULE table in DEL format, with the following behavior:
Strings are enclosed by exclamation marks (!)
Columns are delimited by the @ sign
Strings are converted to code page 1208
The user-defined timestamps in the SCHEDULE table have the format yyyy.mm.dd hh:mm tt
# Export large objects
# You can also export tables from the Control Center, and schedule exports.
# Import utility
IMPORT FROM file_name OF file_type
  MESSAGES message_file
  [INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE | CREATE]
  INTO target_table_name
The INSERT option inserts the imported data into the table. The target table must already exist.
The INSERT_UPDATE option inserts data into the table, or updates existing rows whose primary keys match. The target table must exist and have a primary key defined.
The REPLACE option deletes the existing data and then inserts the imported data into the existing target table.
With the REPLACE_CREATE option, if the target table exists, the utility deletes the existing data and inserts the new data, as if REPLACE had been specified. If the target table is not defined, the table and its related indexes are created before the data is imported. As you might expect, the input file must be in PC/IXF format, because that format contains a structured description of the exported table. If the target table is a parent table referenced by a foreign key, REPLACE_CREATE cannot be used.
The CREATE option creates the target table and its indexes, then imports the data into the new table. Its only supported file format is PC/IXF. You can also specify the name of the table space in which the new table is created.
Example:
IMPORT FROM emp.ixf OF ixf
  COMMITCOUNT 500
  MESSAGES msg.out
  CREATE INTO employee IN datatbsp INDEX IN indtbsp
If the above command fails partway through for some reason, you can use the message file to determine the last row that was successfully imported and committed, and then restart the import with the RESTARTCOUNT option. In the following command, the utility skips the first 30,000 records before starting the import operation.
IMPORT FROM myfile.ixf OF ixf
  COMMITCOUNT 500 RESTARTCOUNT 30000
  MESSAGES msg.out
  INSERT INTO newtable
compound=x
Uses non-atomic compound SQL to insert the data; x statements are attempted each time.
indexschema=schema
Uses the specified schema for the index during index creation.
striptblanks
Truncates trailing blanks when data is loaded into a variable-length field.
lobsinfile
Indicates that LOB data is being imported. The utility checks the LOBS FROM clause to get the path of the input LOB files.
Here is an example of actually using these file type modifiers:
IMPORT FROM inputfile.asc OF asc
  LOBS FROM /u/db2load/lob1, /u/db2load/lob2
  MODIFIED BY compound=5 lobsinfile
  INSERT INTO newtable
You can also run imports from the Control Center.
## Load utility overview
The LOAD utility is another way to populate a table with data. Formatted pages are written directly into the database. This mechanism allows faster data movement than the IMPORT utility. However, the LOAD utility does not perform certain operations, such as checking referential or table constraints, or firing triggers.
LOAD FROM input_source OF input_type
  MESSAGES message_file
  [INSERT | REPLACE | TERMINATE | RESTART]
  INTO target_tablename
# Example: executed on a database partition
date
db2 connect to sccrm user db2inst1 using db2inst1
db2 "load from /backup1114/dw_call_cdr_20031002.txt of del modified by coldel; fastparse anyorder replace into DW_CALL_CDR_20030801"
date
db2 "load from /db2home/db2inst1/data/dw_newbusi_smscdr_20031026.txt of del modified by coldel; terminate into DW_NEWBUSI_SMSCDR_20031026_1"
The source input of LOAD can be in DEL, ASC, PC/IXF, or CURSOR format. A cursor is the result set returned by a SELECT statement. An example of using a cursor as load input is shown below:
DECLARE mycursor CURSOR FOR SELECT col1, col2, col3 FROM tab1;
LOAD FROM mycursor OF cursor INSERT INTO newtab
The load target must exist before the load is started. The target can be a table, a typed table, or a table alias. System tables and temporary tables are not supported.
Use the MESSAGES option to capture any errors, warnings, and informational messages during the load.
LOAD can be run in four different modes:
INSERT mode adds the input data to the table without changing the existing table data.
REPLACE mode deletes all existing data from the table and populates it with the input data.
TERMINATE mode terminates a load operation and rolls back to the point at which the load started. One exception: if REPLACE mode was specified, the table is truncated.
RESTART mode is used to restart a previously interrupted load. It automatically continues from the last consistency point. To use this mode, specify the same options as in the previous LOAD command but with RESTART; this lets the utility find all the temporary files generated during the load. Therefore, do not manually remove any such files unless you are certain they are not needed.
Once the load completes without error, the temporary files are removed automatically. By default, temporary files are created in the current working directory.
You can use the TEMPFILES PATH option to specify the directory in which temporary files are stored.
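A minimal restart sketch, assuming the interrupted command had been "load from emp.ixf of ixf messages msg.out insert into employee" (names reused from the example below); only the mode changes:
db2 load from emp.ixf of ixf messages msg.out restart into employee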
# Four phases of the load process
The complete load process is divided into four distinct phases.
Load phase:
Data is loaded into the table.
Index keys and table statistics are collected.
Consistency points are recorded.
Invalid data is placed in the dump file, and messages are recorded in the message file. Rows that are inconsistent with the table definition are considered invalid and are rejected (not loaded into the table). Use the DUMPFILE modifier to specify the name and location of the file that records rejected rows.
Build phase:
Indexes are built from the keys collected during the load phase.
Delete phase:
Rows that cause violations are deleted and placed in the exception table. Besides the invalid data described above, which does not match the target table definition, some data may pass the load phase but violate a uniqueness constraint defined on the table. Note: only unique key violations are treated as bad data here; no other constraints are checked at this point. Since such data has already been loaded into the table, the LOAD utility deletes the violating rows during this phase. An exception table can be used to store the deleted rows so that you can decide how to handle them after the load completes. If no exception table is specified, the violating rows are deleted without any tracking. Exception tables are discussed in more detail below.
Messages are recorded in the message file.
Index copy phase:
If ALLOW READ ACCESS was specified with the USE TABLESPACE option, the index data is copied from the system temporary table space to the table space where the index should reside.
The exception table is a user-defined table that must have the same column definitions as the target table being loaded. If even one column is missing from the exception table, the violating rows are discarded. Only two additional columns may be added at the end of the table: a timestamp column recording when the row was inserted, and a CLOB column storing the reason (or message) why the row was considered bad.
LOAD FROM emp.ixf OF ixf
  MODIFIED BY dumpfile=c:/emp.dmp
  MESSAGES msg.out
  TEMPFILES PATH d:/tmp
  INSERT INTO employee
  FOR EXCEPTION empexp
In the figure, (1) shows the contents of the input source file.
(2) The target table EMPLOYEE is created with the following definition:
The first column must be unique.
The last column is a numeric column that cannot be NULL.
(3) The exception table EMPEXP is created with the same columns as EMPLOYEE, plus the timestamp and message columns.
During the load phase, all of the data in the input file is loaded into EMPLOYEE, except for the two rows marked in pink, which do not meet the NOT NULL and numeric column definitions. Because the DUMPFILE modifier was specified, those two rows are recorded in the file c:/emp.dmp.
During the delete phase, the two rows marked in yellow are deleted from EMPLOYEE and inserted into the exception table EMPEXP, because they violate the uniqueness of the first column of the EMPLOYEE table.
At the end of the load, you should examine the message file, the dump file, and the exception table, and then decide how to deal with the rejected rows.
If the load completes successfully, the temporary files generated in d:/tmp are removed.
# Load options
ROWCOUNT n: loads only the first n records of the input file.
SAVECOUNT n: establishes a consistency point after every n records are loaded. A message is generated and recorded in the message file to indicate how many input rows were successfully loaded at the time of the save point. This is not possible when the input file type is CURSOR.
WARNINGCOUNT n: stops the load after n warnings have been issued.
INDEXING MODE [REBUILD | INCREMENTAL | AUTOSELECT | DEFERRED]: builds indexes during the build phase. This option specifies whether the LOAD utility rebuilds indexes or extends them incrementally. Four modes are supported:
REBUILD mode forces all indexes to be rebuilt.
INCREMENTAL mode extends the indexes with the new data only.
AUTOSELECT mode lets the utility choose between REBUILD and INCREMENTAL.
DEFERRED mode means no indexes are created during the load. The indexes involved are marked as needing a refresh; they are rebuilt when the database restarts, or the first time such an index is accessed.
STATISTICS [YES | NO]: after a load completes, the previous target table statistics are most likely no longer valid, since much more data has been added. You can choose to collect fresh statistics by specifying STATISTICS YES.
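A sketch combining several of these options; the file and table names are illustrative, and REPLACE mode is used because load-time statistics collection is generally only supported when replacing the table's contents:
db2 load from mydata.del of del savecount 10000 warningcount 100 messages msg.out replace into mytable statistics yes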
# File type modifiers
File type modifiers are specified with the MODIFIED BY clause. Here are a few modifiers you may find useful:
fastparse: reduces the syntax checking applied to the loaded data, to enhance performance.
identityignore, identitymissing, and identityoverride: ignore, flag as missing, or override identity column data.
indexfreespace n, pagefreespace n, and totalfreespace n: keep the specified amount of free space in index and data pages.
norowwarnings: suppresses row warnings.
lobsinfile: indicates that LOB files are to be loaded; the LOBS FROM option is checked for the LOB path.
## Table Access during loading
While a table is being loaded, the LOAD utility locks it exclusively. No other access is allowed until the load completes. This is the default behavior, the ALLOW NO ACCESS option. During such a load, the table is in the LOAD IN PROGRESS state. There is a convenient command to check the status of a load operation; it returns the table state:
LOAD QUERY TABLE table_name
As you might guess, there is an option that allows table access. The ALLOW READ ACCESS option causes the table to be locked in share mode. Readers can access the data that already exists in the table, but not the new data; data being loaded is not available until the load completes. This option places the table in both the LOAD IN PROGRESS state and the READ ACCESS ONLY state.
As mentioned above, during the build phase you can either rebuild all indexes or extend them with the new data. With the ALLOW READ ACCESS option, if all indexes are rebuilt, a shadow copy of the indexes is created. When the LOAD utility reaches the index copy phase (see the four phases of the load process), the target table is taken offline and the new indexes are copied into the target table space.
Regardless of which table access option is specified, the load requires various locks in order to proceed. If an application already holds a lock on the target table, the LOAD utility waits until that lock is released. Instead of waiting, you can use the LOCK WITH FORCE option of the LOAD command to force off other applications that hold conflicting locks.
## Checking the Check Pending state
At this point we know that input data inconsistent with the target table definition never enters the table: during the load phase such data is rejected and recorded in the message file, and during the delete phase the LOAD utility deletes rows that violate any unique constraint, inserting them into the exception table if one was specified. What about other constraints that may be defined on the table, such as referential integrity and check constraints? The LOAD utility does not check them. Instead, the table is placed in the Check Pending state, forcing you to manually check data integrity before the table can be accessed. As discussed above, you can query the table state with the LOAD QUERY command. The column CONST_CHECKED in the system catalog table SYSCAT.TABLES also indicates the state of each constraint defined on the table.
Possible state values: Y, N, U, W, F
To take one or more tables out of the Check Pending state, use the SET INTEGRITY command. A few examples demonstrate parts of this command. To immediately check the integrity of the tables EMPLOYEE and STAFF, use the following command:
Set Integrity for Employee, Staff Immediate Checked Incremental
To bypass foreign key checking for the table EMPLOYEE, use the IMMEDIATE UNCHECKED option:
SET INTEGRITY FOR employee FOREIGN KEY IMMEDIATE UNCHECKED
After a load completes, you may sometimes want to place both the target table and its descendent tables (those with foreign key relationships to it) in the Check Pending state. This ensures that all of those tables remain controlled until data integrity has been checked manually. The load option that does this is CHECK PENDING CASCADE IMMEDIATE, which indicates that the Check Pending state of foreign key constraints is immediately extended to all descendent foreign key tables. By default, only the loaded table is placed in the Check Pending state; that is the CHECK PENDING CASCADE DEFERRED behavior.
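A sketch of the cascade option on a LOAD command (names reuse the earlier dump-file example; exact option placement should be checked against the LOAD syntax for your release):
db2 load from emp.ixf of ixf insert into employee for exception empexp check pending cascade immediate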
## Import vs. Load

| IMPORT | LOAD |
| --- | --- |
| Slower on large amounts of data | Faster on large loads; writes formatted pages |
| Creation of tables and indexes with IXF | Tables and indexes must exist |
| WSF supported | WSF not supported |
| Imports into tables and views | Loads into tables only |
| No support for importing into materialized query tables | Materialized query tables supported |
| All rows logged | Minimal logging |
| Triggers will be fired | Triggers not supported |
| Temporary space used within the database | Temporary space used outside the database |
| Constraints validated during import | Unique keys verified during load; other constraints validated with the SET INTEGRITY command |
| If interrupted, table is usable with data up to the last commit point | If interrupted, the table is held in Load Pending state; either restart the load or restore the affected tables |
| Run RUNSTATS after import for statistics | Statistics can be gathered during load |
| Can import into mainframe databases via DB2 Connect | Cannot load into mainframe databases |
| No backup image required | Backup can be created during load |