IDS Training Documentation
----------------------------------------------------------------

Chapter 1  The Informix Dynamic Scalable Architecture

1. Relational database server architectures

Commercial relational databases currently use three main architectures:

1) A process per request. For each database service request, the database system allocates a dedicated service process.
Advantages: different users' database sessions are fully isolated, and the resources of SMP machines are easy to exploit.
Disadvantages: heavy consumption of system memory and CPU resources; because the operating system does the process switching, running efficiency is not high.

2) A multithreaded architecture.
Advantages: frequent operating system process switches are avoided, saving memory and CPU resources; parallelism and running efficiency are high.
Disadvantages: the system is relatively fragile, since a misbehaving thread can bring the whole system down; the database must do its own thread switching instead of relying on the operating system; a large operation may cause an unbalanced distribution of system resources.

3) A hybrid architecture. Such a system is built from (1) a multithreaded network listener, (2) a task dispatcher with request/response queues, and (3) a pool of database server processes.
Advantages: tasks are processed with a combination of parallelism and queuing, so the system is efficient.
Disadvantages: load balancing is relatively crude.

2. Composition of the IDS system

The IDS (Informix Dynamic Server) database system consists of the following parts:

1) Shared memory, made up of the resident portion, the virtual portion, and the message portion.
(1) The resident portion is mainly used for disk data buffering and system data (buffers, physical/logical log buffers, LRU queues, chunks, dbspaces, users, locks, ...).
(2) The virtual portion is mainly used for VP management information and various pools (global pool, dictionary pools, procedure pools, sort pools, session pools, big buffer pools, MT pools). The virtual portion can grow during use; the growth increment is defined in the configuration file.
(3) The message portion is mainly used for information exchange between the application (client) and the database engine (server) when they connect through shared memory.
2) The disk data space.
3) The database engine: the virtual processors, VPs (Virtual Processors).

3. The IDS multithreaded architecture

Online 7.0 changed the one-to-one client/server model of Online 5.0, in which each database request started its own sqlexec database engine process, into a model in which a certain number (dynamically adjustable) of database engine processes, the VPs (virtual processors), serve all database requests. Online 7.0 divides the database engine by function into multiple VPs; to the operating system these VPs are oninit processes. Each database service request can be split into several parallel threads, served by different VPs in parallel. A thread can be defined as a sequentially executed program. A virtual processor (VP) can be defined as a database process that performs one class of database work.
VP classes include: CPU, PIO (writes the physical log), LIO (writes the logical log), AIO (performs disk I/O), and so on. Threads run on VPs, and thread scheduling is done by the CPU VPs. In this respect a VP is analogous to a hardware CPU, and a thread is analogous to a process running on it.

Advantages of the multithreaded architecture:
(1) Fan-in: for many application requests, the database uses only a small number of VPs, and their number can be adjusted dynamically as needed.
(2) Fan-out: multiple database VPs can serve a single application request.
(3) Thread switching inside the database avoids operating system process switching, so it is fast and efficient.
(4) The multithreaded architecture is better suited to multi-CPU machines, for example through features such as processor affinity.

4. IDS client/server connection methods

1) IDS clients and servers can be connected by:
(1) shared memory (onipcshm);
(2) pipes;
(3) network connections, based on the TCP/IP protocol (ontlitcp, onsoctcp) or the IPX/SPX protocol (ontlispx).
The application interface for the TCP/IP network protocol can use either sockets or TLI (Transport Layer Interface). With a network connection, the client and the server must use the same network protocol (TCP/IP or IPX/SPX), but the client may use the socket interface while the server side uses the TLI interface.

2) Client/server connections based on shared memory.

3) Client/server connections based on TCP/IP:
(1) Specify the server's host address in the /etc/hosts file on both client and server.
(2) Specify the communication port and network protocol in the /etc/services file on both client and server.
(3) Specify how to reach the server in the $INFORMIXSQLHOSTS file on both client and server, including the server name, the server's network address, and the server's network access method (protocol and port number).
(4) Check the network security files, including /etc/hosts.equiv and the users' ~/.rhosts files.
(5) Set the environment variables on both client and server, including INFORMIXSERVER and INFORMIXSQLHOSTS.

5. The IDS disk data structures

1) Data storage concepts

(1) Page. The page is Online's most basic unit of data storage. An Online data page contains:
  a. a 24-byte header (containing a 4-byte timestamp);
  b. the data area;
  c. the slot table, 4 bytes per entry: the first 2 bytes store the record's offset within the page, and the last 2 bytes store the record's size. A page holds at most 255 slots, so a data page stores at most 255 records;
  d. a timestamp at the end of the page.

(2) Extent. An extent is a set of physically contiguous pages (at least 4).
Storage space for a database table is allocated in extents.

(3) Tblspace. A tblspace is a logical combination of extents: it consists of all the extents allocated to one table. A tblspace can belong to only one dbspace, but it may span multiple chunks.

(4) Chunk. A chunk is a piece of physical storage assigned to Online; it can be a UNIX file or a raw device.

(5) Dbspace, blobspace. A dbspace is a logical combination of chunks. The database administrator can add chunks to a dbspace to increase the database's storage space. When a dbspace is created, its primary chunk must be specified.

2) Logs

The database log is the means of maintaining the consistency of database data. IDS logs are divided into the physical log and the logical logs. The physical log maintains the physical consistency of the database: before data is modified, the page containing the data is saved in the physical log, so the physical log is also referred to as the "before-image" log. The logical logs maintain the logical consistency of the database: every change to the database is recorded in the logical logs. The physical log is automatically emptied after each checkpoint; a logical log file can be freed only after it has been backed up, all transactions it contains have been committed, and it is not the current log file.

A long transaction is a transaction that has not yet ended although the logical log files holding its records have filled up. Such logical log files cannot be reused, yet the transaction's log records must still be written to a logical log file. In this situation IDS blocks database requests while it rolls back the long transaction.

6. IDS fault tolerance

1) Checkpoints. The checkpoint is an important system feature of IDS. IDS uses checkpoints to ensure that the data in the shared memory buffers is consistent with the data on the physical disk.
A checkpoint includes the following steps:
(1) block entry into critical sections;
(2) flush the physical log buffer in shared memory to the physical log file on the physical disk;
(3) flush the modified pages in the shared memory buffers to their pages on the physical disk;
(4) write a checkpoint record to the logical log and the system reserved pages;
(5) logically empty the physical log file;
(6) flush the logical log buffer to the current logical log file on the physical disk.
Note: before flushing modified pages to the physical disk, IDS must first flush the physical log buffer to the physical log file on disk.

2) Fast recovery. IDS uses fast recovery to ensure that, on restart, the database quickly returns to the consistent state it had at shutdown. It includes the following steps:
(1) restore the data in the physical log file to the shared memory buffers and to disk;
(2) locate the last consistency point, the checkpoint record, in the logical logs;
(3) replay the logical log records written after that checkpoint, redoing transactions that had already committed and rolling back transactions that had not committed.

3) IDS data buffering. Operations on database data are performed on the data in the shared memory buffers.
For example, consider modifying a database record:

    begin work;
    update tab1 set fld1 = ? where fld2 = ?;
    commit work;

IDS processes this as follows:
(1) After the application connects to IDS (connect ..., database ...), IDS starts an sqlexec thread to serve the request.
(2) sqlexec parses the SQL statement and produces an execution plan.
(3) sqlexec writes a transaction-begin record into the logical log buffer.
(4) sqlexec requests the data page in the chunk and applies for the corresponding lock resources.
(5) IDS first searches the LRU queues of the buffer pool to see whether the data page is already in a buffer.
(6) If it is not in a buffer, IDS looks for a free page in an FLRU queue. If none is found, IDS starts a foreground write to obtain a free buffer page. IDS then reads the data page from disk into the buffer. Before modifying the data, IDS writes the page's before-image into the physical log buffer, then performs the modification, and continues with step (8).
(7) If the page is in a buffer on an FLRU queue, IDS writes the before-image into the physical log buffer and then modifies the data. If it is on an MLRU queue (already modified), IDS modifies it directly.
(8) IDS writes a logical log record describing the modification into the logical log buffer.
(9) At commit, IDS releases all lock resources of the transaction and writes a transaction-end record into the logical log buffer.

7. Monitoring IDS

1) IDS states: (1) off-line, (2) quiescent, (3) on-line, (4) shutdown, (5) recovery.

2) Causes of system blocking:
    CKPT           checkpoint
    LONGTX         long transaction
    ARCHIVE        ongoing archive
    MEDIA_FAILURE  media failure
    HANG_SYSTEM    database server failure
    DBS_DROP       dropping a dbspace
    DDR            discrete high-availability data replication
    LBU            logs-full high-water mark

3) Monitoring tools. Users can monitor IDS with the SMI (System Monitoring Interface), the onstat tool, and the oncheck tool.
1) Using the system monitoring interface. The SMI provides direct access to DSA management information.
2) Using the onstat monitoring tool:
(1) monitor the database message log online.log: onstat -m
(2) monitor shared memory segment usage: onstat -g seg
(3) monitor logical log usage: onstat -l
(4) monitor chunk usage: onstat -d
(5) monitor on-line sessions: onstat -g ses
(6) monitor one particular session: onstat -g ses sessid
(7) monitor on-line user threads: onstat -u
(8) monitor lock resource usage: onstat -k
(9) monitor buffer flushing activity: onstat -f
(10) monitor LRU queue usage: onstat -R
(11) monitor I/O on all chunks: onstat -g iof
(12) monitor on-line threads: onstat -g ath
(13) monitor on-line VPs: onstat -g glo
(14) monitor system usage efficiency (profile counts): onstat -p
(15) monitor PDQ usage: onstat -g mgm
(16) monitor the ready queue: onstat -g rea
(17) monitor the wait queue: onstat -g wai
(18) monitor the sleep queue: onstat -g sle
(19) monitor active transactions: onstat -x
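Before leaving Chapter 1, the interplay of before-images, checkpoints, and fast recovery described in section 6 can be illustrated with a toy model. This is plain Python, not Informix code; the class and its names are invented for illustration only: pages are modified in a buffer, each page's before-image is saved to a "physical log" first, a checkpoint flushes buffers and empties the physical log, and recovery restores before-images so the disk reflects the last checkpoint.

```python
# Toy model of IDS physical logging and fast recovery (illustrative only).

class ToyServer:
    def __init__(self):
        self.disk = {}          # page id -> value on "disk"
        self.buffer = {}        # page id -> value in shared-memory buffer
        self.physical_log = {}  # page id -> before-image saved at first modify

    def modify(self, page, value):
        # Read the page into the buffer on first touch.
        if page not in self.buffer:
            self.buffer[page] = self.disk.get(page)
        # Save the before-image once per checkpoint interval, then modify.
        if page not in self.physical_log:
            self.physical_log[page] = self.disk.get(page)
        self.buffer[page] = value

    def checkpoint(self):
        # Flush modified pages to disk, then empty the physical log.
        self.disk.update(self.buffer)
        self.physical_log.clear()

    def fast_recovery(self):
        # Crash: buffer contents are lost; before-images bring the disk
        # back to the state of the last checkpoint.
        self.buffer.clear()
        for page, before in self.physical_log.items():
            if before is None:
                self.disk.pop(page, None)
            else:
                self.disk[page] = before
        self.physical_log.clear()

srv = ToyServer()
srv.modify("p1", "A")
srv.checkpoint()          # "A" is now durable
srv.modify("p1", "B")     # modified after the checkpoint
srv.fast_recovery()       # crash before the next checkpoint
print(srv.disk["p1"])     # -> A (the state of the last checkpoint)
```

The toy model omits the logical log replay of step (3) of fast recovery; it shows only why the before-image must reach the physical log before the data page is modified.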
Chapter 2  Concurrency Control

Concurrency means that multiple users operate on the same data at the same time. Concurrent operations raise data consistency problems, and IDS guarantees the consistency of concurrently accessed data through locking.

1. Types of lock

1) Shared lock (S). A shared lock prevents other users from modifying the locked object, but multiple users can hold shared locks on the same object at the same time.
2) Exclusive lock (X). An exclusive lock on an object denies all other users access to it.
3) Promotable lock (update lock, U). The promotable lock is used with an update cursor. Before the data is modified it behaves as a shared lock; when the data is actually modified, it is promoted to an exclusive lock.

The lock compatibility table is as follows:

                  A holds:   S    X    U    none
    B requests S:            Y    N    Y    Y
    B requests X:            N    N    N    Y
    B requests U:            Y    N    N    Y

    S: shared lock   X: exclusive lock   U: promotable lock
    A: the user holding the lock   B: the user requesting the lock
    Y: the lock request succeeds   N: the lock request fails

2. Lock granularity

Lock granularity is the size, or scope, of the lockable object. The larger the granularity, the lower the degree of concurrency but the smaller the lock resource overhead; the smaller the granularity, the higher the degree of concurrency and the larger the overhead.

1) Database-level lock:
    database dbname [exclusive];
By default, IDS places a shared lock on the database when a user opens it with: database dbname.

2) Table-level lock:
    lock table tablename in exclusive/share mode;
IDS automatically places a table lock on a table when performing:
(1) alter index  (2) alter table  (3) create index  (4) drop index  (5) rename column  (6) rename table
A table-level shared lock prevents any modification of the table's data, but other users can still read the data. A table-level exclusive lock prevents any access by other users (except in dirty-read mode).
For operations that modify a large number of rows of a table, consider using a table-level exclusive lock.
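The compatibility table above can be encoded directly. A minimal sketch in plain Python (illustrative only, not an Informix API):

```python
# Lock compatibility: may user B acquire lock `requested`
# while user A holds lock `held`?
COMPATIBLE = {
    ("S", "S"): True,  ("S", "X"): False, ("S", "U"): True,
    ("X", "S"): False, ("X", "X"): False, ("X", "U"): False,
    ("U", "S"): True,  ("U", "X"): False, ("U", "U"): False,
}

def can_grant(held, requested):
    """Return True if the requested lock is granted immediately."""
    if held is None:          # A holds no lock on the object
        return True
    return COMPATIBLE[(held, requested)]

print(can_grant("S", "S"))   # -> True  (shared locks coexist)
print(can_grant("S", "X"))   # -> False (a writer must wait for readers)
print(can_grant("U", "U"))   # -> False (only one update cursor at a time)
```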
3) Page lock, row lock, and key lock. The object locked by a page lock is an IDS data page; the object locked by a row lock is a single record. When creating a table, the user can choose page locks or row locks for later operations on the table; the default lock mode is page locking:
    create table tablename (...) lock mode page/row;
When IDS needs to lock a record, it locks according to the lock mode defined when the table was created. The key lock is the method used to lock a record that does not exist; it actually locks the position the record would occupy in the table's index. For example, when IDS deletes a record inside a transaction, it locks the key value corresponding to the record, so that other users cannot insert an identical record before the transaction ends.
3. The life cycle of locks

A database-level lock is released only when the database is closed. The life cycles of table-level locks, row locks, and key locks depend on the database operation (select, update, delete, and so on) and on whether database transactions are used. If the database does not use logging, IDS releases a table-level lock only when the user explicitly unlocks it (unlock table tablename). In summary: when a database transaction ends, all locks used in the transaction are released; a lock acquired during a transaction is held from the moment it is acquired until the end of the transaction.
4. Isolation levels

The isolation level is the degree to which an application is isolated from other concurrent applications in a concurrent environment. By setting the isolation level, an application specifies how lock resources are used when it reads data. Using isolation levels requires database logging to be enabled.

1) set isolation to dirty read;
The application places no lock at all when reading data, and ignores whether other applications are operating on the data at the time. Considering only system overhead, this is the most efficient way to read data.

2) set isolation to committed read;
Before reading data, the application checks whether another application is modifying it, ensuring that the data it reads has been committed. However, no lock is placed on the data while it is being read, so data the user has just read may be modified by other users. This is the default read mode.

3) set isolation to cursor stability;
If a cursor is used, a shared lock is placed on the record being read and released when the next record is read; that is, only the current record of the cursor keeps a shared lock. If no cursor is used, this behaves the same as committed read.

4) set isolation to repeatable read;
Shared locks are placed on all the data read, and all lock resources are released only at the end of the transaction. With this level, shared locks are placed on all scanned data, even records that do not satisfy the query conditions but are touched by the scan. (If a full table scan is performed, this level has the same effect as a table-level lock.)

5. Lock conflict modes

When applications collide over lock resources, the conflict can be handled by setting the lock mode.
1) set lock mode to wait;
On a lock conflict, the application waits without limit until the required lock resource is obtained.

2) set lock mode to not wait;
On a lock conflict, control returns to the application immediately with a database error, for example: -107: Record is locked. This is the IDS default conflict mode.

3) set lock mode to wait n; (n is the wait time set by the application)
On a lock conflict, the application keeps retrying the required lock resource for up to n seconds, until the request succeeds or the time n is used up.

When IDS handles OLTP (online transaction processing) business, such as real-time savings, accounting, or credit card transactions, the E/C program should first set a lock conflict wait time; this avoids many unnecessary database operation failures caused by lock conflicts.

6. Deadlock

Suppose user A holds lock X and user B holds lock Y. Now user A requests lock Y while user B requests lock X. Users A and B each wait for the other to release its lock, and a deadlock appears. IDS itself provides a detection and recovery mechanism to avoid deadlock.
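Deadlock detection of this kind is commonly implemented as cycle detection in a waits-for graph. A minimal sketch in plain Python (illustrative; not the actual IDS algorithm):

```python
# Waits-for graph: waits_for[u] = set of users that u is waiting on.
# A deadlock exists exactly when the graph contains a cycle.

def has_deadlock(waits_for):
    """Detect a cycle in the waits-for graph by depth-first search."""
    visiting, done = set(), set()

    def dfs(user):
        if user in visiting:      # reached a user already on this path
            return True
        if user in done:
            return False
        visiting.add(user)
        for other in waits_for.get(user, ()):
            if dfs(other):
                return True
        visiting.remove(user)
        done.add(user)
        return False

    return any(dfs(u) for u in waits_for)

# A holds X and waits for Y; B holds Y and waits for X.
print(has_deadlock({"A": {"B"}, "B": {"A"}}))  # -> True
print(has_deadlock({"A": {"B"}, "B": set()}))  # -> False
```

When a cycle is found, a real server breaks it by rolling back one of the participating transactions.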
7. Example

Suppose two users, A and B, work in the same database workdb. Test script:

    create table test (code char(3), time char(20), name char(20)) lock mode page;
    create unique index m_idx on test (code);

Perform the following operations:

(1) User A executes:
    begin work;
    update test set time = "2" where code = "sam";

(2) User B executes at the same time:
    select * from test where code = "abc";
Question: what happens? User B's select fails. User A has acquired page-level locks (held until the end of the transaction), and user B tries to scan a page that is locked by a page-level lock, so the operation fails.

(3) User A executes:
    rollback work;
    alter table test lock mode (row);
    begin work;
    update test set time = "2" where code = "sam";

(4) User B executes:
    select * from test where code = "hro";
Explanation: this operation succeeds. It locates the record through the index, and that record is not the one locked by the row lock of step (3).

(5) User B executes:
    select * from test;
Explanation: this operation fails. It tries to scan the table test sequentially, step (3) holds a row lock on test, and this operation uses committed read (the system default).

(6) User B executes:
    set isolation to dirty read;
    select * from test;
Explanation: this operation succeeds. Dirty-read mode ignores lock conflicts.

(7) User B executes:
    set isolation to committed read;
    set lock mode to wait;
    select * from test;
Explanation: this operation waits. With committed read and lock-conflict waiting set, the operation blocks on the lock held by user A, waiting for the lock resource to be released.
(8) User A executes:
    commit work;
Explanation: this releases the lock resources acquired in step (3), and the waiting operation of step (7) completes successfully.

(9) User A then executes:
    set isolation to repeatable read;
    begin work;
    update test set time = "2" where name = "julio";
Explanation: the name field has no index, so this operation scans the table sequentially; in addition, because repeatable read is set, all records of the test table are locked.
(10) User B executes:
    set lock mode to not wait;
    update test set time = "2" where code = "sam";
Explanation: this operation fails. Although it locates the record through the index, user A in step (9) has locked (with shared locks) all records of test, and user B has set the no-wait conflict mode, so the operation fails.

(11) User B executes:
    select * from test where code = "sam";
Explanation: this operation succeeds. It locates the record through the index, and the committed-read mode it uses does not conflict with the shared locks placed in step (9).

Chapter 3  Index Strategy

IDS uses a B+ tree index structure.

1. Advantages of indexes

1) Replacing sequential scans with index lookups improves query speed.
2) Indexes speed up data sorting.
3) A unique index guarantees the uniqueness of the indexed field.
4) When only indexed fields are queried, reading the full record contents can be avoided.

2. Principles for creating indexes

(1) Create indexes on join fields. For a join operation, at least one side of the join expression should be indexed; otherwise, before the join, IDS automatically builds a temporary index (for a sort-merge join or a nested-loop join) or scans the table sequentially (for a hash join).
(2) Create indexes on fields frequently used for sorting.
(3) Create indexes on filter fields.
(4) Make use of composite indexes.
(5) Keep the duplication rate of indexed fields low.
(6) In a composite index, fields with a low duplication rate (high selectivity) should come first, and fields with a high duplication rate later.
(7) The indexed fields should not be too long.
(8) Consider a clustered index to improve query speed. A clustered index makes the indexed table's records physically stored in the order of the clustered index.
That is, a clustered index keeps the index order consistent with the storage order of the data records, so a query scans less data than with an ordinary index. Tables that are queried frequently but rarely modified or deleted can therefore take full advantage of a clustered index to improve query speed.
(9) Index lookups on numeric fields are faster than on fields of other types, such as strings.
(10) A table should not have too many indexes. Excessive indexes slow down data insertion, deletion, and modification to some extent.
(11) Use partial-key searches to improve index utilization. For example, if the index idx(f1, f2, f3, f4) is built on table tab, then queries on tab filtered by (f1, f2, f3, f4), by (f1, f2, f3), by (f1, f2), or by (f1) can all use the index idx(f1, f2, f3, f4).
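The partial-key rule in point (11) is a leading-prefix check. A minimal sketch in plain Python (illustrative names; simplified so that only queries whose filter columns form exactly a leading prefix of the index are counted):

```python
# Can a query that filters on `query_fields` use a composite index
# whose column order is `index_fields`?  Only a leading prefix of the
# index columns can be used for positioning.
def can_use_index(index_fields, query_fields):
    prefix_len = 0
    for field in index_fields:
        if field in query_fields:
            prefix_len += 1
        else:
            break
    # usable if the filter columns are a non-empty leading prefix
    return prefix_len > 0 and len(query_fields) == prefix_len

idx = ["f1", "f2", "f3", "f4"]
print(can_use_index(idx, {"f1", "f2"}))   # -> True  (leading prefix)
print(can_use_index(idx, {"f2", "f3"}))   # -> False (f1 missing)
```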
3. Building indexes in parallel
Chapter 4  Parallel Data Query (PDQ)

1. PDQ technology

Informix PDQ (Parallel Data Query) technology splits a large database operation into multiple parallel tasks, making full use of the parallel processing power of multiprocessor machines to complete data queries much faster than an ordinary query. Informix's PDQ technology mainly includes the following parallel operations:
1) parallel scans
2) parallel sorts
3) parallel joins
4) parallel groups
5) parallel aggregates
2. Parallel insert

1) Since Informix version 7.0, certain data insert operations can be executed in parallel:
(1) insert into tabname select ... where ...;
The insert and the select execute in parallel. If the target table and the source table are fragmented and Informix uses multiple CPU VPs, the insert itself can also be executed in parallel.
(2) select ... from ... where ... into temp tabname;
With this form, the insert and the select run in parallel, and the temporary table tabname is spread in round-robin fashion across the specified temporary dbspaces.
2) Cases in which parallel insert is not used:
(1) the target table uses referential integrity constraints (a primary key or foreign key is defined) or has a trigger;
(2) the target table is a table in a remote database across the network;
(3) the insert involves BLOB fields;
(4) the target table is in "filtering" mode (constraints are enabled, and rows that violate them are filtered out rather than rolling back the statement).

3. Using PDQ

First, a PDQ operation must be declared by executing:
    set pdqpriority high;
which turns the PDQ switch on. After execution, the PDQ switch should be turned off:
    set pdqpriority low;
Second, to get the most out of PDQ, the tables being operated on should be fragmented; finally, PDQ works best on multi-CPU machines. PDQ is not used when:
(1) the query uses the cursor stability isolation level;
(2) the query uses an update cursor or a cursor declared with hold;
(3) the query uses nested subqueries;
(4) the query calls stored procedures;
(5) the query contains no scan, join, sort, group, or aggregate.

4. Monitoring PDQ

Use onstat -g mgm to monitor PDQ usage.
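The idea behind a parallel aggregate (split the data, aggregate each fragment on its own worker, then combine the partial results) can be sketched in plain Python. This is illustrative only; IDS does this with threads on CPU VPs, not with this code:

```python
# Parallel aggregate sketch: SUM over a table split into fragments.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(rows, workers=4):
    """Split `rows` into `workers` fragments, sum each fragment in
    parallel, then combine the partial sums."""
    fragments = [rows[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, fragments)
    return sum(partials)

rows = list(range(1, 101))
print(parallel_sum(rows))        # -> 5050, same as sum(rows)
```

The same split/combine shape applies to the other PDQ operations: scans read fragments independently, and sorts merge independently sorted runs.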
Chapter 5  Data Fragmentation

Informix data fragmentation refers to distributing the data of one database table across different dbspaces. Data fragmentation is mainly suited to tables with a large data volume (more than about 50,000 records) that are accessed frequently.
1. Fragmentation techniques

1) Round-robin. Round-robin fragmentation distributes a table's data evenly across the specified dbspaces. For tables that are often scanned sequentially, or into which data is loaded frequently, consider the round-robin strategy, spreading the data evenly over several dbspaces located on different physical disks. For example:
    create table satmxhz (...) fragment by round robin in workdbs1, workdbs2, workdbs3;

2) Expression. Expression fragmentation distributes data across the specified dbspaces according to conditions. For tables that are often scanned by certain conditions, and into which data is rarely loaded, consider expression fragmentation to distribute the data over several dbspaces. For example:
    create table acdb1 (...) fragment by expression
        fb1z1 in ("1", "2", "3") in workdbs1,
        fb1z1 in ("4", "5", "6") in workdbs2,
        remainder in workdbs3;
With expression fragmentation, Online can skip fragments that cannot satisfy the query conditions, which reduces the amount of data scanned and improves query efficiency.
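The two placement strategies can be sketched side by side in plain Python (illustrative only, not Informix internals; the dbspace and column names echo the examples above):

```python
# Row placement under the two fragmentation strategies.

def round_robin_place(rows, dbspaces):
    """Distribute rows evenly across dbspaces in rotation."""
    placement = {d: [] for d in dbspaces}
    for i, row in enumerate(rows):
        placement[dbspaces[i % len(dbspaces)]].append(row)
    return placement

def expression_place(rows, key, rules, remainder):
    """Place each row in the first dbspace whose rule accepts its key."""
    placement = {d: [] for _, d in rules}
    placement[remainder] = []
    for row in rows:
        for accepted, dbspace in rules:
            if row[key] in accepted:
                placement[dbspace].append(row)
                break
        else:
            placement[remainder].append(row)
    return placement

rows = [{"fb1z1": c} for c in "1425379"]
rules = [({"1", "2", "3"}, "workdbs1"), ({"4", "5", "6"}, "workdbs2")]
by_expr = expression_place(rows, "fb1z1", rules, "workdbs3")
print([r["fb1z1"] for r in by_expr["workdbs1"]])  # -> ['1', '2', '3']
print([r["fb1z1"] for r in by_expr["workdbs3"]])  # -> ['7', '9']
```

Note how the expression rules determine which fragments a conditional scan can skip, while round-robin gives no such hint but balances the load.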
2. Advantages of data fragmentation

(1) Parallel scans. If data fragmentation is used and PDQ is turned on, a table scan reads data from every fragment in parallel, and query speed improves greatly.
(2) Balanced I/O. If the dbspaces used by the fragments sit on separate physical disks, a data scan proceeds on several disks simultaneously, disk contention is reduced, and disk I/O is balanced. This is very beneficial for OLTP workloads.
(3) High reliability. By setting the option to skip a fragment with an error, an error in a single fragment does not invalidate the whole table. This is very useful for statistics over large amounts of dynamic data, for example in a decision support system, DSS (Decision Support System).
(4) Finer backup and restore granularity. Since data backup and restore can be performed at the dbspace level, and fragments reside in dbspaces, backup and restore can effectively be performed per fragment.
3. The rowid problem

An IDS rowid consists of 4 bytes:
    bits 0-7:  the record's slot number within the page;
    bits 8-31: the logical page number within the tblspace.
With data fragmentation, a table spans several tblspaces in several dbspaces, so the rowid is no longer unique within a table, and applications should not operate on rowids directly. In view of this, after Informix 7.0 applications should no longer use rowids directly; a primary key can be defined on the table instead.
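The rowid layout above is a simple bit split: an 8-bit slot number packed with a 24-bit logical page number. A minimal sketch in plain Python (illustrative):

```python
# A rowid packs a 24-bit logical page number and an 8-bit slot number.
def make_rowid(page, slot):
    assert 0 <= slot <= 0xFF and 0 <= page <= 0xFFFFFF
    return (page << 8) | slot

def split_rowid(rowid):
    """Return (logical page number, slot number)."""
    return rowid >> 8, rowid & 0xFF

rid = make_rowid(page=1234, slot=5)
print(split_rowid(rid))   # -> (1234, 5)
```

The 8-bit slot field also explains the limit noted in Chapter 1: a page can hold at most 255 records.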
Chapter 6  Using ESQL/C

1. Using the prepare statement

After an SQL statement is prepared, it can be executed many times with different values supplied by the application, while syntax analysis is performed only once. When an SQL statement is executed repeatedly in the same application, using prepare can greatly improve performance and efficiency. Prepare syntax:
    $prepare p_id from "insert into tabname (...) values (?, ...)";

2. Using the insert cursor

The insert cursor writes rows through a shared memory buffer; because the number of I/O operations is reduced, performance improves. Insert cursor syntax:
    $declare cursor_name cursor for
        insert into tabname (...) values (...);
    ...
    $open cursor_name;
    ...
    $put cursor_name;
    ...
or:
    $prepare insert_name from "insert into tabname (...) values (?, ...)";
    ...
    $declare cursor_name cursor for insert_name;
    ...
    $open cursor_name;
    ...
    $put cursor_name;
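The prepare-once, execute-many pattern exists in most database APIs. A sketch using Python's sqlite3 module as a stand-in (this is not Informix or ESQL/C; the table and column names are invented for illustration):

```python
# Prepare-once / execute-many, shown with sqlite3 standing in for ESQL/C.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table tabname (code text, name text)")

# The parameterized statement plays the role of the prepared p_id;
# executemany reuses it for every row, like repeated $put on an
# insert cursor, instead of re-parsing an SQL string per row.
rows = [("001", "sam"), ("002", "hro"), ("003", "julio")]
conn.executemany("insert into tabname (code, name) values (?, ?)", rows)
conn.commit()

count = conn.execute("select count(*) from tabname").fetchone()[0]
print(count)   # -> 3
```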
3. Using the scroll cursor

The advantage of a scroll cursor is that previously fetched records can be retrieved again, which is useful for applications that need to revisit the result set repeatedly. However, when the table is large and many records are retrieved often, it produces a large temporary table and low efficiency. Recommendations:
1) When declaring a scroll cursor, select as few fields as possible, for example only the table's primary key, and then fetch the corresponding full records from the table as needed;
2) if a scroll cursor is not necessary, avoid it;
3) if the result is certain to contain only one record, avoid using a cursor at all;
4) avoid nesting cursors.
Scroll cursor syntax:
    $declare cursor_name scroll cursor for ...;
    ...
    $open cursor_name;

4. Using the update cursor

    $declare cursor_name cursor for ... for update;
    ...
    $update ... where current of cursor_name;
5. Using the SQL communication area SQLCA

Whenever an SQL statement is executed, Online returns the execution status in this structure, including:
1) the completion status of the most recently run SQL statement;
2) some performance information;
3) warnings about conditions that may occur or have occurred.
The SQLCA structure is:

    struct sqlca_s {
        long sqlcode;
        char sqlerrm[72];
        char sqlerrp[8];
        long sqlerrd[6];
        struct sqlcaw_s {
            char sqlwarn1;
            char sqlwarn2;
            char sqlwarn3;
            char sqlwarn4;
            char sqlwarn5;
            char sqlwarn6;
            char sqlwarn7;
        } sqlwarn;
    } sqlca;

When an SQL call fails, besides the usual error return code, the ISAM error code in sqlerrd[1] should also be examined.

6. Using stored procedures

A stored procedure is a set of database operations written by the user in SPL (Stored Procedure Language), which Informix provides. A stored procedure is created and used with the following syntax:
    $create procedure p_name (...)
        ...
    end procedure;
    ...
    $execute procedure p_name;
When a stored procedure is created, the database compiles it, generates its query plan, and stores it in the IDS system table sysprocedures. Any authorized user can then call the procedure. At call time, IDS reads the stored procedure from the system table sysprocedures, converts it into executable code, and runs it. For an SQL statement that is executed repeatedly, using a stored procedure can reduce program complexity; the interaction between the application and the database changes from SQL statements to a single stored procedure name, which greatly reduces the amount of data exchanged and so improves system performance. In addition, different applications can share one stored procedure, eliminating redundant code, which also helps code maintenance in a client/server environment.
By using the security levels of stored procedures, certain illegal operations can be restricted, for example forbidding users from accessing database tables directly.