Database Physical Design Experience Talk (reprinted)


Talking About the Physical Distribution Design of a Database

Author: CCBZZP

Overview

No matter which database we use, and no matter how we design it, I believe we should follow one principle: data security and high performance. There are too many topics under those two headings to cover them all, so here I will simply discuss the physical distribution design of the database. A good physical layout contributes a great deal to both data security and performance; it is like laying a solid foundation before putting up a building. In practice we often ignore the physical layout of the database when building applications, and only pay attention once performance problems appear. This not only leads to design-related problems but also hurts performance, so planning the physical layout before creating the database is necessary; it also fits the old saying that "sharpening the knife does not delay the chopping of firewood". Below, taking Oracle as an example, I will cover: optimizing the operating system; disk layout optimization and configuration; database initialization parameters; setting and managing memory; setting and managing CPUs; setting and managing tablespaces; setting and managing rollback segments; setting and managing online redo logs; setting and managing archived redo logs; and setting and managing control files.

I. Optimizing the operating system

To achieve the best server performance, optimizing the operating system is also necessary, because operating-system performance issues usually involve process management, memory management, scheduling, and so on. Users therefore need to ensure sufficient I/O bandwidth, CPU processing power and swap space, and reduce system time as much as possible. If the application does too much "busy" waiting on buffers, the number of system calls will increase; optimizing SQL statements can reduce the number of calls, but that does not address the root cause. Enabling Oracle's initialization parameter TIMED_STATISTICS increases the number of system calls; conversely, turning it off reduces them. The operating system's file cache and Oracle's own cache management overlap: keeping both consumes some resources, but it still benefits performance, because all database I/O that goes through the file system passes through the system file cache. Oracle may use many processes (threads on some systems), so users should make sure that all Oracle processes — background processes and user processes alike — run at the same priority. Otherwise performance deteriorates: high-priority processes end up waiting for low-priority ones to release the CPU, and binding Oracle background processes to a particular CPU can likewise starve them of CPU resources. Preferably, use the Operating System Resource Manager that some operating systems provide; it can reduce the impact of peak-load patterns by controlling users' access to system resources and limiting their resource consumption.

II. Disk layout optimization and configuration

In most production database applications, database files are placed on disk, so good use and layout of the disks is important. The goal of disk layout is that disk performance must not hinder database performance: the database disks should be dedicated to database files, otherwise non-database activity will affect the database in unpredictable ways; the hardware and mirroring must meet both recovery and performance requirements; data file sizes and I/O must not exceed the capacity and I/O limits of the disks; the database must be recoverable; and competition between background processes must be minimized. When planning the disk configuration, also note: first, the capacity of the disks used — sometimes several small disks work better than one large disk, because they allow more parallel I/O operations; second, the speed of the disks — reaction time and seek time affect I/O performance, so consider an appropriate file system for the data files; third, choose an appropriate RAID level. RAID (Redundant Arrays of Inexpensive Disks) can improve data reliability, while I/O performance depends on how the RAID is configured: RAID 1 provides good reliability and fast reads, but writes are relatively expensive, so it is not suitable for write-heavy workloads; RAID 0+1 adds faster reads on top of RAID 1, so it is an option people often choose; RAID 5 provides good reliability and suits sequential reads, but write performance suffers, so it is also unsuitable for frequent write operations. Which level to choose cannot be decided once and for all; it depends on the specific situation.
Some applications are inherently disk-bound, so the I/O system should be designed so that Oracle's performance is not limited by I/O. When designing the I/O system, consider the following database needs: storage — the minimum bytes the disks must hold; availability — for example 24x7 or 9x5; and performance — for example I/O throughput and response time. To obtain I/O statistics for Oracle files, query the following: physical reads (v$filestat.phyrds), physical writes (v$filestat.phywrts), and the average I/O rate, where average I/O per second = (physical reads + physical writes) / elapsed seconds. Estimating this data is useful for a new system: you can compare the I/O requirements of the new application against the I/O capability of the system, and adjust in time if they do not match.
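The per-file I/O arithmetic above can be sketched as follows. This is a minimal illustration; the phyrds/phywrts counts and the measurement window are made-up values, not figures from a real v$filestat query.

```python
# Average I/O rate per data file, as described above:
#   average I/O per second = (physical reads + physical writes) / elapsed seconds
def avg_io_per_second(phyrds, phywrts, elapsed_seconds):
    """Combine v$filestat-style read/write counts into an I/O rate."""
    return (phyrds + phywrts) / elapsed_seconds

# e.g. a data file with 120,000 reads and 30,000 writes over a one-hour window:
rate = avg_io_per_second(120_000, 30_000, 3_600)
print(round(rate, 1))  # roughly 41.7 I/Os per second
```

Comparing such per-file rates against the rated I/O capability of each disk shows whether a file needs to be moved or striped.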

III. Selecting database initialization parameters

The first phase of managing a database is creating it. Although performance can be tuned after the database is created, some parameters cannot be modified afterwards, or are difficult to modify, such as: DB_BLOCK_SIZE, DB_NAME, DB_DOMAIN, COMPATIBLE, NLS_LANGUAGE, NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET. The DB_BLOCK_SIZE parameter determines the size of the Oracle database block, which can be 2K, 4K, 8K, 16K or 32K. Moving to the next larger block size can generally improve query performance by up to 50%. As a rule, however, a large value is not advocated for ordinary servers (small machines excepted), because a larger block holds more rows, which increases the likelihood of block-level contention during database maintenance. The way to avoid this contention is to increase the FREELISTS, MAXTRANS and INITRANS settings at the table and index level; setting FREELISTS greater than 4 usually brings additional benefit. DB_NAME specifies the database identifier and is generally given in CREATE DATABASE; the parameter is optional (it is required with Oracle9i Real Application Clusters, where multiple instances must have the same value), but it is recommended to set it before CREATE DATABASE — if it is not specified, it has to appear in the STARTUP or ALTER DATABASE MOUNT command. DB_DOMAIN specifies the extension of the global database name; it too is required with Oracle9i Real Application Clusters, where multiple instances must have the same value.
COMPATIBLE specifies the release compatibility the Oracle server maintains, ensuring compatibility with earlier versions while allowing users to use features of the new version; it is required with Oracle9i Real Application Clusters, where multiple instances must have the same value. NLS_LANGUAGE, NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET are the character-set parameters of the database; they can never be changed, or are very difficult to change, after the database has been created, so they must be set correctly when the database is created.

IV. Setting and managing memory

Oracle uses shared memory to manage its memory and file structures. The main memory structure is the System Global Area (SGA). The SGA varies with the environment and there is no universally best setting; when sizing it we need only consider the following: how much physical memory there is; how the operating system manages memory; whether the database sits on a file system or on raw devices; and the database's running mode. The SGA consists of: Fixed Size, Variable Size, Database Buffers and Redo Buffers. There is no strict rule for the SGA's share of physical memory; the general guideline is that the SGA should occupy 40%-60% of physical memory. Expressed as a rough formula: OS memory + SGA + number of concurrent processes * (SORT_AREA_SIZE + HASH_AREA_SIZE + 2 MB) < 0.7 * RAM. This formula is only a reference and need not be followed rigidly; adapt it to the actual situation. Some parameters in the initialization parameter file have a decisive impact on the size of the SGA. The parameters DB_BLOCK_BUFFERS (the number of buffers in the SGA cache) and SHARED_POOL_SIZE (the number of bytes assigned to the shared SQL area) are the main influences on SGA size. The database buffers are the most important determinant of SGA size and database performance: a high value improves the system's hit rate and reduces I/O. The size of each buffer equals the parameter DB_BLOCK_SIZE, expressed in bytes. The shared pool part of the SGA consists of the library cache, the dictionary cache, and some other user and server session information, and is the largest consumer of the variable portion. Adjusting the size of each structure in the SGA can greatly improve system performance. The size of the data block buffer cache is DB_BLOCK_BUFFERS * DB_BLOCK_SIZE in 8i; in 9i, DB_CACHE_SIZE replaces this parameter.
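The rough memory-budget formula above can be checked mechanically. The figures below (a 4 GB box, 512 MB reserved for the OS, a 1.5 GB SGA, 1 MB sort and hash areas) are illustrative assumptions, not recommendations.

```python
# Rule-of-thumb budget check from the formula above:
#   OS memory + SGA + n_procs * (sort_area_size + hash_area_size + 2 MB) < 0.7 * RAM
MB = 1024 * 1024

def sga_budget_ok(ram, os_mem, sga, n_procs, sort_area, hash_area):
    """True if the planned SGA plus per-process memory fits under 70% of RAM."""
    per_process = sort_area + hash_area + 2 * MB
    return os_mem + sga + n_procs * per_process < 0.7 * ram

# 100 concurrent sessions fit comfortably on this hypothetical box...
print(sga_budget_ok(4096 * MB, 512 * MB, 1536 * MB, 100, 1 * MB, 1 * MB))  # True
# ...but 250 sessions push the total past the 70% ceiling:
print(sga_budget_ok(4096 * MB, 512 * MB, 1536 * MB, 250, 1 * MB, 1 * MB))  # False
```

As the text says, this is only a reference point; raw-device setups and different OS caching behavior shift the safe ceiling.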
After the other memory parameters have been set, the remaining memory should be given to the database buffers. During operation Oracle caches the data it reads from and writes to the database; a cache hit means the information is already in memory, while a cache miss means Oracle must perform disk I/O. The key to keeping the miss rate down is ensuring the cache is large enough. The initialization parameter DB_BLOCK_BUFFERS in Oracle8i controls the size of the database buffer cache. You can query v$sysstat for the hit rate to determine whether DB_BLOCK_BUFFERS should be increased: SELECT name, value FROM v$sysstat WHERE name IN ('db block gets', 'consistent gets', 'physical reads'); From the query results, hit rate = 1 - physical reads / (db block gets + consistent gets). If the hit rate is below 0.6-0.7, DB_BLOCK_BUFFERS should be increased. Dictionary cache: the size of the data dictionary cache is managed internally by the database and is set via the parameter SHARED_POOL_SIZE. The data dictionary cache holds the structure, user and object information of the database, and its hit rate has a big impact on the system. In the hit-rate calculation below, GETMISSES represents the number of failed requests and GETS the number of successful ones.
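Both hit-rate formulas — the buffer cache one just given and the dictionary cache one computed from the v$rowcache query below — can be sketched together. The counter values are hypothetical, not output from a real instance.

```python
# The two cache hit-rate formulas from this section:
#   buffer cache:      1 - physical_reads / (db_block_gets + consistent_gets)
#   dictionary cache:  1 - getmisses / (gets + getmisses)
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    return 1 - physical_reads / (db_block_gets + consistent_gets)

def dictionary_cache_hit_ratio(gets, getmisses):
    return 1 - getmisses / (gets + getmisses)

print(round(buffer_cache_hit_ratio(30_000, 100_000, 400_000), 2))  # 0.94: above 0.6-0.7, adequate
print(round(dictionary_cache_hit_ratio(95_000, 5_000), 2))         # 0.95: above the 90% target
```

A buffer-cache ratio below roughly 0.6-0.7 argues for raising DB_BLOCK_BUFFERS; a dictionary-cache ratio below 90% argues for a larger shared pool.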

Query the v$rowcache table: SELECT (1 - (SUM(getmisses) / (SUM(gets) + SUM(getmisses)))) * 100 FROM v$rowcache; If this value is > 90%, the hit rate is adequate; otherwise the size of the shared pool should be increased. The redo log buffer will be discussed below, so it is not explained here. SQL shared pool: the shared pool holds the execution plans and parsed forms of the SQL statements run against the database, which speeds things up when the same SQL statement is run again. If it is too small, statements will be continually reloaded into the library cache, hurting performance. This parameter can be modified with the ALTER SYSTEM command, and from 9i onwards its size can be changed dynamically. LARGE_POOL_SIZE configures an optional memory area; allocating it can improve the performance of large operations such as database backup and recovery. If this parameter is not set, the system uses the shared pool instead. JAVA_POOL_SIZE, as the name suggests, serves the parsing of Java commands: on UNIX systems, if the granule size is 4 MB the default should be 24 MB, and if the granule size is 16 MB the default is 32 MB; if the database does not use Java, keep it at 10 MB-20 MB. Multiple buffer pools: you can use multiple buffer pools to separate large data sets from the rest of the application, reducing the likelihood that they compete for the same resources in the cache; their sizes need to be set in the initialization parameters when they are created. The Program Global Area (PGA) is Oracle's private memory area. In 9i and later, if WORKAREA_SIZE_POLICY = AUTO, all sessions share one pool of work-area memory controlled by the parameter PGA_AGGREGATE_TARGET. A good initial setting is: for an OLTP system, PGA_AGGREGATE_TARGET = (total_mem * 80%) * 20%; for a DSS system, PGA_AGGREGATE_TARGET = (total_mem * 80%) * 50%, where total_mem is physical memory.
When adjusting the PGA_AGGREGATE_TARGET parameter, the following dynamic views are helpful: v$sysstat and v$sesstat; v$sql_workarea_active; v$pgastat; v$sql_workarea; v$process.
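The two PGA_AGGREGATE_TARGET starting points above reduce to simple arithmetic. A sketch, using an assumed 8 GB of physical memory:

```python
# Initial PGA_AGGREGATE_TARGET values from the rules above (bytes in, bytes out):
#   OLTP: (total_mem * 80%) * 20%      DSS: (total_mem * 80%) * 50%
GB = 1024 ** 3

def pga_target(total_mem, workload):
    usable = total_mem * 0.8              # leave ~20% of RAM for the OS and SGA overhead
    factor = {"oltp": 0.2, "dss": 0.5}[workload]
    return usable * factor

print(round(pga_target(8 * GB, "oltp") / GB, 2))  # 1.28 GB for an OLTP system
print(round(pga_target(8 * GB, "dss") / GB, 2))   # 3.2 GB for a DSS system
```

These are only first guesses; the dynamic views listed above (v$pgastat in particular) show whether the target should then be raised or lowered.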

V. Setting and managing CPUs

When setting up and installing the database you do not have to configure anything for the CPU; the system handles it automatically. In ongoing administration, however, we can use operating-system monitoring tools to watch CPU usage. For example, on UNIX systems you can run sar -u to check CPU utilization for the entire system; the statistics include user time, system time, idle time and I/O wait time. Under a normal workload, if idle time and I/O wait time are close to 0 or together less than 5%, there is a problem with CPU usage. On Windows you can check CPU usage through Performance Monitor, which provides the following information: processor time, user time, privileged time, interrupt time and DPC time. If there is a problem with CPU usage, it can be addressed in the following ways: optimize the system and the database; increase hardware capacity; or partition the CPU resource allocation — the Oracle Database Resource Manager is responsible for allocating and managing CPU resources between users and application programs.
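The saturation rule above (idle plus I/O wait near zero, below roughly 5%) can be expressed as a small check. The percentages fed in are hypothetical sar -u style figures, not real measurements.

```python
# CPU-saturation check from the rule above: under a normal workload, if
# idle time plus I/O wait time is below ~5%, CPU usage is a problem.
def cpu_saturated(user_pct, system_pct, iowait_pct, idle_pct, threshold=5.0):
    """user_pct/system_pct are kept for realism but only headroom matters here."""
    return (idle_pct + iowait_pct) < threshold

print(cpu_saturated(70.0, 28.0, 1.0, 1.0))   # True: only 2% headroom left
print(cpu_saturated(40.0, 10.0, 5.0, 45.0))  # False: half the CPU is idle
```

A script like this could wrap periodic sar output to flag sustained saturation rather than one-off spikes.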

VI. Setting and managing tablespaces

I/O competition between database files is taxing for the database, so the I/O of the data files should be evaluated before the database is planned. Usually the tablespaces holding the application's tables will be active, and tablespaces such as the index tablespace and the data dictionary are active as well; in applications with frequent transactions the rollback tablespace is also active. The I/O competition therefore differs slightly between types of databases, but performance is better when the following principles are observed: the application's tables and indexes should usually be assigned or partitioned into multiple tablespaces to reduce the I/O on any single data file, and it is better to give each function its own tablespace; there is no reason to keep anything other than the data dictionary tables and the SYSTEM rollback segment in the SYSTEM tablespace, so remove from it any objects that can be removed; index segments should not be placed in the same tablespace as their related tables, because they generate a lot of concurrent I/O during data maintenance and queries; the temporary tablespace is used for large sorts, so no other application objects should be placed in it. These are the principles of database file distribution; principles are guidelines, and in practice we still do better by experience — of course, before you have experience, follow the principles so you do not go wrong. A database and its tablespaces are a one-to-many relationship, tablespaces and data files are also one-to-many, and data files and data objects are many-to-many. When a data object (such as a table or index) is created, it is assigned to a tablespace by default or by explicit command, and a segment is created in that tablespace to store the object's data.
A segment consists of sections called extents (sets of contiguous Oracle blocks). When the existing extents of a segment can no longer hold new data, the segment obtains another extent to support further inserts into the object. The space a segment uses is therefore determined by its storage parameters, which can be specified at creation time or changed later. If no storage parameters are specified in the CREATE TABLE, CREATE INDEX, CREATE CLUSTER or CREATE ROLLBACK SEGMENT command, the object inherits the default storage parameters of the tablespace in which it is stored; these parameters include INITIAL, NEXT, PCTINCREASE, MAXEXTENTS, MINEXTENTS and so on. The INITIAL and MINEXTENTS values cannot be modified after creation, and the default storage parameters of each tablespace can be queried in the DBA_TABLESPACES view. Disk I/O is a bottleneck of system performance, and resolving disk I/O contention can significantly improve performance. By querying v$filestat you can find out how heavily each physical file is used (PHYRDS is the number of reads from each data file, PHYWRTS the number of writes): SELECT name, phyrds, phywrts FROM v$datafile df, v$filestat fs WHERE df.file# = fs.file#; For physical files with high usage frequency, the following strategies can be applied: distribute I/O across as many disks as possible; set up separate tablespaces for tables and indexes; separate data files from redo log files onto different disks; and reduce disk I/O unrelated to the Oracle server.
If through carelessness you plan the data files inappropriately so that they generate a lot of I/O activity, re-adjust the distribution of the data files according to the principles above to balance the I/O competition between them. How to move data files differs between databases, but the basic principle is the same. The following shows two methods for moving data files in Oracle8i (9i is slightly different). First method (ALTER DATABASE): shut down the database — move the database file — mount and rename — open:

1> svrmgrl
2> connect internal
3> shutdown
4> exit
5> mv /u/product/oradata/foxmold/user01.dbf /db3/oradata
6> svrmgrl
7> connect internal
8> startup mount foxmold
9> alter database rename file '/u/product/oradata/foxmold/user01.dbf' to '/db3/oradata/user01.dbf';
10> alter database open;

Second method (ALTER TABLESPACE): instead of shutting down the database, take the tablespace offline — move the file — rename it at the tablespace level — bring the tablespace online (the tablespace is assumed here to be USERS):
1> svrmgrl
2> connect internal
3> alter tablespace users offline;
4> exit
5> mv /u/product/oradata/foxmold/user01.dbf /db3/oradata
6> svrmgrl
7> connect internal
8> alter tablespace users rename datafile '/u/product/oradata/foxmold/user01.dbf' to '/db3/oradata/user01.dbf';
9> alter tablespace users online;

FOXMOLD above represents the current database name.

VII. Setting and managing rollback segments

Transactions of any size may need to be processed, so rollback segments of different sizes are needed as well. The size of a rollback segment is set by the STORAGE clause when it is created, and generally follows these principles: OLTP workloads have many concurrent transactions, each of which may modify only a small amount of data, so rollback segments of 10 KB to 20 KB with 2 to 4 extents each can be established; long-running queries need large rollback segments to maintain read consistency, and the best size for those is about 10% of the largest table (most queries affect only about 10% of a table's data). Rollback segments are set up with the CREATE ROLLBACK SEGMENT and ALTER ROLLBACK SEGMENT statements. In general, set INITIAL = NEXT, set an OPTIMAL parameter to save space, do not set MAXEXTENTS to UNLIMITED, and create rollback segments in a dedicated rollback tablespace. The target size of a rollback segment is defined by the storage parameter OPTIMAL, which specifies the size the rollback segment shrinks back to. If a rollback segment is found to be continually shrinking because of OPTIMAL, it is likely that OPTIMAL is set inappropriately. This can be checked through the dynamic view v$rollstat, for example: SELECT SUBSTR(name, 1, 40) name, extents, rssize, aveactive, aveshrink, extends, shrinks FROM v$rollname rn, v$rollstat rs WHERE rn.usn = rs.usn; The result looks like this:

NAME     EXTENTS  RSSIZE   AVEACTIVE  AVESHRINK  EXTENDS  SHRINKS
SYSTEM   4        202876   0          0          0        0
CSIRSL   2        202876   55192      0          0        0

If the average active size is close to OPTIMAL, OPTIMAL is correct; if EXTENDS and SHRINKS are high, the OPTIMAL value must be increased. If long-running queries exist, take them into account when choosing the OPTIMAL value.
Using rollback segments properly improves system performance and reduces contention. How many rollback segments to create should be determined by the transaction volume in the database: too many transactions will compete for a single rollback segment. Checking the dynamic performance table v$waitstat shows whether there is contention on the rollback segments: SELECT class, count FROM v$waitstat WHERE class IN ('undo header', 'undo block', 'system undo header', 'system undo block'); The result looks like this:

CLASS                 COUNT
system undo header    0
system undo block     0
undo header           0
undo block            0

These values are then compared with the total number of data requests. The total number of data requests equals the sum of db block gets and consistent gets in v$sysstat: SELECT SUM(value) "data requests" FROM v$sysstat WHERE name IN ('db block gets', 'consistent gets'); The result looks like this:

data requests
-------------
5105

If any class count / SUM(value) > 10%, consider adding rollback segments.
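The 10% contention test just described can be sketched as a small function. The wait counts and request total below are hypothetical stand-ins for v$waitstat and v$sysstat output.

```python
# Undo-contention check: any undo wait class whose count exceeds 10% of the
# total data requests suggests adding rollback segments.
def needs_more_rollback_segments(undo_wait_counts, total_data_requests):
    return any(count / total_data_requests > 0.10
               for count in undo_wait_counts.values())

waits = {"undo header": 12, "undo block": 3,
         "system undo header": 0, "system undo block": 0}
print(needs_more_rollback_segments(waits, 5105))  # False: the worst class is ~0.24%
```

With, say, 600 'undo header' waits against the same 5105 requests (about 11.8%), the check would fire and more rollback segments would be called for.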

The number of rollback segments is generally set according to the number of concurrent transactions n: for n < 16, use 4 rollback segments; for 16 <= n < 32, use 8; for n >= 32, use n/4, but no more than 50.
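The sizing guideline above can be sketched as a lookup. The cutoffs follow the classic Oracle rule of thumb (transactions/4, capped at 50); treat the exact boundaries as assumptions if your Oracle release documents different ones.

```python
# Concurrent transactions -> suggested number of rollback segments,
# per the rule of thumb quoted above.
def rollback_segment_count(n_transactions):
    if n_transactions < 16:
        return 4
    if n_transactions < 32:
        return 8
    return min(n_transactions // 4, 50)   # n/4, never more than 50

for n in (10, 20, 100, 400):
    print(n, "->", rollback_segment_count(n))  # 4, 8, 25, 50
```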

VIII. Setting and managing online redo logs

The size of the redo log files can also affect performance, because database writing and archiving depend on it. Generally, larger redo log files provide somewhat better performance, while smaller ones increase checkpoint activity and frequency. It is impossible to give a single specific recommendation for redo log file size; anywhere from a few hundred megabytes to a few gigabytes is considered reasonable. Determine the size according to the amount of redo the system generates online; under normal circumstances, log switches should happen roughly every 20 minutes. Contention on the redo log buffer also has a large impact on database performance. To reduce it, we can determine whether the redo log buffer is sufficient by querying the v$sysstat table: SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests'; The value here should be close to 0; otherwise the LOG_BUFFER value in the initialization parameter file should be increased. Log files cannot be resized in place, but new, larger files can be added and the original files deleted. A concrete procedure:
1. Suppose there are three log groups with one member each, each member 1 MB, and we want to change the member size of these three groups to 20 MB.
2. Create new transition log groups: ALTER DATABASE ADD LOGFILE GROUP 4 ('d:/oradb/redo04.log') SIZE 2048K; ALTER DATABASE ADD LOGFILE GROUP 5 ('d:/oradb/redo05.log') SIZE 2048K;
3. Switch the current log over to the new groups: ALTER SYSTEM SWITCH LOGFILE; ALTER SYSTEM SWITCH LOGFILE;
4. Delete the old log groups: ALTER DATABASE DROP LOGFILE GROUP 1; ALTER DATABASE DROP LOGFILE GROUP 2; ALTER DATABASE DROP LOGFILE GROUP 3;
5. At the operating-system level, delete the files of the original log groups 1, 2 and 3.
6. Rebuild log groups 1, 2 and 3: ALTER DATABASE ADD LOGFILE GROUP 1 ('d:/oradb/redo01_1.log') SIZE 20M; ALTER DATABASE ADD LOGFILE GROUP 2 ('d:/oradb/redo02_1.log') SIZE 20M; ALTER DATABASE ADD LOGFILE GROUP 3 ('d:/oradb/redo03_1.log') SIZE 20M;
7. Switch log groups: ALTER SYSTEM SWITCH LOGFILE; ALTER SYSTEM SWITCH LOGFILE; ALTER SYSTEM SWITCH LOGFILE;
8. Delete the transition log groups 4 and 5: ALTER DATABASE DROP LOGFILE GROUP 4; ALTER DATABASE DROP LOGFILE GROUP 5;
9. At the operating-system level, delete the files of the transition log groups 4 and 5.

10. Back up the current, latest control file: SQL> CONNECT INTERNAL SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;
Online redo log files can also be moved. The method is: first shut down the database, move the online redo log files, then mount the database and use the ALTER DATABASE command to tell the database the new location of the log files; the instance can then be opened with the log files in their new location.

IX. Setting and managing archived redo logs

When Oracle runs in ARCHIVELOG mode, each online redo log file is copied after it is filled, usually to disk or to another device; the ARCH background process performs the archiving function, though manual intervention is needed if automatic archiving is not enabled. If there are many frequent transactions, contention arises on the log file disks; to avoid it, distribute the online redo log files across multiple disks. To improve archiving performance you can create online redo log groups with multiple members, but the I/O of each device must be taken into account. Archived redo log files should not be stored on the same device as the SYSTEM, RBS, DATA, TEMP or INDEXES tablespaces and the like, nor on the same device as any online redo log file, to avoid disk contention. Archived redo log files that are no longer needed should be deleted or moved elsewhere; otherwise they occupy a great deal of space, affecting use of the disk and reducing system performance.

X. Setting and managing control files

The locations of the control files are specified in the instance's initialization parameter file. To move a control file you must first shut down the database instance, move the control file, edit the initialization parameter file, and then restart the instance. The following example shows how to move a control file (OS: Linux; database: Oracle8i):
1. Query the location of the current control files: SELECT * FROM v$controlfile;
2. Say we want to move /u/oradata/foxmold/control01.ctl to the /db4/oradata/ directory.
3. svrmgrl
4. connect internal
5. shutdown
6. exit
7. cp /u/oradata/foxmold/control01.ctl /db4/oradata/control01.ctl
8. chmod 660 /db4/oradata/control01.ctl
9. Edit initSID.ora and update CONTROL_FILES = ...
10. startup mount foxmold
FOXMOLD above is the current database name.

XI. Summary

The above is a simple discussion of the physical design of an Oracle database. Other databases may differ slightly, but the overall thinking is the same. There are inevitably places where the writing falls short; criticism and corrections are welcome!

Please credit the original source when reprinting: https://www.9cbs.com/read-105197.html
