For a network operating system, RAID (Redundant Array of Inexpensive Disks) is close to an essential feature. Starting with the 2.4 kernel, Linux provides software RAID, so there is no need to buy an expensive hardware RAID controller and enclosure (generally only high-end servers come with such equipment and hot-swappable drives); this greatly improves disk I/O performance and reliability under Linux. Software RAID can also combine several smaller disk spaces into one larger logical disk. Note that software RAID here does not mean running RAID on a single physical hard disk: to actually improve performance it is best to use several drives, and drives with a SCSI interface give better results.

How RAID works and the main RAID levels. RAID groups ordinary hard disks into a disk array. When the host writes, the RAID controller decomposes the data to be written into blocks and writes them to the member disks in parallel; when the host reads, the RAID controller gathers the data scattered across the disks of the array and hands it back to the host. Because reads and writes proceed in parallel, the throughput of the storage system is improved. Just as important, a RAID array can use mirroring, parity, and similar techniques to improve the fault tolerance of the system and protect the reliability of the data. RAID can be configured as needed while installing the Linux operating system, or configured manually later according to the needs of the application. The prerequisite for manual configuration is that the raidtools package is already installed.
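The block decomposition described above can be illustrated with a little shell arithmetic. This is only a sketch of the striping idea (the four-disk layout and chunk numbers are invented for illustration), not how the kernel driver is actually implemented:

```shell
#!/bin/sh
# Illustration only: map logical chunk numbers onto the member disks
# of a 4-disk stripe set (RAID 0 style round-robin placement).
# The disk count is a made-up example value.
ndisks=4
for chunk in 0 1 2 3 4 5 6 7; do
    disk=$((chunk % ndisks))        # which member disk gets this chunk
    offset=$((chunk / ndisks))      # chunk index within that disk
    echo "logical chunk $chunk -> disk $disk, chunk $offset on that disk"
done
```

Consecutive chunks land on different disks, which is why several drives can service one large request at the same time.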
The latest version of the package, raidtools-1.00.3.tar.gz, can be downloaded from http://peopletools. As root, decompress the package and enter the following commands:

    # tar xzf raidtools-1.00.3.tar.gz
    # cd raidtools-1.00.3
    # ./configure
    # make
    # make install

With raidtools-1.00.3 installed, RAID can be set up at any time. Linux mainly provides the RAID 0, RAID 1, and RAID 5 modes.

RAID 0, also known as stripe or striping, splits the data to be accessed into stripes distributed as evenly as possible across several hard drives, so that multiple disks read and write at the same time, which increases the read and write speed. Another purpose of RAID 0 is to obtain a larger "single" disk capacity.

RAID 1, also known as mirror or mirroring, exists entirely for data safety: data written to one hard disk is automatically copied to another (the mirror). When reading, the system first reads from the source disk of the RAID 1 pair; if the read succeeds, the backup disk is not touched, and if the source disk fails, the system automatically switches to reading the data on the backup disk, so the user's work is not interrupted. Because the stored data is backed up one hundred percent, RAID 1 provides the highest data safety of all RAID levels. For the same reason, the backup copy occupies half of the total storage space, so the disk space utilization of mirroring is low and its storage cost high.

RAID 5 is the storage solution that balances performance, data safety, and storage cost, and it is the most widely used RAID technique. Each stripe is split across the member disks, a parity block (an XOR calculation) is computed for each stripe, and the parity blocks are distributed across all the disks.
A RAID 5 array built from n hard drives has the capacity of n-1 drives, so its storage space utilization is very high.
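The n-1 capacity rule is easy to check numerically. A small sketch, using the partition size that appears in the fdisk listing later in this article (treat the numbers as example values):

```shell
#!/bin/sh
# RAID 5 usable capacity: with n member disks of equal size, one
# disk's worth of space holds parity, so n-1 disks' worth holds data.
n=3                # number of active disks (example value)
size_kb=105808     # size of each partition in KB (from the fdisk listing)
usable=$(( (n - 1) * size_kb ))
overhead=$(( 100 / n ))   # rough percentage lost to parity
echo "usable: ${usable} KB (~$((usable / 1024)) MB), parity overhead ~${overhead}%"
```

With more member disks the parity overhead shrinks (1/n of the total), which is why RAID 5 is cheaper per usable megabyte than RAID 1's fixed 50%.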
RAID 5 does not keep a full backup of the stored data; instead it stores the data together with the corresponding parity information across the disks that make up the array, with each parity block kept on a different disk from the data it protects. When the data on any one disk of a RAID 5 array is lost, it can be recomputed from the remaining data and the parity. RAID 5 thus combines data safety, fast reads and writes, and high space utilization, and is very widely used. Its weakness is that if one disk fails, the performance of the whole system drops sharply. RAID 5 protects the data less strongly than mirroring, but its disk space utilization is higher than mirroring. Its read speed is similar to that of RAID 0; because parity information must be computed, writes are slightly slower than to a single disk. At the same time, since several data blocks share one parity block, the disk space utilization of RAID 5 is higher than that of RAID 1 and its storage cost correspondingly lower.

Creating RAID under Linux. In actual use, RAID is generally built from several separate disks; of course a single disk can also be used, and the specific steps are similar. The example below builds RAID on a single disk.
1. Log in as root.
2. Use the fdisk tool to create the RAID partitions.
(1) fdisk /dev/hda. Here it is assumed that the hard disk on the IDE1 primary channel still has free space.
(2) Use the n command to create several new partitions of the same size. For RAID 0 or RAID 1 at least 2 partitions are needed; for RAID 5, at least 3. The keystroke sequence is: n - starting cylinder (Enter accepts the default) - partition size; repeat the process until the desired number of RAID partitions has been created.
The results are as follows:

    Disk /dev/hda: 240 heads, 63 sectors, 3876 cylinders
    Units = cylinders of 15120 * 512 bytes

       Device Boot    Start       End    Blocks   Id  System
    /dev/hda1   *         1      1221   9230728    c  Win95 FAT32 (LBA)
    /dev/hda2          1222      1229     60480   83  Linux
    /dev/hda3          1230      1906   5118120   83  Linux
    /dev/hda4          1907      3876  14893200    f  Win95 Ext'd (LBA)
    /dev/hda5          1907      1960    408208   82  Linux swap
    /dev/hda6          1961      2231   2048728    b  Win95 FAT32
    /dev/hda7          2709      3386   5125648    b  Win95 FAT32
    /dev/hda8          3387      3876   3704368    7  HPFS/NTFS
    /dev/hda9          2232      2245    105808   83  Linux
    /dev/hda10         2246      2259    105808   83  Linux
    /dev/hda11         2260      2273    105808   83  Linux
    /dev/hda12         2274      2287    105808   83  Linux

Four new Linux partitions, /dev/hda9 through /dev/hda12, have been created with the n command; the p command displays the partition table.
(3) Use the t command to change the partition type to the software RAID type. The keystroke sequence is: t - partition number - fd (partition type); repeat this for each RAID partition.
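The interactive n and t keystrokes are repetitive, so they can be collected into an answer file and fed to fdisk on standard input. This is only a hedged sketch: the partition numbers follow this article's example, and actually piping the file into fdisk requires root and will modify the disk, so verify every value first.

```shell
#!/bin/sh
# Sketch: build an fdisk answer file that changes partitions 9-12
# to type fd (Linux raid autodetect) and then writes the table.
# To apply it you would run something like: fdisk /dev/hda < fdisk.in
# (root required; double-check the device and partition numbers).
{
  for part in 9 10 11 12; do
      printf 't\n%s\nfd\n' "$part"   # t = change type, partition no., fd
  done
  printf 'w\n'                       # w = write partition table and exit
} > fdisk.in
cat fdisk.in
```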
After the change, the partitions look like this:

    /dev/hda9          2232      2245    105808   fd  Linux raid autodetect
    /dev/hda10         2246      2259    105808   fd  Linux raid autodetect
    /dev/hda11         2260      2273    105808   fd  Linux raid autodetect
    /dev/hda12         2274      2287    105808   fd  Linux raid autodetect

(4) Use the w command to save the partition table.
3. Reboot so that the new partition table takes effect.
4. Use man raidtab to view the structure of the configuration file.
5. Use an editor to write the configuration into /etc/raidtab as follows:

    raiddev /dev/md0
    raid-level 5
    nr-raid-disks 3
    nr-spare-disks 1
    persistent-superblock 1
    parity-algorithm left-symmetric
    chunk-size 8
    device /dev/hda9
    raid-disk 0
    device /dev/hda10
    raid-disk 1
    device /dev/hda11
    raid-disk 2
    device /dev/hda12
    spare-disk 0

This creates a RAID 5 array with 3 RAID disks and 1 spare disk. Note that "chunk-size 8" must not be omitted; it specifies that RAID 5 uses a block size of 8 KB. The RAID 5 volume is written to the partitions in 8 KB chunks, i.e. the first 8 KB of the RAID volume is on hda9, the second 8 KB on hda10, and so on. The device name can be md0, md1, etc. The "spare-disk" mainly stands by so that it can take over as soon as a disk fails; it may be omitted.
6. Create the RAID array with mkraid /dev/md0. Here md indicates the type of RAID device being created.
The results are as follows:

    [root@localhost root]# mkraid /dev/md0
    handling MD device /dev/md0
    analyzing super-block
    disk 0: /dev/hda9, 105808kB, raid superblock at 105728kB
    disk 1: /dev/hda10, 105808kB, raid superblock at 105728kB
    disk 2: /dev/hda11, 105808kB, raid superblock at 105728kB
    disk 3: /dev/hda12, 105808kB, raid superblock at 105728kB
    md0: WARNING: hda10 appears to be on the same physical disk as hda9.
    True protection against single-disk failure might be compromised.
    md0: WARNING: hda11 appears to be on the same physical disk as hda10.
    True protection against single-disk failure might be compromised.
    md0: WARNING: hda12 appears to be on the same physical disk as hda11.
    True protection against single-disk failure might be compromised.
    md: md0: raid array is not clean -- starting background reconstruction
    8regs: 2206.800 MB/sec
    32regs: 1025.200 MB/sec
    pII_mmx: 2658.400 MB/sec
    p5_mmx: 2818.400 MB/sec
    raid5: using function: p5_mmx (2818.400 MB/sec)
    raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 2

7. Use lsraid -a /dev/md0 to check the state of the RAID partitions. The results are as follows:

    [root@localhost root]# lsraid -a /dev/md0
    [dev   9,   0] /dev/md0    86391738.19BEDD09.8F02C37B.51584DBA online
    [dev   3,   9] /dev/hda9   86391738.19BEDD09.8F02C37B.51584DBA good
    [dev   3,  10] /dev/hda10  86391738.19BEDD09.8F02C37B.51584DBA good
    [dev   3,  11] /dev/hda11  86391738.19BEDD09.8F02C37B.51584DBA good
    [dev   3,  12] /dev/hda12  86391738.19BEDD09.8F02C37B.51584DBA spare

8. Use mkfs.ext3 /dev/md0 to format the RAID volume with the ext3 filesystem. The results are as follows:

    [root@localhost root]# mkfs.ext3 /dev/md0
    mke2fs 1.27 (8-Mar-2002)
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    53040 inodes, 211456 blocks
    10572 blocks (5.00%) reserved for the super user
    First data block=1
    26 block groups
    8192 blocks per group, 8192 fragments per group
    2040 inodes per group
    Superblock backups stored on blocks:
            8193, 24577, 40961, 57345, 73729, 204801
    raid5: switching cache buffer size, 4096 -> 1024
    Writing inode tables: done
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done
    This filesystem will be automatically checked every 22 mounts or
    180 days, whichever comes first. Use tune2fs -c or -i to override.

9. mount /dev/md0 /mnt/md0. Note that an md0 subdirectory must first be created under the /mnt directory. At this point all of the creation work is finished, and /mnt/md0 is a directory backed by the RAID array.

Checking the effect of the RAID. The following steps can be used to verify that the RAID works.
1. dd if=/dev/zero of=/dev/hda9 bs=100000000 count=10 zeroes out hda9, the first member partition of the RAID; bs is the number of bytes written per operation, and count is the number of operations. The amount written here must be greater than the capacity of the disk partition, otherwise the data would simply be restored to its original value by the RAID layer.
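The "automatic restoration" observed in this dd test relies on the XOR parity described earlier. The reconstruction can be imitated in the shell on a couple of example byte values (the numbers are made up, purely for illustration):

```shell
#!/bin/sh
# Illustration of RAID 5 parity on one stripe with two data blocks:
# parity = d0 XOR d1, and a lost block is recovered by XORing the
# surviving block with the parity.
d0=170            # example data byte on disk 0 (0xAA)
d1=85             # example data byte on disk 1 (0x55)
parity=$((d0 ^ d1))
echo "parity byte: $parity"
# Pretend disk 0 failed: rebuild its byte from disk 1 and the parity.
rebuilt=$((d1 ^ parity))
echo "rebuilt d0: $rebuilt (original was $d0)"
```

Because XOR is its own inverse, any single missing block of a stripe can be recomputed from the remaining blocks, which is exactly what the background reconstruction seen in the mkraid output does across the whole array.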