Implementation of Software RAID under Linux


As a network operating system, Linux needs RAID (Redundant Array of Inexpensive Disks) as one of its essential features. Starting with the 2.4 kernel, Linux provides software RAID, so there is no need to buy an expensive hardware RAID controller and enclosure (generally, only high-end servers provide such equipment and hot-swappable drives). Software RAID greatly improves Linux disk I/O performance and reliability, and it can also combine several smaller disks into one larger disk space.

Software RAID here does not mean RAID on a single physical hard disk. To get real benefit from RAID it is best to use multiple hard drives, and disks with a SCSI interface give better results.

How RAID works, and its main uses: RAID groups ordinary hard disks into a disk array. When the host writes data, the RAID controller splits the data into blocks and writes them to the disks of the array in parallel; when the host reads data, the RAID controller gathers the blocks scattered across the disks of the array and delivers them to the host. Because reads and writes run in parallel, the throughput of the storage system is improved. More importantly, a RAID array can use mirroring, parity, and similar techniques to improve the fault tolerance of the system and guarantee the reliability of the data.

RAID can be configured as needed while installing the Linux operating system, or set up manually later, while the system is in use, according to the needs of the applications. The prerequisite is that the raidtools package is already installed. You can download the latest version, raidtools-1.00.3.tar.gz, from http://peopletools, then unpack it as root and run the following commands:

# cd raidtools-1.00.3

# ./configure

# make

# make install

With raidtools-1.00.3 installed, RAID can be set up at any time. Linux mainly provides three RAID levels: RAID 0, RAID 1, and RAID 5.

RAID 0, also known as stripe or striping, splits the data to be accessed as evenly as possible into stripes across multiple hard disks, so that several disks read and write at the same time, increasing data throughput. Another purpose of RAID 0 is to obtain a larger "single" disk capacity.

RAID 1, also known as mirror or mirroring, exists entirely for data security: data written to one hard disk is automatically copied to another hard disk (the mirror). When reading, the system first reads from the source disk of the RAID 1 pair; if the read succeeds, the backup disk is not used, and if the source disk fails, the system automatically switches to reading from the backup disk, so the user's work is not interrupted. Because the stored data is backed up one hundred percent, RAID 1 provides the highest data security of all RAID levels. For the same reason, the backup copy occupies half of the total storage space, so the disk space utilization of mirroring is low and the storage cost is high.

RAID 5 is a storage solution that balances performance, data security, and storage cost, and it is the most widely used RAID technology. Each stripe is split across the separate hard disks, parity (an XOR calculation) is computed for each stripe, and the parity data is distributed over all the disks. A RAID 5 array built from N hard disks has the capacity of N-1 disks, so the storage space utilization is very high.
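The capacity arithmetic for the three levels can be sketched with a short shell function. This is a toy illustration, not part of raidtools; raid_capacity is a hypothetical helper name, and 105808 KB is the partition size used in the example later in this article:

```shell
# Toy sketch: usable capacity per RAID level, assuming n equal
# member disks of size_kb each. raid_capacity is a hypothetical
# helper, not a raidtools command.
raid_capacity() {
    level=$1; n=$2; size_kb=$3
    case $level in
        0) echo $((n * size_kb)) ;;         # striping: all space usable
        1) echo "$size_kb" ;;               # mirroring: one disk's worth
        5) echo $(((n - 1) * size_kb)) ;;   # one disk's worth holds parity
    esac
}

raid_capacity 0 3 105808    # 317424
raid_capacity 1 2 105808    # 105808
raid_capacity 5 3 105808    # 211616
```

The RAID 5 figure is slightly more than the filesystem will actually see, because each member partition also stores a RAID superblock.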
RAID 5 does not keep a full backup of the stored data; instead it stores the data and the corresponding parity information across the disks that make up the array, with the parity information and its corresponding data always on different disks. When data is lost on any one hard disk of a RAID 5 array, it can be recomputed from the remaining data and the parity. RAID 5 offers good data security, fast read and write speeds, and high space utilization, which is why it is used so widely. Its weakness is that if one hard disk fails, the performance of the whole system drops sharply. RAID 5 protects data less strongly than mirroring, but its disk space utilization is higher than mirror. Its read speed is similar to RAID 0; writing is slightly slower than a single disk because the parity information must be written as well. Since multiple data blocks share one parity block, the disk space utilization of RAID 5 is higher than RAID 1, and the storage cost is correspondingly lower.

Creating RAID under Linux

In actual use, RAID is generally built from several separate disks; of course it can also be built on a single disk, and the steps are similar. Here I use a single disk as an example.

1. Log in as root.

2. Use the fdisk tool to create partitions of the RAID type.

(1) Run fdisk /dev/hda, assuming the hard disk on the IDE1 primary channel still has free space.

(2) Use the command n to create several new partitions of the same size. For RAID 0 or RAID 1 the number of partitions must be at least 2; for RAID 5 it must be at least 3. The sequence is: n - start cylinder (press Enter for the default) - partition size; repeat until you have created the number of RAID partitions you want. The result is as follows:

Disk /dev/hda: 240 heads, 63 sectors, 3876 cylinders

Units = cylinders of 15120 * 512 bytes

Device Boot      Start    End     Blocks    Id  System
/dev/hda1   *        1   1221    9230728     c  Win95 FAT32 (LBA)
/dev/hda2         1222   1229      60480    83  Linux
/dev/hda3         1230   1906    5118120    83  Linux
/dev/hda4         1907   3876   14893200     f  Win95 Ext'd (LBA)
/dev/hda5         1907   1960     408208    82  Linux swap
/dev/hda6         1961   2231    2048728     b  Win95 FAT32
/dev/hda7         2709   3386    5125648     b  Win95 FAT32
/dev/hda8         3387   3876    3704368     7  HPFS/NTFS
/dev/hda9         2232   2245     105808    83  Linux
/dev/hda10        2246   2259     105808    83  Linux
/dev/hda11        2260   2273     105808    83  Linux
/dev/hda12        2274   2287     105808    83  Linux

After creating 4 Linux partitions with the n command, use the command p to display the partitions. Here /dev/hda9, /dev/hda10, /dev/hda11, and /dev/hda12 are the 4 newly created Linux partitions. (3) Use the command t to change the partition type to the software RAID type: t - partition number - fd (the partition type code); repeat for each partition. After the partition types are modified, the listing looks like this:

/dev/hda9         2232   2245     105808    fd  Linux raid autodetect
/dev/hda10        2246   2259     105808    fd  Linux raid autodetect
/dev/hda11        2260   2273     105808    fd  Linux raid autodetect
/dev/hda12        2274   2287     105808    fd  Linux raid autodetect

(4) Use the command w to save the partition table. 3. Reboot so that the new partition table takes effect. 4. Use man raidtab to view the structure of the configuration file. 5. Use an editor to write the configuration file /etc/raidtab with the following content:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          1
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              8
        device                  /dev/hda9
        raid-disk               0
        device                  /dev/hda10
        raid-disk               1
        device                  /dev/hda11
        raid-disk               2
        device                  /dev/hda12
        spare-disk              0

This creates a RAID 5 array with 3 RAID disks and 1 spare disk. Note that "chunk-size 8" must not be omitted; it specifies that RAID 5 uses a block size of 8 KB. The RAID 5 volume is written to its partitions in 8 KB chunks, that is, the first 8 KB of the RAID volume is on hda9, the second 8 KB is on hda10, and so on. The device name can be md0, md1, and so on. The spare-disk is held in reserve: as soon as a disk in the array is damaged, the spare immediately takes its place; it can also be left out. 6. Create the RAID array with mkraid /dev/md0, where md indicates the type of RAID device being created. The result is as follows:

[root@localhost root]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hda9, 105808kB, raid superblock at 105728kB
disk 1: /dev/hda10, 105808kB, raid superblock at 105728kB
disk 2: /dev/hda11, 105808kB, raid superblock at 105728kB
disk 3: /dev/hda12, 105808kB, raid superblock at 105728kB
md0: WARNING: hda10 appears to be on the same physical disk as hda9. True
     protection against single-disk failure might be compromised.
md0: WARNING: hda11 appears to be on the same physical disk as hda10. True
     protection against single-disk failure might be compromised.
md0: WARNING: hda12 appears to be on the same physical disk as hda11. True
     protection against single-disk failure might be compromised.
md: md0: raid array is not clean -- starting background reconstruction
   8regs     :  2206.800 MB/sec
   32regs    :  1025.200 MB/sec
   pII_mmx   :  2658.400 MB/sec
   p5_mmx    :  2818.400 MB/sec
raid5: using function: p5_mmx (2818.400 MB/sec)
raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 2
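The 8 KB striping described in the raidtab section (first chunk on hda9, second on hda10, and so on) can be illustrated with a small shell sketch. This is a deliberate simplification that ignores the rotating parity chunks of RAID 5, and chunk_member is a hypothetical helper, not a real tool:

```shell
# Map a byte offset (in KB) within the array onto the member
# partition that holds it, for 3 data members and 8 KB chunks.
# Simplified: real RAID 5 also rotates a parity chunk through
# the members, which this sketch ignores.
CHUNK_KB=8
MEMBERS="hda9 hda10 hda11"

chunk_member() {
    offset_kb=$1
    chunk=$((offset_kb / CHUNK_KB))    # which chunk the offset falls in
    idx=$((chunk % 3))                 # round-robin across the 3 members
    set -- $MEMBERS
    shift "$idx"
    echo "$1"
}

chunk_member 0     # hda9:  first 8 KB of the volume
chunk_member 8     # hda10: second 8 KB
chunk_member 16    # hda11: third 8 KB
chunk_member 24    # hda9:  the striping wraps around
```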

7. Use lsraid -a /dev/md0 to view the status of the RAID. The result is as follows:

[root@localhost root]# lsraid -a /dev/md0
[dev   9,   0] /dev/md0    86391738.19bedd09.8f02c37b.51584dba online
[dev   3,   9] /dev/hda9   86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  10] /dev/hda10  86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  11] /dev/hda11  86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  12] /dev/hda12  86391738.19bedd09.8f02c37b.51584dba spare

8. Use mkfs.ext3 /dev/md0 to format the RAID partition as an ext3 filesystem, as follows:

[root@localhost root]# mkfs.ext3 /dev/md0
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
53040 inodes, 211456 blocks
10572 blocks (5.00%) reserved for the super user
First data block=1
26 block groups
8192 blocks per group, 8192 fragments per group
2040 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801
raid5: switching cache buffer size, 4096 --> 1024
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
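The figures in the mkfs output are internally consistent and can be cross-checked with shell arithmetic; mke2fs reserves 5% of the blocks for the super user by default:

```shell
# Cross-check the mke2fs output: 211456 blocks of 1024 bytes,
# with the default 5% reserved for the super user.
blocks=211456
reserved=$((blocks * 5 / 100))
echo "$reserved"    # 10572, matching the mkfs.ext3 output above
```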

9. Run mount /dev/md0 /mnt/md0; the md0 subdirectory must be created under the /mnt directory first. At this point all the creation work is complete, and the /mnt/md0 directory is a directory backed by the RAID.

Checking the effect of RAID

We can verify the effect of the RAID with the following steps. 1. Run dd if=/dev/zero of=/dev/hda9 bs=100000000 count=10 to overwrite the first partition of the RAID, hda9, with zeros; bs specifies how many bytes are written at a time, and count how many times. The amount written here must exceed the capacity of the partition; otherwise, because of the RAID, the data would simply be restored to its original values. As follows:

[root@localhost root]# dd if=/dev/zero of=/dev/hda9 bs=100000000 count=10
dd: writing `/dev/hda9': No space left on device
2+0 records in
1+0 records out
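The bs and count semantics of dd can be tried harmlessly by writing to a pipe instead of a disk; nothing here is specific to RAID, and wc simply counts the bytes dd produced:

```shell
# Harmless illustration of dd's bs/count semantics:
# bs bytes are written per block, count blocks in total,
# so the output is bs * count bytes.
dd if=/dev/zero bs=1024 count=10 2>/dev/null | wc -c    # prints 10240
```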

2. The data on /dev/hda9 is now all zeros; checking the array status shows the following:

[root@localhost root]# lsraid -a /dev/md0
lsraid: Device "/dev/hda9" does not have a valid raid superblock
lsraid: Device "/dev/hda9" does not have a valid raid superblock
lsraid: Device "/dev/hda9" does not have a valid raid superblock
lsraid: Device "/dev/hda9" does not have a valid raid superblock
[dev   9,   0] /dev/md0    86391738.19bedd09.8f02c37b.51584dba online
[dev   ?,   ?] (unknown)   00000000.00000000.00000000.00000000 missing
[dev   3,  10] /dev/hda10  86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  11] /dev/hda11  86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  12] /dev/hda12  86391738.19bedd09.8f02c37b.51584dba spare

3. Run raidstop /dev/md0. 4. Run raidstart /dev/md0. The data on /dev/hda9 is normal again, which shows that the data-checking function of RAID has done its job. While using Linux, you can create RAID arrays in this way to improve data reliability and I/O performance, and even to combine several hard disks into one large storage space.
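The parity reconstruction that makes this recovery possible can be shown with a toy XOR example; single byte values stand in for whole disk blocks, and the variable names are illustrative only:

```shell
# Toy RAID 5 parity demonstration: the parity block is the XOR
# of the data blocks, so any one lost block can be recomputed
# from the surviving blocks.
d0=170                      # data block on disk 0 (0xAA)
d1=85                       # data block on disk 1 (0x55)
parity=$((d0 ^ d1))         # parity block stored on disk 2

# Disk 0 fails: recompute its block from the survivors.
recovered=$((parity ^ d1))
echo "$recovered"           # prints 170, the lost d0
```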

Please credit the original source when reposting: https://www.9cbs.com/read-81286.html
