Hamsa K
Editor

Configuring RAID 1 on CentOS 7


RAID stands for Redundant Array of Inexpensive Disks. RAID allows you to combine multiple physical hard drives into a single logical drive. There are many RAID levels, such as RAID 0, RAID 1, RAID 5, and RAID 10.

There are two ways to implement RAID technology in your environment:

  • Software RAID works with most modern Linux distributions. It has lower performance, since it uses the CPU and memory of its host, but it has the advantage of being cheap because no dedicated hardware is needed.
  • Hardware RAID uses dedicated hardware known as a RAID controller, which has its own memory modules and therefore does not consume host resources. It performs well but is very costly.

Here we will discuss RAID 1, which is also known as disk mirroring. RAID 1 keeps identical copies of your data: with two hard drives in RAID 1, every write goes to both drives, so the two drives always hold the same data.

Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data, so when one disk fails, the same data can be read from the other, still-functioning disk. With hot-swap capable hardware, the failed drive can even be replaced while the system is running, without interrupting users.

Step 1: Install mdadm

[root@lampblogs ~]# yum install mdadm

After the mdadm package is installed, examine the disks with the following command:

[root@lampblogs ~]# mdadm -E /dev/sd[b-c]
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.

No md superblock is detected on either disk, so no RAID is defined on them yet.

Step 2: Partition the drives

We need a minimum of two disks to create RAID 1. Let's create a partition on each of the two drives using fdisk:

[root@lampblogs ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x15fb4e16.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):
Using default value 16777215
Partition 1 of type Linux and of size 8 GiB is set

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x15fb4e16

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16777215     8387584   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x15fb4e16

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16777215     8387584   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Follow the same steps to create an identical partition on /dev/sdc.

After creating both partitions, verify the changes with mdadm:

[root@lampblogs ~]# mdadm -E /dev/sd[b-c]
/dev/sdb:
   MBR Magic : aa55
Partition[0] :     16775168 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :     16775168 sectors at         2048 (type fd)
[root@lampblogs ~]# mdadm -E /dev/sd[b-c]1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.

Still no md superblocks are detected on the new partitions. Now we can create the RAID 1 array.

Step 3: Create the RAID 1 device

We will create a RAID array named /dev/md0 using the command below:

[root@lampblogs ~]# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@lampblogs ~]# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdc1[1] sdb1[0]
      8382464 blocks super 1.2 [2/2] [UU]

unused devices: <none>
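The `[UU]` marker in /proc/mdstat means both mirror members are up; an underscore in that field (e.g. `[U_]`) would mean a member is failed or missing. This check is easy to script. The snippet below uses a hard-coded sample line so it can run anywhere; on a live system you would read /proc/mdstat directly, and while the initial mirror sync is still running, `watch -n1 cat /proc/mdstat` shows its progress:

```shell
# Sample status line; on a live system use: grep -A1 '^md0' /proc/mdstat
mdstat_line='8382464 blocks super 1.2 [2/2] [UU]'

# [UU] present -> both members active; anything else -> degraded.
if echo "$mdstat_line" | grep -q '\[UU\]'; then
    echo "md0: healthy"
else
    echo "md0: degraded"
fi
# prints "md0: healthy"
```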

Now check the RAID superblock on each partition and inspect the array details with the following commands:

[root@lampblogs ~]# mdadm -E /dev/sd[b-c]1
[root@lampblogs ~]# mdadm --detail /dev/md0

Sample output:

[root@lampblogs ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep  9 15:36:40 2019
        Raid Level : raid1
        Array Size : 8382464 (7.99 GiB 8.58 GB)
     Used Dev Size : 8382464 (7.99 GiB 8.58 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Mon Sep  9 15:37:23 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : lampblogs.com:0  (local to host lampblogs.com)
              UUID : 5bc7cec3:5efe20a9:fd5b9a72:adeb2bd2
            Events : 17

Number   Major   Minor   RaidDevice State
   0       8       17        0      active sync   /dev/sdb1
   1       8       33        1      active sync   /dev/sdc1


Step 4: Create an ext4 file system

The RAID array is ready, but it cannot be used until we give it a file system and mount it. Let's create the file system and a mount point:

[root@lampblogs ~]# mkfs.ext4 /dev/md0
[root@lampblogs ~]# mkdir /data
[root@lampblogs ~]# mount /dev/md0 /data
[root@lampblogs ~]# cd /data
[root@lampblogs data]# touch a b c
[root@lampblogs data]# ls
a  b  c  lost+found

To mount the RAID 1 array automatically at boot, add an entry to the fstab file:

[root@lampblogs ~]# vi /etc/fstab
/dev/md0          /data         ext4    defaults        0 0
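Referencing the array as /dev/md0 usually works, but mounting by file-system UUID is more robust if the md device number ever changes across reboots. A hedged sketch of that variant (the UUID value is a placeholder, not taken from this article's system):

```shell
# Print the file-system UUID of the array; the value shown by blkid
# on your system replaces the <uuid-from-blkid> placeholder below.
blkid /dev/md0

# Equivalent /etc/fstab entry using the UUID instead of /dev/md0:
#   UUID=<uuid-from-blkid>   /data   ext4   defaults   0 0

# After editing fstab, verify the entry without rebooting (as root):
umount /data && mount -a && df -h /data
```

`mount -a` mounts everything listed in fstab that is not already mounted, so a typo in the new entry shows up immediately instead of at the next boot.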

Next, save the RAID configuration manually to the mdadm.conf file with the command below:

[root@lampblogs ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf
[root@lampblogs ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=lampblogs.com:0 UUID=5bc7cec3:5efe20a9:fd5b9a72:adeb2bd2
   devices=/dev/sdb1,/dev/sdc1

This configuration file is read by the system at boot so that the RAID devices are reassembled automatically.
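On CentOS 7 the initramfs also carries a copy of the array configuration. If the array is needed early in boot, or assembly problems appear after a reboot, common practice is to regenerate the initramfs after editing mdadm.conf. A sketch (run as root; this is general CentOS 7 advice, not a step shown in the original session):

```shell
# Rebuild the initramfs for the running kernel so it picks up
# the updated /etc/mdadm.conf (requires root).
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```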

The RAID 1 configuration is now done. Next we will test that our data survives losing one disk.

Step 5: Verify data after disk failure

Our main goal is that the data remains available even if a hard disk fails or crashes. To test this, we will remove one disk manually and then inspect the array:
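If you cannot physically pull a drive, mdadm itself can simulate the failure. A hedged sketch of the usual sequence (run as root; the device names follow this article's setup, where /dev/sdc1 is the member being removed):

```shell
# Mark the member as failed, then remove it from the array.
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1

# The array should now report "clean, degraded" with one active device.
mdadm --detail /dev/md0
```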

[root@lampblogs ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep  9 15:36:40 2019
        Raid Level : raid1
        Array Size : 8382464 (7.99 GiB 8.58 GB)
     Used Dev Size : 8382464 (7.99 GiB 8.58 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent
       Update Time : Mon Sep  9 16:11:32 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : lampblogs.com:0  (local to host lampblogs.com)
              UUID : 5bc7cec3:5efe20a9:fd5b9a72:adeb2bd2
            Events : 21

Number   Major   Minor   RaidDevice State
   0       8       17        0      active sync   /dev/sdb1
   -       0        0        1      removed

Now let's check whether our data is still present:

[root@lampblogs ~]# cd /data/
[root@lampblogs data]# ls
a  b  c  lost+found
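To restore redundancy, add a replacement partition (prepared exactly as in Step 2) back into the array; mdadm then rebuilds the mirror in the background. A sketch (run as root; /dev/sdc1 stands in for the replacement disk's partition):

```shell
# Add the replacement partition; the kernel starts a resync immediately.
mdadm /dev/md0 --add /dev/sdc1

# Watch the rebuild progress until the array shows [UU] again.
cat /proc/mdstat
```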

We have now successfully configured RAID 1 on CentOS 7.

 

 

 

 


