GRUB2 MD RAID detection order


From: Jérôme Poulin
Subject: GRUB2 MD RAID detection order
Date: Mon, 10 Jan 2011 16:08:35 -0500

Hello,

After having problems detecting RAID arrays on my computer, and after some
discussion on IRC, I came across a problem with RAID detection in GRUB2
under certain conditions. Details of my current setup are at the bottom of
this message.
In my setup, my 4 disks are partitioned using GPT, 320 GB each, with a
protective MBR partition of type 0xEE.
Partition 1 on each disk is type EF02 (BIOS boot) for GRUB, starting at sector 2048.
Partition 2 on each disk is type FD00, a 4-member RAID1 for /boot.
Partition 3 on each disk is type FD00, a 4-member RAID5 used as an LVM PV; it
takes the rest of the disk, which means it extends to the last available
non-GPT sector.
Both arrays use metadata format 0.90.

When GRUB detects the RAID1 I get (md0), which is OK. But when it detects
the RAID5, it treats the array as starting at sector 0 of the disk, because
it finds the superblock by probing the whole disk instead of the GPT
partition, while still truncating the length to the value in the superblock.
So I get (md1,gpt1), (md1,gpt2) and (md1,gpt3), and (md1,gpt3) does not show
up correctly; otherwise LVM would, I guess, have detected it anyway.
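
For reference, here is a small sketch (plain C, outside GRUB, using the
numbers from the gdisk and fdisk output below) of why probing the whole disk
finds the same superblock as probing the partition. It assumes the standard
0.90 placement rule: the superblock sits in the last 64 KiB-aligned 64 KiB
block of the device. Because my partition 3 runs almost to the end of the
disk, both calculations land on the same sector:

/* Sketch: where a 0.90 superblock lives.  Assumption: the standard
 * rule of 64 KiB reserved at the end of the device, aligned down to a
 * 64 KiB boundary. */
#include <stdio.h>
#include <stdint.h>

#define MD_RESERVED_SECTORS 128ULL   /* 64 KiB / 512 */

/* offset (in sectors, from the start of the device) of the 0.90 superblock */
static uint64_t sb_offset(uint64_t dev_sectors)
{
    return (dev_sectors & ~(MD_RESERVED_SECTORS - 1)) - MD_RESERVED_SECTORS;
}

int main(void)
{
    /* numbers taken from the gdisk/fdisk output below */
    uint64_t disk_sectors = 625142448ULL;   /* whole /dev/sda  */
    uint64_t part_start   = 1052672ULL;     /* /dev/sda3 start */
    uint64_t part_end     = 625142414ULL;   /* /dev/sda3 end   */
    uint64_t part_sectors = part_end - part_start + 1;

    printf("superblock probed via whole disk: sector %llu\n",
           (unsigned long long)sb_offset(disk_sectors));
    printf("superblock probed via partition:  sector %llu\n",
           (unsigned long long)(part_start + sb_offset(part_sectors)));
    /* both print 625142272, so a whole-disk scan "finds" the array too */
    return 0;
}

That is also exactly why mdadm -E below prints the same superblock for
/dev/sda and /dev/sda3.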

So the options presented to us to fix this are:
1. Check for RAID in partitions first, then on the whole disk.
2. Check the minor number in the superblock to see whether it is divisible
by 16, which would mean the member is a whole disk; however, I guess this is
SCSI/SATA-centric.
3. What I currently implemented, because I don't know GRUB2 well enough:
compare the size in the superblock against the size of the currently probed
device, plus a margin, to see whether the array contents actually fit the
probed device (see the sketch after this list).
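
To be clear about option 3, here is a rough sketch of the idea only (not my
actual patch; the helper name and the margin value are made up for this
example). The margin has to absorb the superblock, the internal bitmap and
the size rounding of the 0.90 format, which in my case is about 6 MiB per
member, while the whole disk is about 520 MiB larger than the member size,
so there is plenty of room between the two:

/* Sketch of option 3 (illustration only; the margin value and the
 * names are assumptions, not GRUB code): accept a 0.90 superblock
 * only if the device it was found on is not much larger than the
 * member size recorded in that superblock. */
#include <stdio.h>
#include <stdint.h>

/* Assumed margin: 64 MiB in sectors, enough for superblock + bitmap +
 * rounding, but well under the whole-disk excess. */
#define RAID_SIZE_MARGIN_SECTORS (64ULL * 1024 * 1024 / 512)

/* sb_size_sectors: "Used Dev Size" from the superblock, in sectors.
 * dev_sectors:     size of the device the superblock was found on.  */
static int superblock_fits_device(uint64_t sb_size_sectors,
                                  uint64_t dev_sectors)
{
    if (sb_size_sectors > dev_sectors)
        return 0;   /* member data would not even fit on the device */
    if (dev_sectors > sb_size_sectors + RAID_SIZE_MARGIN_SECTORS)
        return 0;   /* device far too big: probably the whole disk  */
    return 1;
}

int main(void)
{
    uint64_t used_dev_size = 312038400ULL * 2;  /* KiB -> sectors   */
    uint64_t sda3_sectors  = 624089743ULL;      /* from gdisk below */
    uint64_t sda_sectors   = 625142448ULL;      /* from fdisk below */

    printf("accept on sda3: %d\n",
           superblock_fits_device(used_dev_size, sda3_sectors));
    printf("accept on sda:  %d\n",
           superblock_fits_device(used_dev_size, sda_sectors));
    /* prints 1 for the partition and 0 for the whole disk */
    return 0;
}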

I know I could have worked around this by shrinking my last GPT partition a
bit, but I guess other people will run into this problem as bigger disks
start using GPT; then again, maybe metadata 0.90 will have started
disappearing by then.

Notice, in the mdadm -E output below, how sda and sda3 report the same superblock.

md0 : active raid1 sdc2[0] sdd2[3] sda2[2] sdb2[1]
     524224 blocks [4/4] [UUUU]
     bitmap: 0/64 pages [0KB], 4KB chunk

md1 : active raid5 sdd3[0] sda3[3] sdc3[2] sdb3[1]
     936115200 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     bitmap: 4/149 pages [16KB], 1024KB chunk



mdadm -E /dev/sda
/dev/sda:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : cdf20521:cfed4cc3:130ede86:cb56fef4
 Creation Time : Sun May 17 16:53:26 2009
    Raid Level : raid5
 Used Dev Size : 312038400 (297.58 GiB 319.53 GB)
    Array Size : 936115200 (892.75 GiB 958.58 GB)
  Raid Devices : 4
 Total Devices : 4
Preferred Minor : 1

   Update Time : Mon Jan 10 15:37:30 2011
         State : clean
Internal Bitmap : present
Active Devices : 4
Working Devices : 4
Failed Devices : 0
 Spare Devices : 0
      Checksum : cfb6f02f - correct
        Events : 3676708

        Layout : left-symmetric
    Chunk Size : 64K

     Number   Major   Minor   RaidDevice State
this     3       8        3        3      active sync   /dev/sda3

  0     0       8       51        0      active sync   /dev/sdd3
  1     1       8       19        1      active sync   /dev/sdb3
  2     2       8       35        2      active sync   /dev/sdc3
  3     3       8        3        3      active sync   /dev/sda3
p4 ~ # mdadm -E /dev/sda3
/dev/sda3:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : cdf20521:cfed4cc3:130ede86:cb56fef4
 Creation Time : Sun May 17 16:53:26 2009
    Raid Level : raid5
 Used Dev Size : 312038400 (297.58 GiB 319.53 GB)
    Array Size : 936115200 (892.75 GiB 958.58 GB)
  Raid Devices : 4
 Total Devices : 4
Preferred Minor : 1

   Update Time : Mon Jan 10 15:37:30 2011
         State : clean
Internal Bitmap : present
Active Devices : 4
Working Devices : 4
Failed Devices : 0
 Spare Devices : 0
      Checksum : cfb6f02f - correct
        Events : 3676708

        Layout : left-symmetric
    Chunk Size : 64K

     Number   Major   Minor   RaidDevice State
this     3       8        3        3      active sync   /dev/sda3

  0     0       8       51        0      active sync   /dev/sdd3
  1     1       8       19        1      active sync   /dev/sdb3
  2     2       8       35        2      active sync   /dev/sdc3
  3     3       8        3        3      active sync   /dev/sda3




p4 ~ # gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.6.9

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 24746079-21E4-4960-B9B5-479955CC7462
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625142414
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  GRUB2
   2            4096         1052671   512.0 MiB   FD00  RAID1 Boot
   3         1052672       625142414   297.6 GiB   FD00  RAID5 LVM



p4 ~ # fdisk -luc /dev/sda
Disk /dev/sda: 320.1 GB, 320072933376 bytes
256 heads, 63 sectors/track, 38761 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1   625142447   312571223+  ee  GPT


