RAID upgrade

For my home server, I have filesystems on LVMs on RAIDs on partitions. Upgrading a disk is then, arguably, rather non-trivial. However the intended benefit of this setup is, hopefully, that I can roll over the oldest disk in the array every year or so, and so the whole lot grows incrementally as needed. "Live" expansion at all levels means I should never have to create a new filesystem and copy data over, as per historic efforts.

These are my notes-to-self as of the time leading up to my first hardware change. Prior to this, all disks are identical in size. There will be no significant size benefit until the fourth (smallest) disk is upgraded. After that, every upgrade (of the smallest disk - presumably replacing it to become the new 'largest') will yield a size increase, based upon the limit set by the 'new' smallest (oldest) disk.

This is not optimal use of available disk space for any given drive over its life. However, it is hopefully rather nice in terms of budgetary upgrade requirements! :)

Pros

  • Rolling upgrades are win.
    • response: rolling upgrades are the planning headache

Cons

  • Each size increase is predicated on the drive purchased three drives back! So 'instant embiggening' is difficult.
    • response: you don't use this system unless you plan to think ahead anyway. Also, the old drive being taken out could be put into an external USB caddy for additional space
    • response two: the 'unused' space on each drive (i.e. the size difference between it and the smallest/oldest) could be partitioned into usable non-RAID emergency space too!

My system

Software

# cat /etc/debian_version 
lenny/sid
# uname -a
Linux falcon 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008 x86_64 GNU/Linux
# mdadm --version
mdadm - v2.5.6 - 9 November 2006
# lvm version
  LVM version:     2.02.07 (2006-07-17)
  Library version: 1.02.08 (2006-07-17)
  Driver version:  4.7.0

Setup


# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      2637302400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid1 sdc3[0] sdd3[1]
      87891520 blocks [2/2] [UU]

md0 : active raid1 sdb2[0] sdc2[2](S) sdd2[3](S) sda2[1]
      8787456 blocks [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
      87891520 blocks [2/2] [UU]

unused devices: <none>

# pvs
  PV         VG        Fmt  Attr PSize  PFree 
  /dev/md1   vg_home   lvm2 a-   83.82G     0 
  /dev/md2   vg_home   lvm2 a-   83.82G     0
  /dev/md3   vg_shared lvm2 a-    2.46T     0 

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree 
  vg_home     2   1   0 wz--n- 167.63G     0
  vg_shared   1   1   0 wz--n-   2.46T     0

# lvs
  LV        VG        Attr   LSize   Origin Snap%  Move Log Copy% 
  lv_home   vg_home   -wi-ao 167.63G
  lv_shared vg_shared -wi-ao   2.46T

File:NemoLVM.png

The plan

Upgrading each disc in turn to a larger physical disc... (1.5TB or 2TB, etc), all levels can be grown and expanded...

I'll start with /dev/sdd and work backwards (sda and sdb are the enterprise drive and the quiet drive, respectively), making each partition larger as needed. Because of the change from MBR to EFI (GPT) partition tables, I'm no longer limited to four partitions - so instead of leaving unusable space in an oversized partition for /shared, I'll create an extra partition out of that space to use, and repartition into this space once all the drives can do so.

So - like this (a rough command sketch follows the list)

  1. within mdadm
    1. Remove sdd2 from within the md0 raid1 (this array has 2 spares)
    2. Remove sdd3 from within the md2 raid1 (this has NO SPARE)
    3. Remove sdd4 from within the md3 raid5
  2. Hardware setup
    1. Replace drive physically
    2. Partition
  3. in mdadm again
    1. join sdd1, sdd2, sdd3 and sdd4 into their respective MD devices
  4. within LVM
    1. enlarge the respective VG and LVs in turn
  5. Finally, enlarge the filesystem.
  6. do "stuff" with any spare partitions (possibly mirror and ...)

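Pulled together, the whole cycle for one drive should look roughly like the commands below. This is only a sketch: the array/partition pairings come from the mdstat above, the final grow/resize steps only apply once ALL four drives are larger, and the %FREE syntax needs checking against this (old) lvm2 - fall back to an explicit -L+size if it isn't supported. sdd1 isn't in any of the arrays shown, so it's left out here.

# mdadm --fail /dev/md0 /dev/sdd2 ; mdadm --remove /dev/md0 /dev/sdd2
# mdadm --fail /dev/md2 /dev/sdd3 ; mdadm --remove /dev/md2 /dev/sdd3
# mdadm --fail /dev/md3 /dev/sdd4 ; mdadm --remove /dev/md3 /dev/sdd4
  ... power down, swap in the new drive, partition it (GPT) ...
# mdadm --add /dev/md0 /dev/sdd2
# mdadm --add /dev/md2 /dev/sdd3
# mdadm --add /dev/md3 /dev/sdd4
  ... wait for resync; then, once all four drives are larger ...
# mdadm --grow /dev/md3 --size=max
# pvresize /dev/md3
# lvextend -l +100%FREE /dev/vg_shared/lv_shared
# resize2fs /dev/vg_shared/lv_shared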

Implemented

Removing partition from a raid1 with spares

# mdadm --fail /dev/md0 /dev/sdd2
# mdadm --remove /dev/md0 /dev/sdd2

You can watch a spare take over with

# cat /proc/mdstat

This process starts immediately after the --fail. No, you don't get to choose which spare will be used; there is an internal order (mdstat shows it inside the [] brackets). In my case I was using a and c. When I failed a, it spared over to b. Yet I plan for 'b' to be the next drive out, and so it'll spare over to d later and I'll have c and d running (with a and b, the newer drives, as spares) - so I think that works out sensibly :)
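
An easier way to see which members are active and which are spares (rather than decoding the [] numbers) is mdadm's own detail output:

# mdadm --detail /dev/md0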

Finally, I can now add that drive back as the oldest spare... just for kicks till it needs to be pulled physically...

# mdadm  --add /dev/md0 /dev/sdd2


TODO

Removing partition from a raid1 without spares

This should be the same as the raid1 with spares, except that it'll remain a degraded raid until the new drive goes in and it's repaired.
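
Presumably the same two commands, just against md2 (the no-spare raid1 that sdd3 lives in) - untested as yet, so treat as a sketch:

# mdadm --fail /dev/md2 /dev/sdd3
# mdadm --remove /dev/md2 /dev/sdd3
  ... md2 shows [2/1] [U_] in /proc/mdstat until the new sdd3 goes in ...
# mdadm --add /dev/md2 /dev/sdd3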

Removing partition from a raid5 without spares

Same again. Fail a drive, and repair when the new drive is in?
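
Again a sketch only, assuming it really is the same fail/remove/add dance against md3 - the difference being that a degraded raid5 has no redundancy at all until the resync completes:

# mdadm --fail /dev/md3 /dev/sdd4
# mdadm --remove /dev/md3 /dev/sdd4
  ... replace the drive ...
# mdadm --add /dev/md3 /dev/sdd4
# cat /proc/mdstat
  ... shows the rebuild progress for md3 ...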

Physically swapping disks and partitioning

Note that the existing 1TB disks are partitioned using the traditional MBR style. This fails for disks over 2TiB, and EFI (GPT) partitioning (http://en.wikipedia.org/wiki/GUID_Partition_Table) will be required. There should NOT be an issue with mixing these? Also, more than 4 partitions. :)
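
For the record, md shouldn't care which style of partition table the member partitions come from - it only sees the block devices. A partitioning sketch for the new /dev/sdd with parted follows; the <end> values are placeholders, to be matched against the old layout (sdd2 wants ~8.4G for md0, sdd3 ~84G for md2, and sdd4 gets the rest).

# parted /dev/sdd
(parted) mklabel gpt
(parted) mkpart sdd1 1MiB <end1>
(parted) mkpart sdd2 <end1> <end2>
(parted) mkpart sdd3 <end2> <end3>
(parted) mkpart sdd4 <end3> 100%
(parted) set 2 raid on
(parted) set 3 raid on
(parted) set 4 raid on
(parted) quit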

Adding new partitions to existing raid1 and raid5 devices

This is an mdadm command. Something like --add or --re-add. Tired now. Read doco later.
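
If so, it's probably just the same --add as used above, once per array (a guess to be checked against the man page):

# mdadm --add /dev/md0 /dev/sdd2
# mdadm --add /dev/md2 /dev/sdd3
# mdadm --add /dev/md3 /dev/sdd4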

Embiggening volume groups

In fact we embiggen the physical volume - the volume group is just the sum of all the physical volumes in it.

pvresize /dev/md3
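
Note (to be confirmed in practice): the PV only grows once the underlying md device has grown, which for md3 means after all four drives have their larger sdX4 partitions in place - something like:

# mdadm --grow /dev/md3 --size=max
# pvresize /dev/md3
# pvs
  ... PFree on /dev/md3 should now show the newly available space ...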

Embiggening logical volumes

lvextend -L+100G /dev/vg_name/lv_name

...or lvresize? (lvresize can both grow and shrink; lvextend only grows)
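
To grab everything the VG has free in one go (assuming this lvm2 build understands the %FREE syntax - otherwise stick with an explicit -L+size):

# lvextend -l +100%FREE /dev/vg_shared/lv_shared
# lvs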

Embiggening ext3 filesystem

e2fsck -f /dev/vg_name/lv_name
resize2fs /dev/vg_name/lv_name
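
That's the unmounted path. resize2fs should also be able to grow ext3 online (while mounted) on this kernel, in which case the fsck isn't needed first - worth a test on something non-critical:

# resize2fs /dev/vg_shared/lv_shared
# df -h /shared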

External reference
