RAID upgrade

I have filesystems on LVMs on RAIDs on partitions. Updating a disk is then, arguably, rather non-trivial. These are my notes-to-self. Maybe they'll help you too?

My system

Software

# cat /etc/debian_version 
lenny/sid
# uname -a
Linux falcon 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008 x86_64 GNU/Linux
# mdadm --version
mdadm - v2.5.6 - 9 November 2006
# lvm version
  LVM version:     2.02.07 (2006-07-17)
  Library version: 1.02.08 (2006-07-17)
  Driver version:  4.7.0

Setup


# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      2637302400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid1 sdc3[0] sdd3[1]
      87891520 blocks [2/2] [UU]

md0 : active raid1 sda2[0] sdb2[2](S) sdd2[3](S) sdc2[1]
      8787456 blocks [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
      87891520 blocks [2/2] [UU]

unused devices: <none>

# pvs
  PV         VG            Fmt  Attr PSize  PFree 
  /dev/md1   volgrp_home   lvm2 a-   83.82G     0 
  /dev/md2   volgrp_home   lvm2 a-   83.82G 27.63G
  /dev/md3   volgrp_shared lvm2 a-    2.46T     0 

# vgs
  VG            #PV #LV #SN Attr   VSize   VFree 
  volgrp_home     2   1   0 wz--n- 167.63G 27.63G
  volgrp_shared   1   1   0 wz--n-   2.46T     0

# lvs
  LV        VG            Attr   LSize   Origin Snap%  Move Log Copy% 
  lv_home   volgrp_home   -wi-ao 140.00G
  lv_shared volgrp_shared -wi-ao   2.46T

[Image: NemoLVM.png]

The plan

The idea: upgrade each disc in turn to a larger physical disc (1.5TB or 2TB, etc). Once the discs are bigger, every level of the stack - partitions, RAID, LVM, filesystem - can be grown in turn.

I'll start with /dev/sda, initially keeping partitions 1, 2 and 3 the same size, and making partition 4 (for md3 - /shared) larger. md3 will only be able to expand once ALL FOUR discs are enlarged!

...note: I may enlarge md1 and md2 for larger /home as well...

Methods

So - like this:

  1. within mdadm
    1. Remove sda2 from within the md0 raid1 (this array has 2 spares)
    2. Remove sda3 from within the md1 raid1 (this has NO SPARE)
    3. Remove sda4 from within the md3 raid5
  2. Hardware setup
    1. Replace drive physically
    2. Partition (see the partitioning sketch after this list)
  3. in mdadm again
    1. join sda1, sda2, sda3 and sda4 into their respective MD devices
  4. within LVM
    1. enlarge the respective VG and LVs in turn
  5. Finally, enlarge the filesystem.
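For the partitioning step (2.2 above), one low-effort approach - assuming sfdisk behaves here as I expect - is to clone the layout from a surviving disc onto the new one, then delete and recreate partition 4 to soak up the extra space (keeping type fd, Linux raid autodetect):

# sfdisk -d /dev/sdb | sfdisk /dev/sda
# fdisk /dev/sda

The first command copies sdb's partition table verbatim onto the new sda; the fdisk session is then just: delete partition 4, recreate it over all the remaining space, set type fd, write.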


Implemented

Removing partition from a raid1 with spares

(spares already available)

# mdadm --fail /dev/md0 /dev/sda2
# mdadm --remove /dev/md0 /dev/sda2

You can watch a spare take over with

# cat /proc/mdstat

This process starts immediately after the --fail. No, you don't get to choose which spare will be used: there is an internal order (/proc/mdstat shows it inside the [] brackets). In my case, I was using a and c. Failing a, it spared over to b. Yet I plan for b to be the next drive out, so it'll then spare over to d - leaving c and d running, with a and b (the newer drives) as spares. I think that works out sensibly :)

Finally, I can now add that drive back as the oldest spare... just for kicks till it needs to be pulled physically...

# mdadm --add /dev/md0 /dev/sda2


TODO

Removing partition from a raid1 without spares
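Not implemented yet, but the expectation (untested, so treat as a sketch) is the same --fail/--remove dance as above - the difference being that md1 simply runs degraded until a new partition is added back:

# mdadm --fail /dev/md1 /dev/sda3
# mdadm --remove /dev/md1 /dev/sda3
# cat /proc/mdstat

mdstat should show md1 as [2/1] [_U] - working, but with no redundancy until the re-add.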

Removing partition from a raid5 without spares
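Presumably the same again (untested sketch), just against md3. A degraded raid5 has no remaining redundancy either, so this is the nail-biting window:

# mdadm --fail /dev/md3 /dev/sda4
# mdadm --remove /dev/md3 /dev/sda4
# cat /proc/mdstat

Expect md3 to show [4/3] [_UUU] until the replacement partition is back in and resynced.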

Adding new partitions to existing raid1 and raid5 devices
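--add should cover this for both raid levels (same command as used on md0 above); the arrays then resync onto the new partitions. Once ALL FOUR discs carry the enlarged partition 4, md3 itself can - if my reading of mdadm --grow is right - be grown to the new component size:

# mdadm --add /dev/md1 /dev/sda3
# mdadm --add /dev/md3 /dev/sda4
# mdadm --grow /dev/md3 --size=max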

Embiggening volume groups
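The VG grows when its PVs grow. Once the underlying md device is bigger, pvresize (assuming this lvm2 version has it) should make LVM notice, and the new space then appears as VFree in the VG:

# pvresize /dev/md3
# vgs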

Embiggening logical volumes
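lvextend then grows the LV into the VG's free space. Sizes here are illustrative (27G matching the current VFree of volgrp_home):

# lvextend -L +27G /dev/volgrp_home/lv_home
# lvs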

Embiggening ext3 filesystem
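resize2fs is the last step; with no size argument it grows the filesystem to fill the LV. A 2.6.18 kernel should manage ext3 growth online (mounted), though running it offline after a forced fsck is the cautious route:

# umount /home
# e2fsck -f /dev/volgrp_home/lv_home
# resize2fs /dev/volgrp_home/lv_home
# mount /home

(This assumes lv_home is the /home filesystem with an fstab entry; adjust to taste.)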
