RAID upgrade

I have filesystems on LVMs on RAIDs on partitions. Updating a disk is then, arguably, rather non-trivial. These are my notes-to-self. Maybe they'll help you too?

My system

Software

# cat /etc/debian_version 
lenny/sid
# uname -a
Linux falcon 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008 x86_64 GNU/Linux
# mdadm --version
mdadm - v2.5.6 - 9 November 2006
# lvm version
  LVM version:     2.02.07 (2006-07-17)
  Library version: 1.02.08 (2006-07-17)
  Driver version:  4.7.0

Setup


# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      2637302400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid1 sdc3[0] sdd3[1]
      87891520 blocks [2/2] [UU]

md0 : active raid1 sda2[0] sdb2[2](S) sdd2[3](S) sdc2[1]
      8787456 blocks [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
      87891520 blocks [2/2] [UU]

unused devices: <none>

# pvs
  PV         VG            Fmt  Attr PSize  PFree 
  /dev/md1   volgrp_home   lvm2 a-   83.82G     0 
  /dev/md2   volgrp_home   lvm2 a-   83.82G 27.63G
  /dev/md3   volgrp_shared lvm2 a-    2.46T     0 

# vgs
  VG            #PV #LV #SN Attr   VSize   VFree 
  volgrp_home     2   1   0 wz--n- 167.63G 27.63G
  volgrp_shared   1   1   0 wz--n-   2.46T     0

# lvs
  LV        VG            Attr   LSize   Origin Snap%  Move Log Copy% 
  lv_home   volgrp_home   -wi-ao 140.00G
  lv_shared volgrp_shared -wi-ao   2.46T

[Image: NemoLVM.png]

The plan

By upgrading each disc in turn to a larger physical disc (1.5TB, 2TB, etc.), all levels of the stack can be grown and expanded.

I'll start with /dev/sda, keeping partitions 1, 2 and 3 the same size as now and making partition 4 (for md3 - /shared) larger. md3 will only be able to expand once ALL FOUR discs are enlarged!

...note: I may enlarge md1 and md2 for larger /home as well...
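
For md3 itself that growth only happens at the very end, once every disc carries the larger partition 4. From there the stack should grow from the bottom up, roughly like this (a sketch only, not yet run here; the +100%FREE syntax may not be available in an LVM as old as 2.02.07, in which case the free extent count from vgdisplay does the same job, and the last step assumes the LV holds an ext3 filesystem):

# mdadm --grow /dev/md3 --size=max
# pvresize /dev/md3
# lvextend -l +100%FREE /dev/volgrp_shared/lv_shared
# resize2fs /dev/volgrp_shared/lv_shared

The order matters: each layer can only grow into space the layer below has already exposed - array first, then PV, then LV, then filesystem.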

Methods

Hot removing a disc from a raid1

(spares already available)

# mdadm --fail /dev/md0 /dev/sda2
# mdadm --remove /dev/md0 /dev/sda2

You can watch a spare take over with

# cat /proc/mdstat

The rebuild onto the spare starts immediately after the --fail. No, you don't seem to get to choose which spare is used. (In my case md0 was running on drives a and c; I failed a and it rebuilt onto b. Since I plan for b to be the next drive out, failing it should rebuild onto d, leaving c and d active, with the newer a and b as spares - which I think works out sensibly.)
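
For a live view of the rebuild, or to confirm which device actually took over, both of these help (the --detail output layout differs a little between mdadm versions):

# watch -n 5 cat /proc/mdstat
# mdadm --detail /dev/md0

The first refreshes the rebuild progress every 5 seconds; the second lists each component's state and which devices are spares.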

TODO

  1. Remove sda3 (within the md1 raid1 - this has NO SPARE)
  2. Remove sda4 (within the md3 raid5)
  3. Replace drive physically
  4. Partition and join sda1, sda2, sda3 and sda4 into their respective MD devices
  5. Within LVM, enlarge the respective VG and LVs in turn
  6. Enlarge the filesystem (see the command sketch below).
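
A rough command-level sketch of steps 1 to 4, assuming the replacement disc appears as /dev/sda again and has been partitioned to match (partitions 1-3 as before, partition 4 larger). sda1 isn't part of any array in the mdstat above, so it's left out; double-check device names before running any of this:

# mdadm --fail /dev/md1 /dev/sda3
# mdadm --remove /dev/md1 /dev/sda3
# mdadm --fail /dev/md3 /dev/sda4
# mdadm --remove /dev/md3 /dev/sda4

...power down, swap the physical drive, partition the new one...

# mdadm --add /dev/md0 /dev/sda2
# mdadm --add /dev/md1 /dev/sda3
# mdadm --add /dev/md3 /dev/sda4

md1 has no spare, and removing sda4 leaves md3 without redundancy, so both run degraded between the --remove and the --add; the re-added sda3 and sda4 start resyncing immediately, while sda2 just becomes another spare for the already-complete md0. Steps 5 and 6 are the grow chain sketched under "The plan", and for md3 they only pay off once all four discs have been through this cycle.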