RAID upgrade
For my home server, I have filesystems on LVMs on RAIDs on partitions. Upgrading a disk is then, arguably, rather non-trivial. However, the intended benefit of this setup is, hopefully, that I can roll over the oldest disk in the array every year or so, and so the whole lot grows incrementally as needed. "Live" expansion on all levels means I should never have to create a new filesystem and copy data over, as per historic efforts.
These are my notes-to-self as of the time leading up to my first hardware change. Prior to this all disks are identical in size. There will be no significant size benefit until the fourth disk (smallest) is upgraded. After that, every upgrade (of the smallest disk - presumably replacing it to become the new 'largest') will yield a size increase - based upon the limits set by the 'new' smallest (oldest) disk.
This is not optimal use of the available disk space for any given drive over its life. However, it is hopefully rather nice in terms of budgetary upgrade requirements! :)
My system
Software
<pre>
# cat /etc/debian_version
lenny/sid
# uname -a
Linux falcon 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008 x86_64 GNU/Linux
# mdadm --version
mdadm - v2.5.6 - 9 November 2006
# lvm version
  LVM version:     2.02.07 (2006-07-17)
  Library version: 1.02.08 (2006-07-17)
  Driver version:  4.7.0
</pre>
Setup
<pre>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      2637302400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid1 sdc3[0] sdd3[1]
      87891520 blocks [2/2] [UU]

md0 : active raid1 sda2[0] sdb2[2](S) sdd2[3](S) sdc2[1]
      8787456 blocks [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
      87891520 blocks [2/2] [UU]

unused devices: <none>

# pvs
  PV         VG            Fmt  Attr PSize  PFree
  /dev/md1   volgrp_home   lvm2 a-   83.82G      0
  /dev/md2   volgrp_home   lvm2 a-   83.82G 27.63G
  /dev/md3   volgrp_shared lvm2 a-    2.46T      0

# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  volgrp_home     2   1   0 wz--n- 167.63G 27.63G
  volgrp_shared   1   1   0 wz--n-   2.46T      0

# lvs
  LV        VG            Attr   LSize   Origin Snap%  Move Log Copy%
  lv_home   volgrp_home   -wi-ao 140.00G
  lv_shared volgrp_shared -wi-ao   2.46T
</pre>
The plan
By upgrading each disc in turn to a larger physical disc (1.5TB or 2TB, etc.), all levels can be grown and expanded.
I'll start with /dev/sda, initially making partitions 1,2,3 the same, and partition 4 (for md3 - /shared) larger. md3 will only be able to expand once ALL FOUR discs are enlarged!
...note: I may enlarge md1 and md2 for larger /home as well...
So - like this:
- within mdadm
  - Remove sda2 from within the md0 raid1 (this array has 2 spares)
  - Remove sda3 from within the md1 raid1 (this has NO SPARE)
  - Remove sda4 from within the md3 raid5
- Hardware setup
  - Replace drive physically
  - Partition (see the sfdisk sketch below)
- in mdadm again
  - join sda1, sda2, sda3 and sda4 into their respective MD devices
- within LVM
  - enlarge the respective VG and LVs in turn
- Finally, enlarge the filesystem.
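One hypothetical way to do the partitioning step (any partitioner will do; sfdisk shown here, and the dump filename is made up) is to use a surviving disc's layout as a template and enlarge the last partition by hand:

<pre>
# sfdisk -d /dev/sdb > new-layout.dump   # dump an existing disc's partition table as a template
# vi new-layout.dump                     # keep partitions 1-3 as they are, enlarge partition 4
# sfdisk /dev/sda < new-layout.dump      # write the edited layout to the replacement disc
</pre>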
Implemented
Removing partition from a raid1 with spares
<pre>
# mdadm --fail /dev/md0 /dev/sda2
# mdadm --remove /dev/md0 /dev/sda2
</pre>
You can watch a spare take over with
# cat /proc/mdstat
This process starts immediately after the --fail. No, you don't get to choose which spare will be used. There is an internal order (mdstat shows it inside [] brackets). In my case, I was using a and c. When I failed a, it spared to b. Yet I plan for 'b' to be the next drive out, and so it'll spare over to d later and I'll have c and d running! (with a and b as newer drives as spares, so I think that works out sensibly :)
Finally, I can now add that drive back as the oldest spare... just for kicks till it needs to be pulled physically...
# mdadm --add /dev/md0 /dev/sda2
TODO
Removing partition from a raid1 without spares
This should be the same as the raid1 with spares, except that it'll remain a degraded raid until the new drive goes in and it's repaired.
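Presumably the same fail/remove pair as above, just with nothing standing by to take over (a sketch using md1 and sda3 from the plan, untested):

<pre>
# mdadm --fail /dev/md1 /dev/sda3
# mdadm --remove /dev/md1 /dev/sda3
# cat /proc/mdstat    # md1 should show [2/1] [_U] until the new partition is added and resynced
</pre>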
Removing partition from a raid5 without spares
Same again. Fail a drive, and repair when the new drive is in?
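Untested again, but presumably the same pattern on md3, bearing in mind the raid5 has no redundancy at all while it runs degraded:

<pre>
# mdadm --fail /dev/md3 /dev/sda4
# mdadm --remove /dev/md3 /dev/sda4
# cat /proc/mdstat    # md3 should show [4/3] [_UUU] until the replacement partition is added
</pre>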
Adding new partitions to existing raid1 and raid5 devices
This is an mdadm command - something like --build or --repair or similar. Tired now; read the doco later.
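If it turns out to be the same --add already used above for re-adding the old partition as a spare, the whole step might look like this (guesswork until the doco is read):

<pre>
# mdadm --add /dev/md0 /dev/sda2
# mdadm --add /dev/md1 /dev/sda3
# mdadm --add /dev/md3 /dev/sda4
</pre>

Once all four discs carry the bigger partition 4, md3 itself presumably also needs growing, something like 'mdadm --grow /dev/md3 --size=max', before LVM can see the extra space.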
Embiggening volume groups
Some magical vg* command
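If no new PV is being added, the trick is probably pvresize on the grown md device, which expands the PV (and hence the VG) in place; vgextend would only be needed to bring in a whole new PV:

<pre>
# pvresize /dev/md3    # grow the PV to fill the now-larger md3
# vgs                  # volgrp_shared should show the extra space as VFree
</pre>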
Embiggening logical volumes
Some magical lv* commands?
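Most likely lvextend, with the amount being whatever the VG now has free (the +400G here is a made-up example):

<pre>
# lvextend -L +400G /dev/volgrp_shared/lv_shared   # grow the LV by a fixed amount
# lvs                                              # check the new LSize
</pre>

(Newer lvm releases also take -l +100%FREE to grab everything in one go.)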
Embiggening ext3 filesystem
Some magical resize2fs command?
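Probably just resize2fs on the LV; with a recent enough kernel ext3 can be grown online (mounted), otherwise unmount and fsck -f first:

<pre>
# resize2fs /dev/volgrp_shared/lv_shared   # with no size given, grows the fs to fill the LV
# df -h /shared                            # confirm the extra space is visible
</pre>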