RAID upgrade
From ThorxWiki
Revision as of 12:29, 17 February 2010
I have filesystems on LVMs on RAIDs on partitions. Updating a disk is then, arguably, rather non-trivial. These are my notes-to-self. Maybe they'll help you too?
== My system ==
<pre>
# cat /etc/debian_version
lenny/sid
# uname -a
Linux falcon 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008 x86_64 GNU/Linux
# mdadm --version
mdadm - v2.5.6 - 9 November 2006
# lvm version
  LVM version:     2.02.07 (2006-07-17)
  Library version: 1.02.08 (2006-07-17)
  Driver version:  4.7.0
</pre>
== My setup ==
<pre>
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      2637302400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid1 sdc3[0] sdd3[1]
      87891520 blocks [2/2] [UU]

md0 : active raid1 sda2[0] sdb2[2](S) sdd2[3](S) sdc2[1]
      8787456 blocks [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
      87891520 blocks [2/2] [UU]

unused devices: <none>

# pvs
  PV       VG            Fmt  Attr PSize  PFree
  /dev/md1 volgrp_home   lvm2 a-   83.82G      0
  /dev/md2 volgrp_home   lvm2 a-   83.82G 27.63G
  /dev/md3 volgrp_shared lvm2 a-    2.46T      0

# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  volgrp_home     2   1   0 wz--n- 167.63G 27.63G
  volgrp_shared   1   1   0 wz--n-   2.46T      0

# lvs
  LV        VG            Attr   LSize   Origin Snap%  Move Log Copy%
  lv_home   volgrp_home   -wi-ao 140.00G
  lv_shared volgrp_shared -wi-ao   2.46T
</pre>
== Hot removing a disc from a raid1 ==
(spares already available)
<pre>
# mdadm --fail /dev/md0 /dev/sda2
# mdadm --remove /dev/md0 /dev/sda2
</pre>
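Once the old disc has been physically swapped and the replacement partitioned to match, the new partition goes back into the array with --add. A sketch, assuming the replacement comes up as /dev/sda2 again (device names may differ after a swap):

<pre>
# mdadm --add /dev/md0 /dev/sda2
</pre>

The new partition joins as a spare, or starts resyncing immediately if the array is degraded.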
You can watch the spares take over with
<pre>
# cat /proc/mdstat
</pre>
The resync onto the spare starts immediately after the --fail; the --remove merely detaches the failed member.
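While a spare is syncing, /proc/mdstat shows a progress line for the affected array. An illustrative example of the format (not captured from this system; the figures are invented):

<pre>
md0 : active raid1 sdb2[2] sdc2[1]
      8787456 blocks [2/1] [_U]
      [==>..................]  recovery = 12.6% (1109760/8787456) finish=6.6min speed=19276K/sec
</pre>

Running it under watch (e.g. <tt>watch -n2 cat /proc/mdstat</tt>) is a convenient way to follow the recovery to completion.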