I have a server which has 1 drive holding the O/S, and 4-off 500GB drives as two pairs of RAID1 (mdraid) arrays.
I want to replace the 500GB drives with 1TB drives.
What is the best way to do this?
Ideally I'd do it offline, just copy (and resize) the partitions across and then put the disks back into the machine and boot it. But I don't know if there's any software that would allow me to do that.
So all suggestions welcome.
(I don't recall now why I have 2-off RAID1 arrays; a single RAID 5 array might be a better use of the drives, in which case the same question arises: how do I migrate to that configuration?)
** Mark Rogers mark@quarella.co.uk [2009-08-14 15:07]:
I have a server which has 1 drive holding the O/S, and 4-off 500GB drives as two pairs of RAID1 (mdraid) arrays.
I want to replace the 500GB drives with 1TB drives.
What is the best way to do this?
Ideally I'd do it offline, just copy (and resize) the partitions across and then put the disks back into the machine and boot it. But I don't know if there's any software that would allow me to do that.
So all suggestions welcome.
(I don't recall now why I have 2-off RAID1 arrays; a single RAID 5 array might be a better use of the drives, in which case the same question arises: how do I migrate to that configuration?)
** end quote [Mark Rogers]
I did this a while back and it was quite straightforward with the RAID mirrors I use. In simple terms the process was this:
o I dropped one of the drives out of the mirror and fitted the first of the new drives (that gave me an instantly safe backup for the process)
o Then I partitioned the new drive using the same layout as the original, but with the new partition sizes (each of equal or larger size than the originals)
o Next I resynced all the partitions so I had the same setup on one old drive and one new drive (wasting the extra space for the time being)
o Once this had completed safely I dropped the second old drive out of the mirror and fitted the second new drive in its place
o Then, as with the first, I partitioned using the same layout, but with the new partition sizes
o Next it was a case of syncing the drives so that I now had the exact same setup as before on the new drives, but with wasted space on the partitions
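In mdadm terms, and assuming the mirror in question is /dev/md5 with the outgoing drive at /dev/sdb (the device names here are only examples), that first drive swap is roughly:

    # drop the old drive's partition out of the mirror
    mdadm /dev/md5 --fail /dev/sdb1
    mdadm /dev/md5 --remove /dev/sdb1
    # power down, swap in the new drive, then partition it with the
    # same layout but larger partitions (type fd, Linux raid autodetect)
    fdisk /dev/sdb
    # add the new partition and let the mirror resync
    mdadm /dev/md5 --add /dev/sdb1
    watch cat /proc/mdstat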
Now comes the 'clever' bit:
o First you need to grow the RAID to use the extra space available in the partition with something like: mdadm --grow /dev/md5 --size=max
o Next unmount the partition and use e2fsck -f to check it
o Then you can resize the filesystem on the RAID with something like: resize2fs /dev/md5
o Finally mount the drive again.
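Strung together, and still assuming /dev/md5 as the array name, that stage looks something like this (as described above, the filesystem is unmounted while it is checked and resized):

    # grow the array to fill the now larger partitions
    mdadm --grow /dev/md5 --size=max
    # unmount, check, resize and then remount the filesystem
    umount /dev/md5
    e2fsck -f /dev/md5
    resize2fs /dev/md5
    # remount (relies on an fstab entry for /dev/md5)
    mount /dev/md5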
Of course I was lucky in that I didn't need to resize /var, but I did hit a clash doing this to /, and I think I used a boot CD for that, although my notes seem to be missing that bit for some reason!
Of course I should finish with the disclaimer that this worked fine for me, but make sure you have backups, make sure you understand what is happening as you go along, and do it at your own risk :)
I would assume that the same process could be used with other RAID formats, but I've not played with migrating between RAID types, which I would suspect would be more complex given the differing nature of the layout of data on the disks used.
Paul Tansom wrote:
Now comes the 'clever' bit:
o First you need to grow the RAID to use the extra space available in the partition with something like: mdadm --grow /dev/md5 --size=max
o Next unmount the partition and use e2fsck -f to check it
o Then you can resize the filesystem on the RAID with something like: resize2fs /dev/md5
o Finally mount the drive again.
Ah, trivial then! Actually, it does sound fairly straightforward, and at this point I'll have two 500GB drives with full backups anyway!
Of course I was lucky in that I didn't need to resize /var, but I did hit a clash doing this to /, and I think I used a boot CD for that, although my notes seem to be missing that bit for some reason!
There's nothing but data on these drives. All the O/S is on a separate (non-RAID) drive.
I would assume that the same process could be used with other RAID formats, but I've not played with migrating between RAID types, which I would suspect would be more complex given the differing nature of the layout of data on the disks used.
There is the additional problem of having sufficient capacity to install all the disks while I work on them. The server has a maximum of 4 drives on SATA, and 2 on IDE (one of which is my O/S drive, the other a DVD-ROM).
Mark Rogers wrote:
Ah, trivial then! Actually, it does sound fairly straightforward, and at this point I'll have two 500GB drives with full backups anyway!
If only!
It turns out that I had 4 drives as RAID5, not 2-off RAID1 pairs.
Hopefully this will still work out OK, though. I've got 4 new 1TB drives, and I'm swapping them in one at a time, partitioning each as a single partition spanning the whole disk (partition type "fd"), then adding it to the array with mdadm /dev/md0 -a /dev/sda1 (etc).
Thus far I am 10 mins into the process, with /proc/mdstat suggesting I need about 3hrs for this drive alone, so I'm looking at somewhere through tomorrow before I complete this task.
PS: To recall a recent thread, these new drives came from TekHeads and were sensibly packaged!
OK, a summary for anyone who cares....
I started with 4-off 500GB SATA drives in a software RAID5 configuration.
My target was the same data on 4-off 1TB SATA drives (RAID5).
The process was:
- Replace the first drive (sda) with a new drive. This causes the RAID array to run in a degraded state.
- Partition the new drive (1 single 1TB partition, type "fd")
- Add the new partition to the array (mdadm /dev/md0 -a /dev/sda1)
- Wait until the array has rebuilt (watch cat /proc/mdstat), which in my case took about 2.5hrs
- Replace the second drive (sdb), and repeat the above, waiting another 2.5hrs
- Replace the third drive (sdc), and repeat the above, waiting another 2.5hrs
- Replace the fourth drive (sdd), and repeat the above, waiting only 1.5hrs this time
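For each drive, that works out to something like the following (shown for sda; using fdisk for the partitioning is an assumption, any tool that can create a type-fd partition will do):

    # after shutting down and physically swapping in the new drive:
    # create a single partition spanning the disk, type fd (Linux raid autodetect)
    fdisk /dev/sda
    # add the new partition to the degraded array
    mdadm /dev/md0 -a /dev/sda1
    # watch the rebuild until it completes
    watch cat /proc/mdstat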
Note that the array is available for use all of this time, so the only actual downtime is shutting down to swap disks. That said, I left the array unmounted for the full period, as that was not a problem for me and I wanted to be sure that, when I had finished, the four old drives would still comprise a full array if I needed them as a backup.
It was interesting that the newer drives only had a significant impact on the rebuild performance (and therefore, I assume, on the performance of the array as a whole) once I had a full set of new drives in place - i.e. the array was only as fast as its slowest drive. I was expecting the speed to be more of an average.
So, now I have 4 new disks comprising a complete array, but only using 500GB from each drive, as they had been initially. So:
- Resize the array (mdadm --grow /dev/md0 --size=max)
- Wait for the array to rebuild (watch cat /proc/mdstat) - about 1.5hrs for me
- Unmount the array (if not already done: umount /dev/md0)
- Check the filesystem (e2fsck -f /dev/md0) - about 1hr
- Resize the filesystem (resize2fs -p /dev/md0) - about 0.5hr
- Check the filesystem again (OK, so I get scared sometimes) (e2fsck -f /dev/md0) - about 1hr
- Remount (mount /dev/md0)
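If you want to double-check the result once it's remounted, something like the following will confirm the new array and filesystem sizes (an optional extra rather than part of the process above):

    # report the array size, level and state
    mdadm --detail /dev/md0
    # confirm the mounted filesystem now shows the extra space
    df -h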
So now I have 3TB of disk space across 4x1TB disks with very little necessary downtime (if everything is backed up before you start!). I think in a year or two I will buy some new drives of whatever is a good cost-effective size and repeat the process, this time just shutting down the server each night for 4 nights in a row to install the new disks, but otherwise leaving it running as a file server while the upgrades take place.
Thanks Paul Tansom for the guts of the process.