OK, a summary for anyone who cares....
I started with 4-off 500GB SATA drives in a software RAID5 configuration.
My target was the same data on 4-off 1TB SATA drives (RAID5).
The process was (a rough command-line sketch of one swap follows this list):
- Replace the first drive (sda) with a new drive. This causes the RAID array to run in a degraded state.
- Partition the new drive (a single 1TB partition, type "fd").
- Add the new partition to the array (mdadm /dev/md0 -a /dev/sda1).
- Wait until the array has rebuilt (watch cat /proc/mdstat), which in my case took about 2.5hrs.
- Replace the second drive (sdb) and repeat the above, waiting another 2.5hrs.
- Replace the third drive (sdc) and repeat the above, waiting another 2.5hrs.
- Replace the fourth drive (sdd) and repeat the above, waiting only 1.5hrs this time.
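For reference, one swap looks roughly like this at the command line. This is only a sketch of the steps above; it assumes the new disk has already been physically swapped in and appears as /dev/sda, and that the array is /dev/md0 - adjust device names for your own setup.

    # after physically swapping in the new disk, md0 is running degraded
    fdisk /dev/sda               # create one partition covering the disk, type "fd" (Linux raid autodetect)
    mdadm /dev/md0 -a /dev/sda1  # add the new partition; the array starts rebuilding onto it
    watch cat /proc/mdstat       # wait for the rebuild to finish before touching the next disk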
Note that the array is available for use throughout, so the only actual downtime is shutting down to swap disks. I left the array unmounted for the full period, though, as that was no problem for me and I wanted to be sure that, when I had finished, the four old drives would still comprise a complete array I could fall back on as a backup.
Interestingly, the newer drives only made a significant difference to rebuild performance (and therefore, I assume, to the performance of the array as a whole) once a full set of new drives was in place - i.e. the array was only as fast as the slowest drive in it. I was expecting the speed to be more of an average.
So, now I have 4 new disks comprising a complete array, but still only using 500GB from each drive, as they had been initially. So (again, a command-line sketch follows this list):
- Resize the array (mdadm --grow /dev/md0 --size=max).
- Wait for the array to rebuild (watch cat /proc/mdstat) - about 1.5hrs for me.
- Unmount the array if not already done (umount /dev/md0).
- Check the filesystem (e2fsck -f /dev/md0) - about 1hr.
- Resize the filesystem (resize2fs -p /dev/md0) - about 0.5hr.
- Check the filesystem again (ok, so I get scared sometimes) (e2fsck -f /dev/md0) - about 1hr.
- Remount (mount /dev/md0).
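Put together, the grow-and-resize stage looks something like the following. Again, this is just a sketch of the commands listed above; it assumes /dev/md0 carries an ext2/ext3 filesystem and has an /etc/fstab entry so it can be remounted by device name alone - adjust to suit.

    mdadm --grow /dev/md0 --size=max  # use the full size of each member partition
    watch cat /proc/mdstat            # wait for the resync of the new space to complete
    umount /dev/md0                   # make sure the filesystem is not mounted
    e2fsck -f /dev/md0                # force a filesystem check before resizing
    resize2fs -p /dev/md0             # grow the filesystem to fill the enlarged array
    e2fsck -f /dev/md0                # optional second check, for peace of mind
    mount /dev/md0                    # remount (assumes an fstab entry for md0)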
So now I have 3TB of disk space across 4x1TB disks with very little necessary downtime (if everything is backed up before you start!). I think in a year or two I will buy some new drives of whatever is a good cost-effective size and repeat the process, this time just shutting down the server each night for 4 nights in a row to install the new disks, but otherwise leaving it running as a file server while the upgrades take place.
Thanks Paul Tansom for the guts of the process.