Hi,
My in-home server is at the end of its tether, and I need to move it to new hardware (motherboard, PSU, case, fans, etc.) before the old one finally fails.
This is probably a very stupid question. Under Windows, in days of old (Win 95, 98 and XP, for instance), I've had some success just transferring the hard disks into the new machine and then installing the relevant drivers.
Would this work under Linux? (I'm running Xubuntu.)
If not, what's the best way to go? Installing the same version of Xubuntu on the new machine with the same packages as the old one, then copying over the data and config files? Copying everything from the old machine, booting from a live CD, installing over the top of the transferred data, and then tweaking or restoring the customised config files? Or something else?
I'm currently using RAID (RAID 1, two disks mirrored). Presumably I should do this again? I have a / and a /home partition. Again, should I stick with this?
Should I be using LVM (Logical Volume Management)? If so, how can I do it with RAID?
Any/all advice gratefully received!
Steve
On Sun, Sep 18, 2016 at 12:13:25AM +0100, steve-ALUG@hst.me.uk wrote:
Hi,
My in-home server is at the end of its tether, and I need to move it to new hardware (motherboard, PSU, case, fans, etc.) before the old one finally fails.
This is probably a very stupid question. Under Windows, in days of old (Win 95, 98 and XP, for instance), I've had some success just transferring the hard disks into the new machine and then installing the relevant drivers.
Would this work under Linux? (I'm running Xubuntu.)
It *might*, but personally I'd install a new OS on the target machine on a new disk and then add the old disk from the other machine. If /home is a separate partition then it's very easy to simply mount it as /home on the new machine, and all of your personal configuration will carry over. All you have to do then is work out what (if anything) you have changed in the base OS.
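That mount step can be sketched roughly like this (the device name below is an assumption; check yours with `lsblk` or `blkid`):

```shell
# Identify the old disk's partitions once it's fitted in the new box
# (or sitting in a USB dock):
lsblk -f

# Assumption: the old /home partition turns out to be /dev/sdb2.
# Mount it somewhere temporary first and sanity-check the contents:
sudo mount /dev/sdb2 /mnt
ls /mnt

# To make it permanent as /home, add a line like this to /etc/fstab,
# using the UUID reported by blkid (the UUID here is a placeholder):
# UUID=1234abcd-0000-0000-0000-000000000000  /home  ext4  defaults  0  2
```

Mounting by UUID rather than device name means the entry keeps working even if the disk gets a different /dev/sdX letter on the new hardware.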
I'm currently raided (Raid 1, 2 disks mirrored). Presumably I should do this again?
Personally I think RAID is just a nuisance which will make a disk failure more difficult to clear up than otherwise. Make sure important stuff is backed up (e.g. /home and /etc) and then just have 'ordinary' disks.
I have a / and a home partition. Again, should I stick with this?
Yes.
On 18 September 2016 at 09:29, Chris Green cl@isbd.net wrote:
It *might* but personally I'd install a new OS on the target machine on a new disk and then add the old disk from the other machine.
In my experience it is far more likely to work with Linux than with Windows. Whether it's a good opportunity to go for a fresh install to have a spring clean is a good question, but it should "just work".
If you've ever installed Windows, particularly the older versions, you'll know that it involves installing lots of drivers separately from the Microsoft installation disk. These can include the drivers required to access the hard disks, and even if Windows has the right drivers to at least perform the install, it may only install the ones that suit the actual hardware, and after the installation a driver might be replaced with a manufacturer's one. All of this means that Windows can have a lot of problems if you try to boot a different set of physical hardware from the same disk, and quite often it won't get much further than a blue screen early in the boot process. This can be quite a pain if you decide to try to virtualise an old physical machine, and as a result there are several "P2V" (physical-to-virtual) tools around which strip out drivers, install new ones, and do other cleaning up ready to move a Windows disk to new (in this case virtual) hardware. Even if it does boot, you may well find you need to install new drivers for everything, which can be a big issue if one of them is the network driver...
On top of that, especially in newer versions, the hardware will play a key role in validating the license, so even if everything is OK from a driver point of view Windows may still refuse to function.
Contrast all that with Linux; all the drivers are there within the O/S and if the hardware has changed it'll just use different drivers, and there is no licensing to validate.
I'm currently raided (Raid 1, 2 disks mirrored). Presumably I should do this again?
Personally I think RAID is just a nuisance which will make a disk failure more difficult to clear up than otherwise. Make sure important stuff is backed up (e.g. /home and /etc) and then just have 'ordinary' disks.
There is no substitute for backups. That said I find software raid (mdadm) and RAID1 to be very easy to work with, and very easy to migrate between machines etc. I would never use hardware RAID these days because I need to be sure I can access the data in different hardware if it comes to it, and I'd likely avoid RAID5 etc where each single disk doesn't contain a coherent set of data, but with mdadm RAID 1 I've never had a problem replacing disks or taking a disk out of a failed machine and accessing its data in different hardware. LVM on the other hand...
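Moving an mdadm RAID 1 pair between machines is usually just a matter of letting mdadm find the arrays again; a rough sketch (device names are assumptions):

```shell
# With both (or even just one) of the mirrored disks fitted in the
# new machine, scan for and assemble any md arrays found:
sudo mdadm --assemble --scan

# Check that the array came up, and whether it's degraded:
cat /proc/mdstat
sudo mdadm --detail /dev/md0   # assumption: the array appears as /dev/md0

# A single disk from a RAID 1 pair can also be mounted on its own
# if you just need the data off it, which is part of the appeal.
```

It's worth capturing the array definition on the new machine too (`mdadm --detail --scan` appended to /etc/mdadm/mdadm.conf) so it assembles consistently at boot.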
On Mon, 2016-09-19 at 09:47 +0100, Mark Rogers wrote:
On 18 September 2016 at 09:29, Chris Green cl@isbd.net wrote:
I'm currently raided (Raid 1, 2 disks mirrored). Presumably I should do this again?
Personally I think RAID is just a nuisance which will make a disk failure more difficult to clear up than otherwise. Make sure important stuff is backed up (e.g. /home and /etc) and then just have 'ordinary' disks.
There is no substitute for backups. That said I find software raid (mdadm) and RAID1 to be very easy to work with, and very easy to migrate between machines etc. I would never use hardware RAID these days because I need to be sure I can access the data in different hardware if it comes to it, and I'd likely avoid RAID5 etc where each single disk doesn't contain a coherent set of data, but with mdadm RAID 1 I've never had a problem replacing disks or taking a disk out of a failed machine and accessing its data in different hardware. LVM on the other hand...
I'm a big fan of RAID ever since I had both disks fail within a few days of one another in my SunBLADE 2000. Didn't lose a single byte. OTOH, for some reason I've never bothered implementing it on my current Linux machine.
** Mark Rogers mark@more-solutions.co.uk [2016-09-19 09:48]:
On 18 September 2016 at 09:29, Chris Green cl@isbd.net wrote:
It *might* but personally I'd install a new OS on the target machine on a new disk and then add the old disk from the other machine.
In my experience it is far more likely to work with Linux than with Windows. Whether it's a good opportunity to go for a fresh install to have a spring clean is a good question, but it should "just work".
<snip>
On top of that, especially in newer versions, the hardware will play a key role in validating the license, so even if everything is OK from a driver point of view Windows may still refuse to function.
Contrast all that with Linux; all the drivers are there within the O/S and if the hardware has changed it'll just use different drivers, and there is no licensing to validate.
I've done it several times with Linux and had no problems, though it isn't guaranteed. Thinking back some years to a scenario where it wouldn't have worked (although I didn't need to try), I had a server I was replacing that was running Debian, and the new (customer-supplied) hardware had nVidia chipsets throughout. Graphics and sound were no problem since, being a server, it had no need of them (no desktop), but I/O was a major issue since Debian didn't supply the drivers because of the licensing, etc. I spent a while trying, but ended up trying Ubuntu 6.06 (which sort of dates this!), and that is one of the things that pushed my switch of distro. If I'd just swapped disks it wouldn't have worked.
Windows-wise I have done it, but it has never gone nicely. One desktop I tried had two changes of hardware before the Windows boot had a significant enough kick to force a reinstall of the relevant drivers and get it working again (I used that boot so little I hadn't sorted it out in a year or so!). That said, there were still drivers and software kicking around that I couldn't uninstall, which caused the odd issue.
I'm currently raided (Raid 1, 2 disks mirrored). Presumably I should do this again?
Personally I think RAID is just a nuisance which will make a disk failure more difficult to clear up than otherwise. Make sure important stuff is backed up (e.g. /home and /etc) and then just have 'ordinary' disks.
There is no substitute for backups. That said I find software raid (mdadm) and RAID1 to be very easy to work with, and very easy to migrate between machines etc. I would never use hardware RAID these days because I need to be sure I can access the data in different hardware if it comes to it, and I'd likely avoid RAID5 etc where each single disk doesn't contain a coherent set of data, but with mdadm RAID 1 I've never had a problem replacing disks or taking a disk out of a failed machine and accessing its data in different hardware. LVM on the other hand...
I wouldn't use it instead of backup, but a software RAID mirror works very nicely under Linux to improve resilience and is just as easy to recover data from as a single disk setup. It also makes it easy to increase the size of the physical drives even without LVM - which I have done a few times in the past by dropping out one drive, installing a new one with larger partitions, sync'ing the data, swapping the second drive in and sync'ing and then expanding the partitions. :)
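That grow-by-swapping procedure can be sketched roughly like this (device and array names are assumptions, and you'd want backups first):

```shell
# Assumption: /dev/md0 is the RAID 1 array, built from /dev/sda1 + /dev/sdb1.

# 1. Drop one old drive out of the mirror:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# 2. Physically swap in a new, larger drive, create a larger partition
#    on it, then add it back and wait for the resync:
sudo mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat              # watch until the rebuild completes

# 3. Repeat steps 1-2 for the other old drive, then grow the array
#    into the extra space and resize the filesystem:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0       # for ext2/3/4 filesystems
```

The array stays usable (though unprotected) while each resync runs, which is what makes this doable without LVM.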
** end quote [Mark Rogers]
On 19/09/16 17:08, Paul Tansom wrote:
** Mark Rogers mark@more-solutions.co.uk [2016-09-19 09:48]:
On 18 September 2016 at 09:29, Chris Green cl@isbd.net wrote:
It *might* but personally I'd install a new OS on the target machine on a new disk and then add the old disk from the other machine.
In my experience it is far more likely to work with Linux than with Windows. Whether it's a good opportunity to go for a fresh install to have a spring clean is a good question, but it should "just work".
<snip>
On top of that, especially in newer versions, the hardware will play a key role in validating the license, so even if everything is OK from a driver point of view Windows may still refuse to function.
Contrast all that with Linux; all the drivers are there within the O/S and if the hardware has changed it'll just use different drivers, and there is no licensing to validate.
I've done it several times with Linux and had no problems, though it isn't guaranteed. Thinking back some years to a scenario where it wouldn't have worked (although I didn't need to try), I had a server I was replacing that was running Debian, and the new (customer-supplied) hardware had nVidia chipsets throughout. Graphics and sound were no problem since, being a server, it had no need of them (no desktop), but I/O was a major issue since Debian didn't supply the drivers because of the licensing, etc. I spent a while trying, but ended up trying Ubuntu 6.06 (which sort of dates this!), and that is one of the things that pushed my switch of distro. If I'd just swapped disks it wouldn't have worked.
Windows-wise I have done it, but it has never gone nicely. One desktop I tried had two changes of hardware before the Windows boot had a significant enough kick to force a reinstall of the relevant drivers and get it working again (I used that boot so little I hadn't sorted it out in a year or so!). That said, there were still drivers and software kicking around that I couldn't uninstall, which caused the odd issue.
I'm currently raided (Raid 1, 2 disks mirrored). Presumably I should do this again?
Personally I think RAID is just a nuisance which will make a disk failure more difficult to clear up than otherwise. Make sure important stuff is backed up (e.g. /home and /etc) and then just have 'ordinary' disks.
There is no substitute for backups. That said I find software raid (mdadm) and RAID1 to be very easy to work with, and very easy to migrate between machines etc. I would never use hardware RAID these days because I need to be sure I can access the data in different hardware if it comes to it, and I'd likely avoid RAID5 etc where each single disk doesn't contain a coherent set of data, but with mdadm RAID 1 I've never had a problem replacing disks or taking a disk out of a failed machine and accessing its data in different hardware. LVM on the other hand...
I wouldn't use it instead of backup, but a software RAID mirror works very nicely under Linux to improve resilience and is just as easy to recover data from as a single disk setup. It also makes it easy to increase the size of the physical drives even without LVM - which I have done a few times in the past by dropping out one drive, installing a new one with larger partitions, sync'ing the data, swapping the second drive in and sync'ing and then expanding the partitions. :)
** end quote [Mark Rogers]
Thanks for the comments folks. When I'm feeling brave, I'll give it a try, and I may see if I can afford some new disks too!
Incidentally (and off-topically), in days of yore, under Windows, when swapping disks from one Windows machine to another, if it didn't work first time I sometimes did a Windows install "over the top" of the existing installation. This tended to keep the existing configuration and fix any issues with drivers etc. I was an MSDN subscriber at the time, so had access to MSDN Windows disks, which were a bit more forgiving about doing "upgrade installs" than retail ones.
Doubt it'd work now, but thankfully, I don't need to try!
Cheers Steve
On 19 September 2016 at 17:08, Paul Tansom paul@aptanet.com wrote:
I had a server I was replacing that was running Debian, and the new (customer-supplied) hardware had nVidia chipsets throughout. Graphics and sound were no problem since, being a server, it had no need of them (no desktop), but I/O was a major issue since Debian didn't supply the drivers because of the licensing, etc. I spent a while trying, but ended up trying Ubuntu 6.06 (which sort of dates this!), and that is one of the things that pushed my switch of distro. If I'd just swapped disks it wouldn't have worked.
Indeed: this is the issue with non-free drivers. There are other situations too; the new PC may have hardware that the old PC's kernel doesn't support, for example. Neither tends to be a problem if the old PC has been kept up to date and only uses supplied drivers.
Video hardware can be an issue, because it's one area where non-free drivers are more common. But even so it'll usually boot to a command line from which it can be fixed. Modern systems have empty xorg.conf files and will adapt to changed hardware fairly well anyway.
I wouldn't use it instead of backup, but a software RAID mirror works very nicely under Linux to improve resilience
RAID is good protection against a disk failure. It's zero protection against accidental deletion, or malware, or a myriad of other things that backups protect from.
However, if a disk fails, having a system that keeps going regardless but notifies you, and which can relatively easily get back into a redundant state on the addition of a new disk (which can even be done without shutting down if you have hot-swap hardware), is pretty neat. In mostly-read situations it may also improve performance (as you have two drives to read from). It allows you to use two drives from different manufacturers so you're protected against batch or design faults. All that for the cost of a second disk, which is usually a fairly small cost.
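Getting back to a redundant state after a failure is along these lines with mdadm (device names are assumptions):

```shell
# Assumption: /dev/md0 has lost /dev/sda1; /dev/sdb1 is still healthy.

# Confirm the array is degraded and which member failed:
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# Remove the failed member, fit the replacement drive, copy the
# partition layout across from the good disk (GPT disks; sgdisk -R
# replicates the table, -G randomises the GUIDs afterwards):
sudo mdadm /dev/md0 --remove /dev/sda1
sudo sgdisk -R /dev/sda /dev/sdb
sudo sgdisk -G /dev/sda

# Add the new partition back; mdadm resyncs in the background and the
# system stays usable throughout:
sudo mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat
```

Pairing this with `mdadm --monitor` (or the MAILADDR line in mdadm.conf) is what gives you the "keeps going but notifies you" behaviour.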
On Sun, 2016-09-18 at 00:13 +0100, steve-ALUG@hst.me.uk wrote:
Hi,
My in-home server is at the end of its tether, and I need to move it to new hardware (motherboard, PSU, case, fans, etc.) before the old one finally fails.
This is probably a very stupid question. Under Windows, in days of old (Win 95, 98 and XP, for instance), I've had some success just transferring the hard disks into the new machine and then installing the relevant drivers.
Would this work under Linux? (I'm running Xubuntu.)
Yes.
If not, what's the best way to go? Installing the same version of Xubuntu on the new machine with the same packages as the old one, then copying over the data and config files? Copying everything from the old machine, booting from a live CD, installing over the top of the transferred data, and then tweaking or restoring the customised config files? Or something else?
After some 23 years doing Unix/Solaris/Linux things, what I do with my home stuff when I do a major upgrade is ...
- Buy new disk(s). I use this as an excuse to increase the size of the disks.
- Install the new disk(s), then install the O/S from scratch. This allows me to tidy up, if necessary. If you don't want to tidy up, but just want the easy route, you can save your current package list and reinstall it on the new O/S, thus:
dpkg --get-selections > dpkg_list.txt
Restore this using:
sudo dpkg --set-selections < dpkg_list.txt
sudo apt-get -y update
sudo apt-get dselect-upgrade
- I then copy my home directory off the old disk by mounting it in a USB/SATA dock on the new hardware. That also automatically gives me a full backup of the old system, since I still have the old disk(s).
- I also keep /home on a separate partition, since this makes "in place" upgrades much easier, should I ever want to do one. FWIW, I also have a /data partition. These two things make backups a lot easier, also.
Rgds,
H.