On 18 September 2016 at 09:29, Chris Green cl@isbd.net wrote:
It *might* but personally I'd install a new OS on the target machine on a new disk and then add the old disk from the other machine.
In my experience it is far more likely to work with Linux than with Windows. Whether it's a good opportunity to go for a fresh install and have a spring clean is a fair question, but it should "just work".
If you've ever installed Windows, particularly an older version, you'll know that it involves installing lots of drivers separately from the Microsoft installation disk. That can include the drivers needed to access the hard disks themselves, and even if Windows has the right drivers to at least perform the install, it may only install the ones that suit the actual hardware; after installation those may be replaced with manufacturer drivers. All of this means Windows can have a lot of problems if you try to boot a different set of physical hardware from the same disk, and quite often it won't get much further than a blue screen early in the boot process. This can be quite a pain if you decide to virtualise an old physical machine, which is why there are several "P2V" (physical-to-virtual) tools around that strip out old drivers, install new ones, and do other clean-up to prepare a Windows disk for new (in this case virtual) hardware. Even if it does boot, you may well find you need to install new drivers for everything, which can be a big issue if one of them is the network driver...
On top of that, especially in newer versions, the hardware will play a key role in validating the license, so even if everything is OK from a driver point of view Windows may still refuse to function.
Contrast all that with Linux: the drivers are all there within the OS, so if the hardware has changed it will simply use different ones, and there is no licensing to validate.
I'm currently raided (Raid 1, 2 disks mirrored). Presumably I should do this again?
Personally I think RAID is just a nuisance which will make a disk failure more difficult to clear up than otherwise. Make sure important stuff is backed up (e.g. /home and /etc) and then just have 'ordinary' disks.
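To make the "just back up /home and /etc" approach concrete, here's a minimal sketch using rsync. The destination path and the helper function name are my own invention, and a real setup would add error checking and probably rotation/snapshots:

```shell
# backup_dirs: mirror each given directory into a destination with rsync.
# -a preserves permissions, ownership and timestamps; --delete removes
# files from the copy that no longer exist in the source.
backup_dirs() {
    dest=$1; shift
    for d in "$@"; do
        rsync -a --delete "$d" "$dest/"
    done
}

# Example (hypothetical mount point for a backup disk):
# backup_dirs /mnt/backup /etc /home
```

Run as root if you want /etc ownership preserved; an unprivileged run will still copy what it can read.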
There is no substitute for backups. That said, I find software RAID (mdadm) with RAID 1 very easy to work with and very easy to migrate between machines. I would never use hardware RAID these days because I need to be sure I can access the data on different hardware if it comes to it, and I'd avoid RAID 5 etc., where no single disk contains a coherent set of data, but with mdadm RAID 1 I've never had a problem replacing disks or taking a disk out of a failed machine and reading its data on different hardware. LVM, on the other hand...
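For what it's worth, the disk-replacement routine with mdadm RAID 1 really is just a few commands. Device names here (/dev/md0, /dev/sdb1, /dev/sdc1) are placeholders for whatever your array and partitions actually are, and everything needs root:

```shell
# Mark the failing member as failed, remove it, then add the replacement.
# The array keeps running degraded throughout; the new disk resyncs
# automatically once added.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1

# Watch the rebuild progress.
cat /proc/mdstat

# On a different machine, a surviving member can usually be picked up with:
mdadm --assemble --scan
```

This is an operational sketch, not something to paste blindly; double-check device names against /proc/mdstat before failing anything.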