Having bought myself a shiny new Lenovo Thinkserver https://www.serversdirect.co.uk/p/1182461/lenovo-thinkserver-ts150-intel-pen...
and tracked down all the cables and caddies required to make it run... I started installing *buntu on it, then had a thought, stopped, and now I'm not sure of the best way to go. It can run in UEFI mode or traditional BIOS mode. Any thoughts? I've never had a machine with UEFI. Is that the way to go? Any gotchas?
Any thoughts or experience appreciated.
Cheers Steve
Hoping to have a go at this this afternoon. Would appreciate any comments :-)
Steve
On 26/01/18 09:22, steve-ALUG@hst.me.uk wrote:
Hoping to have a go at this this afternoon. Would appreciate any comments :-)
Well, I dove in...
I let the install of 64-bit *buntu progress. It seems it went with BIOS mode. I have two disks. I got part way into the install process, let the installer set up partitions the way it wanted to, wrote them to disk, then quit the install. I then used a live CD to set up both disks with the same partition layout (I'm setting up RAID 1 - mirroring).
I used
sfdisk -d /dev/sda > part_table
edited part_table, removed all uuids, changed sda to sdb then
sfdisk /dev/sdb < part_table
& voilà - identically sized (unformatted) partitions on both disks.
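If I had to do it again I'd probably script the edit rather than do it by hand; a minimal sketch, assuming the usual sfdisk dump format where the UUIDs appear as ", uuid=..." fields (untested - check the dump before feeding it back in):

sfdisk -d /dev/sda > part_table
# retarget the dump at sdb, drop the disk's label-id line and the per-partition
# uuids so sfdisk generates fresh ones
sed -i -e 's|/dev/sda|/dev/sdb|g' -e '/^label-id/d' -e 's/, *uuid=[^,]*//' part_table
sfdisk /dev/sdb < part_table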
I then rebooted and restarted the installation process, and did manual partitioning. I deleted each partition in turn and recreated it, marking it as part of a RAID array. Deleting and recreating using the maximum available space meant the partitions ended up exactly the same size as before, but of the right type (RAID member).
I wrote the changes to disk, then did manage/create RAID: level 1, 2 active devices, 0 spares, matching up a partition from each disk.
I RAIDed:
md0 - the biosgrub "boot" partition (NB this is not the "/boot" partition/directory)
md1 - the / partition
md2 - the /home partition
plus some swap partitions.
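For reference, what the installer's "create MD device" step is doing is roughly the equivalent of this from a live environment (the partition numbers here are assumptions, not my actual layout):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # array for /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4   # array for /home
cat /proc/mdstat                                                         # watch the mirrors sync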
I then proceeded with the install, and then I hit a problem.
The disks, being >2TB, are too big for a traditional (MBR) partition table, so the installer chose a GPT partition table. Fair enough. BUT you can't make the "biosgrub" partition part of a RAID array - or at least as far as I could tell, as the installation failed with a grub error, something like:
grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.
grub-install: error: embedding is not possible, but this is required for RAID
I've had to delete md0 and reformat those partitions as normal fat32 partitions flagged as "biosgrub". Apparently you can fairly easily use grub-install to install the relevant files to the "backup" partition on the second disk.
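For what it's worth, making the second disk bootable in BIOS/GPT mode appears to be just a matter of pointing grub-install at it as well (a sketch, assuming both disks now carry a biosgrub-flagged partition):

grub-install /dev/sda    # embeds GRUB's core image into sda's BIOS boot partition
grub-install /dev/sdb    # ditto for the second disk, so it can still boot if sda dies
# remember to re-run this on a replacement disk after recreating its biosgrub partition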
This seems like a bit of a GOTCHA to me. You might RAID everything on the disks so that one disk failure still lets you carry on - except it won't boot if you haven't remembered to set up the backup biosgrub partition.
So I can:
a) Live with it.
b) Switch to UEFI mode. I gather the install process is different, though I've not tried it. Does this have similar gotchas?
c) Use the built-in "hardware RAID" that my new machine is supplied with. Everything I've read says Linux's RAID is better, because of the recovery tools that are available in case a drive fails.
I'm currently going with a). If that's mad, say so now - I haven't gone far enough into customising to mind backing out and starting again.
Any comments and/or experience, again, gladly accepted.
Cheers Steve
On 27/01/18 00:10, steve-ALUG@hst.me.uk wrote:
while not given_up
    Head.bang("BrickWall")
    Head.retract()
wend
OK, on a whim I thought: as the machine has RAID built in, I'll stop worrying about software RAID and go the hardware route.
It installed OK. Whilst installing, it said something like "I can see you're using RAID, do you want to load the drivers?", so I said yes. Then I could see there was an entry in /proc/mdstat (or similar) showing the RAID status. Great!
Then I rebooted, and it didn't.
So it's basically (hardware-assisted) software RAID. Although this time I installed in UEFI mode, I have the same problem: it seems something (GRUB2) needs a special partition (or two, in this case) that is not RAIDed to boot from.
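You can see what's really going on from a booted live system; a minimal sketch (the md device name is an assumption - firmware/IMSM arrays often show up as md126/md127):

cat /proc/mdstat            # the "hardware" array appears here, assembled by mdadm
mdadm --detail /dev/md126   # assuming that's the name it came up as
mdadm --detail-platform     # shows what the firmware RAID option ROM actually supports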
So the options are:
a) Non-RAIDed partition to boot from, rest RAIDed (BIOS or UEFI mode).
b) Extra disk to boot from, non-RAIDed; RAID the others.
Either way, I have a single point of failure and I have to mitigate it somehow.
Surely there must be a way around this SPOF? Am I missing something??
Any advice/comments please to the usual email address (i.e. ALUG)
Steve
On 28 January 2018 at 17:40, steve-ALUG@hst.me.uk wrote:
So the options are:
a) Non-RAIDed partition to boot from, rest RAIDed (BIOS or UEFI mode).
b) Extra disk to boot from, non-RAIDed; RAID the others.
Either way, I have a single point of failure and I have to mitigate it somehow.
Booting from an array created by mdadm is certainly possible and should avoid the SPOF issue.
It's been a while since I've done it; where I'm using software RAID it tends to be just for storage and I have a separate boot drive simply because there's nothing of substance on it, so when it fails I use it as an excuse to rebuild. Desktop machines where the O/S partition contains stuff I don't want to lose are backed up but not RAIDed so I've not needed to boot from software RAID for a couple of years.
For Ubuntu* I found the following which looks like it covers configuring RAID at installation time: https://help.ubuntu.com/community/Installation/SoftwareRAID
If you have a desktop kicking around with something like VirtualBox installed you can of course play with this in a virtual machine before messing around with real hardware. I recall it being a bit tricky, but only because it was outside my knowledge; otherwise it was fairly straightforward.
*Other distros are available, of course.
On 29/01/18 09:36, Mark Rogers wrote:
Booting from an array created by mdadm is certainly possible and should avoid the SPOF issue.
Yes & no. My current machine boots fine from MDADM. The new one (unless I'm missing something) doesn't.
Differences: the old one is 32-bit with 2TB disks; the new one is 64-bit with >2.2TB disks.
I think the disk size is the problem. The old machine has a "DOS"-style (MBR) partition table. GRUB2 embeds its boot loader in the gap just after the MBR, and consequently can boot from that.
On the new machine the disks are too big for that style of partition table. I could perhaps ignore the extra space, but I did buy bigger disks and wanted to use them, so I consequently have to use GPT partition tables.
If I boot the machine in BIOS mode, a BIOS boot partition is required in order to boot. It appears to be impossible to boot from this if it is (software) raided. E.g. https://serverfault.com/questions/749274/is-it-possible-and-wise-to-put-the-...
If the machine boots in UEFI mode, a partition called /boot/efi is required. It appears that this partition cannot be (software) raided. E.g. https://askubuntu.com/questions/66637/can-the-efi-system-partition-be-raided
BUT, it seems someone found a way: https://askubuntu.com/questions/660023/how-to-install-ubuntu-14-04-16-04-64-... BUT note that in that approach the UEFI partition is NOT RAIDed, just cloned using dd.
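As I understand it, the dd-clone approach amounts to something like this (device names and the loader path are assumptions for a stock Ubuntu EFI install, and the copy needs re-doing whenever GRUB/shim gets updated):

dd if=/dev/sda1 of=/dev/sdb1 bs=1M    # clone the EFI System Partition to the second disk
# add a firmware boot entry pointing at the clone
efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu (sdb)" -l '\EFI\ubuntu\shimx64.efi'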
It's been a while since I've done it; where I'm using software RAID it tends to be just for storage and I have a separate boot drive simply because there's nothing of substance on it, so when it fails I use it as an excuse to rebuild. Desktop machines where the O/S partition contains stuff I don't want to lose are backed up but not RAIDed so I've not needed to boot from software RAID for a couple of years.
Fair enough, but I wanted the whole thing RAIDed. I don't particularly want a separate boot disk, as we're back to a SPOF again.
For Ubuntu* I found the following which looks like it covers configuring RAID at installation time: https://help.ubuntu.com/community/Installation/SoftwareRAID
Thanks for looking. I have used that guide in the past. In fact I may have used it when I set up my previous machine. Unfortunately it doesn't help with this situation.
If you have a desktop kicking around with something like VirtualBox installed you can of course play with this in a virtual machine before messing around with real hardware. I recall it being a bit tricky, but only because it was outside my knowledge; otherwise it was fairly straightforward.
I don't have a machine around with enough guts to run VirtualBox, but I do have the new machine, which is not yet set up to do anything, so it's not a problem using this to try things.
*Other distros are available, of course.
Of course! Thanks for replying, Mark.
SO, I've managed to get a system set up with a UEFI partition that's cloned using dd, / and /home RAID1'd with btrfs RAID, and I'm going to set up a swap partition on mdadm (see the sketch below).
But I could instead go to a system set up with a UEFI partition that's cloned using dd, and /, /home and swap all RAID1'd with mdadm.
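For the mdadm swap array, a minimal sketch (the md number and partition names are assumptions):

mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mkswap /dev/md3
swapon /dev/md3
# plus a line like "/dev/md3  none  swap  sw  0  0" in /etc/fstab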
Pros and cons:
MDADM: mature technology; will boot into degraded mode; I've used it before and understand it.
BTRFS: has the advantages of COW (copy-on-write) and resistance to bit-rot, but I've not used it before and don't understand it. With a disk failure it stops, apparently at the initramfs/busybox prompt, waiting for you to fix it before continuing.
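For the btrfs option, the health checks I'd be leaning on look something like this (assuming / is the btrfs RAID1 filesystem):

btrfs filesystem show /    # lists the devices making up the filesystem
btrfs device stats /       # per-device read/write/checksum error counters
btrfs scrub start /        # re-reads everything and repairs from the good copy where it can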
SO, do I persevere with btrfs or do I go back to mdadm?
Any more comments??
Steve