I have an HP server with on-board RAID, i.e. it takes two drives and presents them to the OS as a single drive, but the management of that drive uses the main CPU, not a dedicated RAID controller.
If I use it, how would I be able to detect the RAID status?
Or should I just use software RAID and mdtools?
lspci reports: 00:11.0 RAID bus controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [Non-RAID5 mode] (rev 40)
If I decided to move to software RAID, what are my chances of migrating the existing install to RAID without re-installing?
On 29/06/11 16:20, Mark Rogers wrote:
I have an HP server with on-board RAID, i.e. it takes two drives and presents them to the OS as a single drive, but the management of that drive uses the main CPU, not a dedicated RAID controller.
Were you considering RAID 0 or RAID 1 here ?
If I use it, how would I be able to detect the RAID status?
Or should I just use software RAID and mdtools?
I'd say there is almost no contest there. Unless we are talking enterprise-quality SAS RAID on large arrays, there is little reason to bother with hardware-assisted RAID, particularly when we are talking about just two drives.
For some chipsets there are Linux management tools, but you still generally get worse performance than md arrays are capable of on decent hardware. Also, you are bound to your hardware, so if, say, the mainboard of your server fails, you aren't going to be able to mount the disks until you source another with the same chipset.*
(* This isn't always strictly true: the kernel's software RAID supports some of the metadata formats used by the more common chipsets, and with RAID 1 the individual members sometimes look like a standard partition on their own when the mirror is broken.)
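You can check what metadata, if any, a member disk carries with something like this (/dev/sda is just an example device name):

mdadm --examine /dev/sda

That will print any RAID superblock mdadm recognises on the device.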
lspci reports: 00:11.0 RAID bus controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [Non-RAID5 mode] (rev 40)
If I decided to move to software RAID, what are my chances of migrating the existing install to RAID without re-installing?
Assuming that you have somewhere to store the data whilst you are repartitioning, I'd say very high.
On 29/06/11 20:53, Wayne Stallwood wrote:
Were you considering RAID 0 or RAID 1 here ?
RAID1
I'd say there is almost no contest there. Unless we are talking enterprise-quality SAS RAID on large arrays, there is little reason to bother with hardware-assisted RAID, particularly when we are talking about just two drives.
You're not one to sit on the fence are you, Wayne? :-)
For some chipsets there are Linux management tools, but you still generally get worse performance than md arrays are capable of on decent hardware. Also, you are bound to your hardware, so if, say, the mainboard of your server fails, you aren't going to be able to mount the disks until you source another with the same chipset.*
This has been my argument in the past so I'm glad it's backed up by someone more knowledgeable!
Assuming that you have somewhere to store the data whilst you are repartitioning, I'd say very high.
I was thinking of the second disk :-)
At the moment I'm using the "hardware" RAID. I should be able to remove one disk from the array and still boot from the other, then create a new degraded md array with the second disk, copy everything across, boot from the md array, and finally add the first disk to the array.
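In outline I'm imagining something like this (completely untested on my part, and assuming the disks end up as /dev/sda and /dev/sdb with a single root partition):

# create a degraded RAID1 with only the second disk; "missing" reserves the other slot
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
rsync -aAXx / /mnt/              # copy the running system across
# ...fix /etc/fstab and the boot loader, reboot onto the md array, then:
mdadm --add /dev/md0 /dev/sda1   # add the first disk and let it resync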
That sounds fine in theory, but there are several stumbling blocks in practice due to my lack of knowledge, which leads me towards a fresh install, and that isn't a great way to learn! (At present there's very little on the box other than a fresh install, so if wiping it is the way to go that's what I'll do.)
I've got to say, I agree with everything said here.
I started out with RAID5 on a proper hardware Adaptec RAID card, and it was a bit flaky and Linux support wasn't great. I moved to mdadm and sold the card (which gave me the cash for some bigger drives) and it's been rock solid. I moved the server from Debian to Ubuntu and have had 3 motherboard upgrades since I started. The array has just moved across with no hassle (it's used as a data drive; I have the system on another disk).
Matt
On 30/06/11 09:05, Mark Rogers wrote:
At the moment I'm using the "hardware" RAID. I should be able to remove one disk from the array and still boot from the other, then create a new degraded md array with the second disk, copy everything across, boot from the md array, and finally add the first disk to the array.
That being the case, I would go about it in a similar way to this (adjusting the instructions as needed if you are running grub2):
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
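For grub2 the main adjustment is making sure the boot loader ends up on both disks, so the machine can still boot with either member missing. Something like this (device names are examples, adjust to suit):

grub-install /dev/sda
grub-install /dev/sdb
update-grub    # regenerate grub.cfg so it references the md array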
That sounds fine in theory, but there are several stumbling blocks in practice due to my lack of knowledge, which leads me towards a fresh install, and that isn't a great way to learn! (At present there's very little on the box other than a fresh install, so if wiping it is the way to go that's what I'll do.)
Well, it sounds like you don't have much to lose apart from some time by trying to do it in situ first, and by trying it this way you will learn pretty much everything you need to know about software RAID on Linux, as you will be doing it all manually rather than letting the installer do all the work for you. I say back up what is needed on that box and give it a go... it's not irreversible until you add the final member from your original array anyway.
On 01/07/11 00:18, Wayne Stallwood wrote:
On 30/06/11 09:05, Mark Rogers wrote:
At the moment I'm using the "hardware" RAID. I should be able to remove one disk from the array and still boot from the other, then create a new degraded md array with the second disk, copy everything across, boot from the md array, and finally add the first disk to the array.
That being the case, I would go about it in a similar way to this (adjusting the instructions as needed if you are running grub2):
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
Oh I forgot to mention an important step...
Once you have it all working, get mail up and running on the box, then edit mdadm.conf and adjust MAILADDR to point somewhere you are likely to read the alerts. Redundant arrays are no use if you aren't made aware when the array is degraded because a member has died :)
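For example, in /etc/mdadm/mdadm.conf (the Debian/Ubuntu location; the address is obviously made up):

MAILADDR alerts@example.com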
Otherwise just divert root's mail to somewhere sensible, as I think it defaults to sending the alerts there. Then use
mdadm --monitor --test --oneshot /dev/md0
to check the alerts work (--oneshot sends the test alert and exits rather than staying running as a monitor).
On 01/07/11 00:18, Wayne Stallwood wrote:
Well, it sounds like you don't have much to lose apart from some time by trying to do it in situ first, and by trying it this way you will learn pretty much everything you need to know about software RAID on Linux, as you will be doing it all manually rather than letting the installer do all the work for you. I say back up what is needed on that box and give it a go... it's not irreversible until you add the final member from your original array anyway.
In the interests of full disclosure, I have played with mdadm quite a bit in the past, but it's always been on data drives, not boot drives. There are a lot of things you can screw up without losing your data or making the server unbootable when you're only playing with data!
I'll read through the howtoforge howto and see if that gives me the confidence to experiment. The in-place approach has one thing going for it: the server has no optical drive and my USB one has been "borrowed", so re-installing has just got a bit harder, which pushes the balance towards the in-place upgrade. (Yes, I know I can install from USB, but I'm making excuses here!)