My system has two physical disks. When I installed Fedora 7 I gave it one of the two disks all to itself (having copied the data I wanted elsewhere). Fedora 7 uses LVM2 to manage its disks.
Now that the installation seems well sorted and stable, I'd like to use LVM for the other disk as well; it's currently partitioned the 'old fashioned' way.
I can (hopefully!) manage the issues of saving the data on the old disk before splatting it with LVM but I need some guidance on the best ways to use LVM - i.e. how to split up the disk etc.
Currently Fedora has made the one physical disk it has into one Physical Volume and has assigned it all to one Volume Group.
Presumably I'll make the other disk (when empty) into a second Physical Volume.
But then I wonder what to do next. Should I add the new Physical Volume to the existing Volume Group or would I be better off adding a new Volume Group? What are the pros and cons of the two approaches?
Then should I split the Volume Group into more than the two logical volumes it has at the moment? Currently it's just a huge / and a 2GB swap; /home is on the 'old' disk at present. Would I be better off putting at least /home into a separate Logical Volume (I think the answer to that is 'yes')?
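As far as I can tell, the actual commands for the two approaches would be something along these lines (taking /dev/sda as the disk to be converted once it's been emptied; the volume group name and sizes are just examples):

    pvcreate /dev/sda                    # turn the emptied disk into an LVM physical volume

    # Option 1: grow the existing volume group across both disks
    vgextend VolGroup00 /dev/sda

    # Option 2: keep the disks separate, in their own volume group
    vgcreate VolGroup01 /dev/sda

    # Either way, /home could then get its own logical volume, e.g.
    lvcreate -L 100G -n home VolGroup01
    mkfs.ext3 /dev/VolGroup01/home

but it's the choice between the two options (and how finely to carve things up into logical volumes) that I'm unsure about.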
Currently /etc/fstab is as follows:-

    /dev/VolGroup00/LogVol00   /           ext3    defaults         1 1
    LABEL=/boot                /boot       ext3    defaults         1 2
    tmpfs                      /dev/shm    tmpfs   defaults         0 0
    devpts                     /dev/pts    devpts  gid=5,mode=620   0 0
    sysfs                      /sys        sysfs   defaults         0 0
    proc                       /proc       proc    defaults         0 0
    /dev/sda1                  /slackroot  ext3    defaults         1 2
    /dev/sda3                  /home       ext3    defaults         1 2
    /dev/sda4                  /scratch    ext3    defaults         1 2
    /dev/VolGroup00/LogVol01   swap        swap    defaults         0 0
    /dev/sda5                  swap        swap    defaults         0 0
One other issue that occurs to me is that there must (I guess) be something on /dev/sda that tells grub that /boot is /dev/sdb1 and I'm a bit worried that this might get splatted and stop the system booting. Or is the boot block sacrosanct from lvm/fdisk etc.?
I can understand the basics of LVM, i.e. how to do things, but it would be useful to have a 'why' document.
On Thursday 11 October 2007 13:23:22 Chris G wrote:
I can understand the basics of LVM, i.e. how to do things, but it would be useful to have a 'why' document.
http://tldp.org/HOWTO/LVM-HOWTO/
would be a good place to start.
These HOWTO thingies are often pretty good. I found it through Google http://www.google.co.uk/ (also pretty good, if a little evil.)
Cheers, Richard
On Thu, Oct 11, 2007 at 01:52:24PM +0100, Richard Lewis wrote:
On Thursday 11 October 2007 13:23:22 Chris G wrote:
I can understand the basics of LVM, i.e. how to do things, but it would be useful to have a 'why' document.
http://tldp.org/HOWTO/LVM-HOWTO/
would be a good place to start.
I had already found that and read it! :-)
That's what allowed me to post my questions; it does a pretty good job of describing the 'how' of LVM but does very little on the 'why' front.
I need something that tells me good strategies for splitting one's disks, e.g.:-
Is it best to keep two large[ish] SATA disks as two Volume Groups or would they be better as one?
What's a reasonable strategy for splitting into logical volumes?
These HOWTO thingies are often pretty good. I found it through Google http://www.google.co.uk/ (also pretty good, if a little evil.)
That's how I found the HOWTO too.
----- Original Message -----
From: "Chris G" cl@isbd.net
To: main@lists.alug.org.uk
Sent: Thursday, October 11, 2007 2:57 PM
Subject: Re: [ALUG] LVM documentation and/or basic help wanted
On Thu, Oct 11, 2007 at 01:52:24PM +0100, Richard Lewis wrote:
On Thursday 11 October 2007 13:23:22 Chris G wrote:
I can understand the basics of LVM, i.e. how to do things, but it would be useful to have a 'why' document.
http://tldp.org/HOWTO/LVM-HOWTO/
would be a good place to start.
I had already found that and read it! :-)
That's what allowed me to post my questions; it does a pretty good job of describing the 'how' of LVM but does very little on the 'why' front.
I need something that tells me good strategies for splitting one's disks, e.g.:-
Is it best to keep two large[ish] SATA disks as two Volume Groups or would they be better as one?
What's a reasonable strategy for splitting into logical volumes?
I have only played a little with LVM2, but I found the following articles very helpful. I use Gentoo and these are very distro-oriented but I am sure the 'why' will transfer as-is:
http://www.gentoo.org/doc/en/articles/lvm-p1.xml http://www.gentoo.org/doc/en/articles/lvm-p2.xml
Hope this helps
Jim
On Thu, Oct 11, 2007 at 04:05:05PM +0100, Jim Rippon wrote:
That's what allowed me to post my questions; it does a pretty good job of describing the 'how' of LVM but does very little on the 'why' front.
I need something that tells me good strategies for splitting one's disks, e.g.:-
Is it best to keep two large[ish] SATA disks as two Volume Groups or would they be better as one?
What's a reasonable strategy for splitting into logical volumes?
I have only played a little with LVM2, but I found the following articles very helpful. I use Gentoo and these are very distro-oriented but I am sure the 'why' will transfer as-is:
http://www.gentoo.org/doc/en/articles/lvm-p1.xml http://www.gentoo.org/doc/en/articles/lvm-p2.xml
Thanks, that's useful; it still has quite a bit of 'how' but a bit more 'why'.
On Thu, 2007-10-11 at 14:57 +0100, Chris G wrote:
I need something that tells me good strategies for splitting one's disks, e.g.:-
Is it best to keep two large[ish] SATA disks as two Volume Groups or would they be better as one? What's a reasonable strategy for splitting into logical volumes?
Multiple physical devices in a single volume group scare me unless either the physical devices are in themselves fault tolerant or I really don't care about the data I am going to put in the resulting partitions.
Failure of a single device will result in all the partitions within the group probably being irrecoverably unmountable. So when a volume group comprises two devices you have doubled the chance of a disk failure wiping out your data.
That said, if you require logical volumes larger than your biggest disk then it is your only choice.
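If you do end up with one volume group spanning both disks, it is at least worth keeping an eye on which physical volume each logical volume actually lives on, e.g. with something like:

    pvs                    # physical volumes and the volume group each belongs to
    vgs                    # summary of each volume group
    lvs -o +devices        # which device(s) actually back each logical volume

so you know what a single disk failure would really take with it.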
On Thu, Oct 11, 2007 at 10:11:32PM +0100, Wayne Stallwood wrote:
On Thu, 2007-10-11 at 14:57 +0100, Chris G wrote:
I need something that tells me good strategies for splitting one's disks, e.g.:-
Is it best to keep two large[ish] SATA disks as two Volume Groups or would they be better as one? What's a reasonable strategy for splitting into logical volumes?
Multiple physical devices in a single volume group scare me unless either the physical devices are in themselves fault tolerant or I really don't care about the data I am going to put in the resulting partitions.
Failure of a single device will result in all the partitions within the group probably being irrecoverably unmountable. So when a volume group comprises two devices you have doubled the chance of a disk failure wiping out your data.
That said, if you require logical volumes larger than your biggest disk then it is your only choice.
OK, that's one thing to bear in mind then. Since my disks are 300GB each I can see no great disadvantage in setting them up as two separate volume groups.
On Fri, 2007-10-12 at 09:50 +0100, Chris G wrote:
OK, that's one thing to bear in mind then. Since my disks are 300GB each I can see no great disadvantage in setting them up as two separate volume groups.
Given that you already have 2 disks the same size, why not purchase a third and have a 600GB fault-tolerant array? Given the price of 300GB drives these days, if you have space in your case it must be worthwhile.
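As a very rough sketch (the device names are only placeholders, and this would of course destroy anything already on the disks), three 300GB drives in RAID5 with LVM on top would look something like:

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX /dev/sdY /dev/sdZ
    pvcreate /dev/md0                 # use the whole array as one physical volume
    vgcreate vg_raid /dev/md0
    lvcreate -L 200G -n home vg_raid  # carve out logical volumes as before

giving roughly 600GB of usable space that tolerates a single disk failure.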
On Fri, Oct 12, 2007 at 06:40:52PM +0100, Wayne Stallwood wrote:
On Fri, 2007-10-12 at 09:50 +0100, Chris G wrote:
OK, that's one thing to bear in mind then. Since my disks are 300GB each I can see no great disadvantage in setting them up as two separate volume groups.
Given that you already have 2 disks the same size, why not purchase a third and have a 600GB fault-tolerant array? Given the price of 300GB drives these days, if you have space in your case it must be worthwhile.
Well maybe! :-)
However, given the reliability of hard disks these days, is it really worth it? In the twenty or so years that I've had PCs I don't think I've ever lost data to a hard disk failure; anyway, everything that's important is backed up off-site.
On Fri, 2007-10-12 at 18:49 +0100, Chris G wrote:
However, given the reliability of hard disks these days, is it really worth it? In the twenty or so years that I've had PCs I don't think I've ever lost data to a hard disk failure; anyway, everything that's important is backed up off-site.
oooh there is a brave man, tempting fate like that :)
*kerchunk* *buzzzzz* *kerchunk*
There is also a performance gain to be had if it is done right, too.
On Sat, Oct 13, 2007 at 12:00:01AM +0100, Wayne Stallwood wrote:
On Fri, 2007-10-12 at 18:49 +0100, Chris G wrote:
However, given the reliability of hard disks these days, is it really worth it? In the twenty or so years that I've had PCs I don't think I've ever lost data to a hard disk failure; anyway, everything that's important is backed up off-site.
oooh there is a brave man, tempting fate like that :)
*kerchunk* *buzzzzz* *kerchunk*
I have had a couple of disks do that, but only after showing some signs of failure beforehand. Like one that was sticky starting and needed a quick thump to spin up; after about the second 'start with a thump' I copied all the data off it. It carried on working in non-critical roles for quite a while after that.
I've also had a couple which have developed a few bad blocks before rapidly deteriorating into uselessness; again, I've had ample time to remove all important data before the end.
There is also a performance gain to be had if it is done right, too.
That's true, but I'm very rarely disk performance bound - in fact I'm very rarely any sort of performance bound except the speed of my ADSL connection. Speeding up either my processor or my hard disk would probably save me 50ms per day! :-)
On Sat, 2007-10-13 at 11:28 +0100, Chris G wrote:
I have had a couple of disks do that, but only after showing some signs of failure beforehand. Like one that was sticky starting and needed a quick thump to spin up; after about the second 'start with a thump' I copied all the data off it. It carried on working in non-critical roles for quite a while after that.
Some go like that, but others go with no warning (no warning in that no noise was heard and no automated health monitoring was being run), or go like that faulty series of Fujitsus where one day they just stop appearing on the bus. Smartd won't predict impending electrical failure and nor will you get any noise. Also some drives go from the warning signs to total failure very quickly (as in overnight).
I've also had a couple which have developed a few bad blocks before rapidly deteriorating into uselessness; again, I've had ample time to remove all important data before the end.
Yes, and that is fine as long as you are monitoring such things. But in my experience it is far better to have systems that have a degree of tolerance to failures than to rely on something or someone noticing an impending failure (in a perfect world you have both).
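For the monitoring side, smartctl/smartd from smartmontools cover the mechanical warning signs, roughly along these lines (the mail address is just an example):

    smartctl -H /dev/sda              # quick overall health verdict for a drive
    smartctl -a /dev/sda              # full SMART attributes and error log

    # or let smartd watch everything and mail on trouble, via a line like this
    # in /etc/smartd.conf:
    #   DEVICESCAN -a -m root@localhost

though as above it still won't catch a sudden electrical death.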
On Sat, Oct 13, 2007 at 12:50:16PM +0100, Wayne Stallwood wrote:
I've also had a couple which have developed a few bad blocks before rapidly deteriorating into uselessness; again, I've had ample time to remove all important data before the end.
Yes, and that is fine as long as you are monitoring such things. But in my experience it is far better to have systems that have a degree of tolerance to failures than to rely on something or someone noticing an impending failure (in a perfect world you have both).
The trouble then is that you don't know that half your fault-tolerant RAID array is dead until the other half dies as well; the fault tolerance may mask the underlying failures.
It's horses for courses anyway; I don't have a need for an ultra-reliable 24/7 system, but I *do* need to protect some data quite carefully (company accounts and such). To protect the data I copy it off-site in two ways: one copy to my hosting provider's system and the other to CDs in the garage. If my Linux box expired totally tomorrow I'd possibly lose 24 hours of business data (easily redone) and some time to build a new system.
On Sat, Oct 13, 2007 at 01:29:02PM +0100, Chris G wrote:
On Sat, Oct 13, 2007 at 12:50:16PM +0100, Wayne Stallwood wrote:
Yes, and that is fine as long as you are monitoring such things. But in my experience it is far better to have systems that have a degree of tolerance to failures than to rely on something or someone noticing an impending failure (in a perfect world you have both).
The trouble then is that you don't know that half your fault-tolerant RAID array is dead until the other half dies as well; the fault tolerance may mask the underlying failures.
I've found Linux MD support quite good at correctly kicking a dead disk out of an array, whether it's failed due to a cable fault, an electrical disk issue or a mechanical fault. Also mdadm then emails me to let me know the array is degraded. For bonus points it also runs an array check on a monthly basis, just in case a failure /has/ been masked. I've never actually seen this report a problem though.
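For anyone wanting to replicate that, it boils down to something like the following (the mail address is an example, and most distros ship the monitor daemon and a monthly cron job already):

    # have mdadm watch all arrays and mail when one degrades
    mdadm --monitor --scan --mail=root@localhost --daemonise

    # force a full consistency check of md0 by hand (what the monthly job does)
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat                  # watch progress and overall array state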
J.
On Sat, 2007-10-13 at 13:29 +0100, Chris G wrote:
The trouble then is that you don't know that half your fault-tolerant RAID array is dead until the other half dies as well; the fault tolerance may mask the underlying failures.
Erm, I do; as soon as the array status changes it will mail me. This is just a standard function of mdadm.
Please tell me if there are any RAID systems around that, when properly configured, don't send some sort of notification when a device goes offline, so that I can avoid them. :)