Hi ALUG,
I've been managing a Xen hypervisor for about nine months now and occasionally I get problems with DomUs running out of memory. The clients are all students and most of them are running WordPress blogs. The instances of OOM errors I've seen seem to be related to Apache and students serving up large image files. I'm thinking that this shouldn't really be happening; serving up a few ~1MB files shouldn't really be causing Apache to make the OS exhaust *all* available memory, should it? Even if it were servicing several requests simultaneously.
Some details:
* The host is a Dell PowerEdge x86_64 system with 32 GB RAM
* The host OS is Debian 6.0
* We're running Xen 4.0.1 from Debian
* The guests all run Debian 6.0
* Each guest has 15 GB of storage, 512 MB RAM, and 1 GB of swap
* We currently have ~40 guests running
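For context, each guest is defined by a fairly vanilla xm config file. The sketch below is illustrative (the name, paths and MAC are made up) rather than our actual file:

name    = 'student-vm'
memory  = 512
vcpus   = 1
kernel  = '/boot/vmlinuz-2.6.32-5-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.32-5-xen-amd64'
root    = '/dev/xvda2 ro'
disk    = [ 'phy:/dev/vg0/student-vm-disk,xvda2,w',
            'phy:/dev/vg0/student-vm-swap,xvda1,w' ]
vif     = [ 'mac=00:16:3E:00:00:01' ]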
Below is some console output from a DomU that suffered this problem earlier today. You can see that the OOM killer killed Apache. And I'm guessing it killed sshd too as I couldn't connect to the guest. I couldn't find any errors in Xen's logs.
Any thoughts on what might be going on here?
And come September we'll have a short window in which we could alter our setup. Any suggestions for better ways of providing virtual machines? Perhaps alternative hypervisors? Or some mechanism other than hypervisors?
Cheers, Richard
(Working from home: http://pic.twitter.com/NetsgOS2)
On Wed, Jul 11, 2012 at 04:53:06PM +0100, Richard Lewis wrote:
Below is some console output from a DomU that suffered this problem earlier today. You can see that the OOM killer killed Apache. And I'm guessing it killed sshd too as I couldn't connect to the guest. I couldn't find any errors in Xen's logs.
Any thoughts on what might be going on here?
The OOM killer doesn't necessarily kill what is using the RAM; it guesses at the best thing to kill. See http://linux-mm.org/OOM_Killer for more information on that.
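If you're curious which process the kernel currently rates as the juiciest target, every process exposes a badness score, and on squeeze's 2.6.32 kernel you can exempt anything you'd rather keep alive via oom_adj. A sketch, using sshd purely as an example:

# higher score = more likely to be picked by the OOM killer
cat /proc/$(pgrep -o sshd)/oom_score

# -17 (OOM_DISABLE) exempts the process entirely on 2.6.32-era kernels
# (run as root); newer kernels use oom_score_adj (-1000) for the same thing
echo -17 > /proc/$(pgrep -o sshd)/oom_adj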
What I'd suggest is that you put a job into cron that runs every 5 minutes and grabs a complete process list showing how much RAM each process is using. You should then be able to see where the memory is going and what is using it. My guess is that something has a memory leak, but it may not be Apache itself causing the problem.
Adam
At Wed, 11 Jul 2012 18:57:39 +0100, Adam Bower wrote:
On Wed, Jul 11, 2012 at 04:53:06PM +0100, Richard Lewis wrote:
Below is some console output from a DomU that suffered this problem earlier today. You can see that the OOM killer killed Apache. And I'm guessing it killed sshd too as I couldn't connect to the guest. I couldn't find any errors in Xen's logs.
Any thoughts on what might be going on here?
The OOM killer doesn't necessarily kill what is using the RAM; it guesses at the best thing to kill. See http://linux-mm.org/OOM_Killer for more information on that.
What I'd suggest is that you put a job into cron that runs every 5 minutes and grabs a complete process list showing how much RAM each process is using. You should then be able to see where the memory is going and what is using it. My guess is that something has a memory leak, but it may not be Apache itself causing the problem.
Thanks for the suggestion. For the last few hours I've had cron running a script along these lines every couple of minutes:
#!/bin/bash
LOG=/var/log/rjl-mem-info.log
NOW=`date +"%Y-%m-%d %H:%M:%S"`
echo "============================" >> $LOG
echo $NOW >> $LOG
# %MEM, TIME and COMMAND for the 30 processes using the most memory
ps aux | awk '{print $4, $10, $11}' | sort -rn | head -30 >> $LOG
free -m >> $LOG
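For the record, it's driven by an /etc/cron.d entry along these lines (the script path is illustrative):

*/2 * * * * root /usr/local/bin/rjl-mem-info.sh >/dev/null 2>&1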
So far, swap usage has remained almost entirely static at 20 MB, and RAM usage has fluctuated between about 400 and 500 MB. Of course, this has reminded me that these VMs are using pre-fork Apache, so I see 10 Apache processes. Typically, seven of them report that they're using ~9% of the RAM and three report ~8%, plus the parent Apache process at ~2%.
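(Doing the sums: 7 x 9% + 3 x 8% + 2% is roughly 89% of 512 MB, i.e. about 455 MB, which tallies with what free reports -- though with pre-fork a fair chunk of that will be shared pages counted more than once.)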
I'm not (so far) seeing any other processes using any significant amount of RAM, apart from MySQL. But that seems fairly static at 1.7%.
And, of course, it's also failed to misbehave :-(
Cheers, Richard
On Wed, Jul 11, 2012 at 10:11:03PM +0100, Richard Lewis wrote:
I'm not (so far) seeing any other processes using any significant amount of RAM, apart from MySQL. But that seems fairly static at 1.7%.
That suggests it may be a single event that causes something to eat memory all of a sudden. I'm afraid you'll just have to keep waiting in this case :)
Part of the point of this exercise is simply to see whether memory usage stays constant over time or suddenly starts growing, linearly or exponentially, which might help you narrow things down after it has gone wrong again.
Adam
On 11 July 2012 23:08, Adam Bower <adam@thebowery.co.uk> wrote:
On Wed, Jul 11, 2012 at 10:11:03PM +0100, Richard Lewis wrote:
I'm not (so far) seeing any other processes using any significant amount of RAM, apart from MySQL. But that seems fairly static at 1.7%.
That suggests it may be a single event that causes something to eat memory all of a sudden. I'm afraid you'll just have to keep waiting in this case :)
Part of the point of this exercise is simply to see whether memory usage stays constant over time or suddenly starts growing, linearly or exponentially, which might help you narrow things down after it has gone wrong again.
I had an OOM killer problem on one of my VMs hosted at Bytemark for weeks before I managed to trace the problem: bots trawling the Trac directory of an Apache site. Banning them with robots.txt fixed it. I had a script running every five minutes that checked memory usage and, if it was above a certain threshold, dumped all sorts of memory usage/process data to an output file. From this I could see it was always Apache (even though, as Adam says, the OOM killer was randomly killing anything it could to reclaim memory), and from there I started to monitor the Apache connections until I found it was always stuck listening to Googlebots. Memory usage would balloon from a few hundred MB to over 800 MB within minutes, at which point the OOM killer kicked in, making it very hard to pinpoint the problem.
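For reference, the banning boiled down to a robots.txt along these lines (the path is whatever your Trac sits under, and of course only well-behaved bots like Googlebot honour it):

User-agent: *
Disallow: /trac/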
I've still got scriptage if that helps.
Jenny
At Sun, 29 Jul 2012 11:04:26 +0100, Jenny Hopkins wrote:
I had an OOM killer problem on one of my VMs hosted at Bytemark for weeks before I managed to trace the problem: bots trawling the Trac directory of an Apache site. Banning them with robots.txt fixed it. I had a script running every five minutes that checked memory usage and, if it was above a certain threshold, dumped all sorts of memory usage/process data to an output file. From this I could see it was always Apache (even though, as Adam says, the OOM killer was randomly killing anything it could to reclaim memory), and from there I started to monitor the Apache connections until I found it was always stuck listening to Googlebots. Memory usage would balloon from a few hundred MB to over 800 MB within minutes, at which point the OOM killer kicked in, making it very hard to pinpoint the problem.
I've still got scriptage if that helps.
Thanks for sharing your experiences. I'll consider your reply evidence of interest in the thread and so provide a brief update.
The VM in question did eventually go on to misbehave in exactly the same way as before. I restarted it and checked my log file, which showed complete memory saturation (RAM and swap) by lots and lots of Apache processes. As a result, I had a look at the Apache configuration and decided to do some performance tuning, especially of the Keep-Alive settings, all of which were at their default values. The Keep-Alive timeout is possibly the most significant: I changed it from 15s to 3s, which will hopefully get Apache processes out of the way quicker in future.
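For anyone else running small pre-fork guests, the relevant bits of the config now look roughly like this. The KeepAliveTimeout is the change I actually made; the prefork limits are illustrative numbers for a 512 MB guest rather than gospel:

# /etc/apache2/apache2.conf (excerpt)
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3

# prefork limits -- illustrative: ~10 children at ~45 MB each is about all
# 512 MB will stand before swapping starts
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           12
    MaxRequestsPerChild 500
</IfModule>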
This particular VM has been running problem-free for around two weeks now. I suppose the take-home message is: don't try to blame your virtualisation hypervisor before you've tuned your pre-fork web server sensibly. I've effectively re-discovered something that has been common knowledge since about September 1993.
Richard