On 06 Mar 10:23, Mark Rogers wrote:
On 6 March 2013 09:20, Brett Parker <iDunno@sommitrealweird.co.uk> wrote:
Why are you wanting to migrate to nginx? I've seen many instances of nginx falling over and then not being able to be restarted cleanly, leading to all manner of fun.
Interesting, this isn't something I was aware of.
Serving statics on ftp.uk.debian.org actually was one of the things that we've watched it die from... ;)
I've only heard good things about nginx. Maybe I look in the wrong places!
There's plenty of people that love it, I just ain't one of 'em... which means that when I do hear bad things, I tend to remember 'em :)
(Also, I kinda like the sheer flexibility I've got with apache, so, erm... yeah :)
Were you using mod_php in apache? Are you planning on using the same system in nginx? I.e. you're still not separating users from each other and you still have all the issues of php to deal with, except now you're in a less tested environment with a relatively young and unproven webserver.
No, the plan is to move to FGM. This is one of the motivations for the move, although FastCGI isn't limited to nginx.
Do you mean FPM? As in PHP-FPM? But anyways - that's just fastcgi, really... and for that I'd use mod_fcgid, a small wrapper script, mod_suexec, and a separate small wrapper script for each user (the wrapper script potentially sets various envvars and ulimits, maybe even chroots them into a new and interesting place).
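A minimal sketch of what such a per-user wrapper might look like (the paths, limits, and values are illustrative assumptions, not anything from this thread - tune per customer):

```shell
#!/bin/sh
# Hypothetical per-user FastCGI wrapper for mod_fcgid + suexec.
# All values here are illustrative examples.
ulimit -v 262144                    # cap this user's PHP at 256 MB of address space
export PHP_FCGI_MAX_REQUESTS=500    # php-cgi exits after 500 requests, so slow leaks can't accumulate
export PHP_FCGI_CHILDREN=0          # let mod_fcgid manage the process count itself
exec /usr/bin/php-cgi
```

suexec then runs this script as the site's own user, so one customer's runaway PHP can't take the whole box with it.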
Customers tend not to test things, no matter how nicely you ask them. What actually happens is that they say "yeah, that's all fine!", then 2 weeks after you've changed things they go "oh, hang on, this bit is broken, and this, and this..."
Don't get me wrong, all sites would get a proper amount of testing by me too. By "proper" I mean extensive for commercial sites but rather less for the handful of "freebies" that always seem to accumulate... The server in question is mostly either internal or personal stuff, so a failure is not critical - I wouldn't use anything I didn't know in a production environment for important sites. This exercise was partly about trialling something new, and partly about learning its quirks so that if we do use it in production at a future date I know it well enough to do so.
Fair enough - I'd suggest waiting for the apache2-event-mpm to become more stable if you really want to use an event based webserver...
So, erm, the only thing that nginx is supposedly better at than apache is serving static sites... not a lot of yours are... erm... why are you moving to it again? (Please, don't tell me it's because "I heard of nginx and someone told me it was cool!")
I heard of nginx and, er, hmmm....
However, yes, nginx touts static serving as a major strength (although let's be fair: even a heavily PHP based site will still likely be serving substantial amounts of images and JavaScript/CSS files, so static performance is always relevant).
Most of which, if it's an even remotely busy site, will be in the filesystem cache, and once there, apache is going to be able to serve them fairly much as quickly anyways....
My biggest concern is the memory usage of Apache; in recent months I've had several issues due to running out of RAM on systems that really ought not have an issue, and memory footprint is my prime motivation for looking at alternatives. Yes, Apache can be tuned (and largely this is something I haven't done), but fundamentally the event model seems better than the options that Apache (2.2) gives me. To be fair I have no knowledge of the event model used by Apache 2.4 (I didn't know it existed until yesterday) but that would also trip a lot of the "relatively young and unproven" tests.
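To put rough numbers on the RAM problem, here's a back-of-envelope sizing sketch for a prefork setup (every figure is an assumed example, not a measurement from this thread - measure your own per-child RSS with something like ps or smem):

```python
# Back-of-envelope: how many prefork children fit before the box starts swapping.
# All numbers below are illustrative assumptions.
ram_mb = 512        # total RAM (a Raspberry-Pi-class box)
reserved_mb = 128   # kept back for the OS, database, etc.
per_child_mb = 24   # assumed resident size of one apache2 + mod_php child
max_clients = (ram_mb - reserved_mb) // per_child_mb
print(max_clients)  # a sane MaxClients for this box: 16
```

If MaxClients is left at the Debian default while each mod_php child is tens of MB, a burst of traffic alone is enough to push a small box into the OOM killer.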
Have you got limits set in PHP? Are large files being uploaded? Could it be that there's a bad PHP script that's leaking all over the place, and would using a frequently restarted fastcgi process limit the problems?
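With PHP-FPM, that "frequently restarted" behaviour comes from the pool's pm.max_requests knob; a hypothetical pool fragment might look like this (pool name and every value here are assumptions for illustration):

```ini
; Hypothetical php-fpm pool: recycle workers so a leaking script
; can't grow without bound. All values are illustrative.
[www]
pm = static
pm.max_children = 4                    ; hard cap on concurrent PHP processes
pm.max_requests = 500                  ; each worker exits after 500 requests
php_admin_value[memory_limit] = 128M   ; per-request PHP memory ceiling
```

pm.max_requests trades a little fork overhead for a guarantee that no worker lives long enough for a leak to matter.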
Aside from making sure I have enough RAM to run both alongside each other, any reason to avoid this approach?
Only as outlined above - i.e. why fix something that isn't broken.
Because what I have now is broken, i.e. it's not stable due to memory requirements. Every time I've had a major issue the logs have suggested that Apache caused the system to run out of memory. Don't get me wrong: I'm not just assuming that Apache is the underlying problem here, as it's likely to be the code that Apache is running (via mod_php or whatever) that is the real culprit. But it has prompted me to look at alternatives, and until now I'd not heard a bad word said against nginx.
OK - so, I'd be switching to a mod_fcgid based php setup, and limiting the resources that apache uses. We recently had an issue with mod_python (*sigh* - we are moving to mod_wsgi, and the world is becoming better, but first we have to get rid of the legacy crap...).
So, what one can do is limit the overall memory for a process; what I use here (in /etc/default/apache2) is:
ulimit -v 1048576
Which means that each apache process can only use up to 1G of memory (yes, that's a ridiculous amount, and I'd much rather it was much less)...
Then, instead of your server falling over and dying in a messy heap, apache goes "bum, that went wrong" and closes down that thread and fires up a new one (whilst returning a memory error to whatever was looking at the page that caused the issue).
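For what it's worth, ulimit -v is denominated in kilobytes, which is where 1048576 = 1 GiB comes from; you can sanity-check the units in a throwaway subshell:

```shell
# ulimit -v takes KILOBYTES; 1048576 KB = 1 GiB of virtual address space.
# Use a subshell so your interactive shell keeps its own limits.
bash -c 'ulimit -v 1048576; ulimit -v'   # prints 1048576
```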
Aside: Another motivation is that we do a lot of work with ARM based systems which rarely have more than 512MB (think Raspberry Pi sort of spec). We usually end up with Apache on these systems because it's what we know, but I really think that nginx/lighttpd/other would be a better fit when resources are tight.
Depends what you're serving from the Pi, really. But nginx wouldn't be my choice for a Pi... lighttpd, maybe, perhaps. Boa, maybe too. At that end I'd be looking for small "good enough" stacks for anything simple, not fully functioning web stacks.
Cheers,