I have a site with a cheapie SSL certificate from 123-Reg. It has been working for the past couple of years but they've just sent me a new certificate which is crashing Apache when I try to install it.
The Apache config for the virtual host includes:

SSLEngine on
SSLCertificateFile /blah/blah/mysite.123.cert
SSLCertificateKeyFile /blah/blah/mysite.123.key
SSLCaCertificateFile /blah/blah/AlphaSSLroot.crt
This differs from the instructions on 123-Reg's site:
http://www.123-support.co.uk/support/answers/installing-your-ssl-apache-open... .. but it works. I don't know the reasons behind, or the implications of, the differences.
When I drop in the replacement mysite.123.cert, I get the following in error_log:

[error] Unable to configure RSA server private key
[error] SSL Library Error: 185073780 error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch
Frustratingly, "apache2 -t" does not report any config errors but this problem crashes the server.
On 23/06/10 09:10, Mark Rogers wrote:
I have a site with a cheapie SSL certificate from 123-Reg. It has been working for the past couple of years but they've just sent me a new certificate which is crashing Apache when I try to install it.
Fixed it myself.
123-Reg helpfully email you the new certificate at renewal time, but they don't email you the key, nor do they tell you to update it in the certificate email. I can see why emailing it would be insecure, but if you only replace one then (obviously) it breaks. I say obviously, although I obviously didn't work it out myself instantly. Generally speaking, though, if you have a new padlock then it's not much use without a new key.....
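For anyone hitting the same "key values mismatch" error, the pairing can be checked before touching Apache by comparing the RSA modulus of the certificate and the key; if they differ, the files don't belong together. A sketch (the throwaway key/cert here just demonstrates the check; in practice point the two openssl commands at your real mysite.123.cert and mysite.123.key):

```shell
# Make a throwaway matching key + self-signed cert purely to demonstrate
# the check (substitute your real cert and key paths in practice)
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=mysite.example" \
    -keyout "$dir/mysite.key" -out "$dir/mysite.crt" 2>/dev/null

# A certificate and key match if and only if their moduli are identical
cert_mod=$(openssl x509 -noout -modulus -in "$dir/mysite.crt")
key_mod=$(openssl rsa -noout -modulus -in "$dir/mysite.key")

if [ "$cert_mod" = "$key_mod" ]; then
    echo "MATCH"
else
    echo "MISMATCH - this pair would give X509_check_private_key errors"
fi
```

Running the two -modulus commands against the renewed cert and the old key would have shown the mismatch immediately.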
On 23/06/10 09:10, Mark Rogers wrote:
Frustratingly, "apache2 -t" does not report any config errors but this problem crashes the server.
This has got me thinking....
Apache may be the world leader, but then Windows is the leading O/S on much the same basis, and Apache does have its flaws.
One of which is that it does seem quite easy to make a config error in one virtual server that brings down the whole server, and all the unrelated virtual servers with it. Testing the configuration doesn't always work either.
To me, an ideal server would run multiple virtual servers as independently as possible. That means that a fault in one shouldn't be able to bring down the whole server, but it also means that code running in one site should not be able to access directories belonging to other sites (in much the same way that, for example, PureFTP has a single ftpuser at the filesystem level but prevents one user from accessing another user's files).
Of course the biggest problem is that I have a server with a load of virtual servers on it (sharing an IP), so migrating to a new server means migrating all the sites in one go, although I could run some test sites on a different port initially. So this may all be pie-in-the-sky thinking, but what other web servers are worth a look?
On 23 June 2010 09:46, Mark Rogers mark@quarella.co.uk wrote:
On 23/06/10 09:10, Mark Rogers wrote:
Frustratingly, "apache2 -t" does not report any config errors but this problem crashes the server.
One of which is that it does seem quite easy to make a config error in one virtual server that brings down the whole server, and all the unrelated virtual servers with it.
Do you see any similar report in Apache's bug tracking system?
Because... *any* crash is a bug.
Srdjan
On 23/06/10 10:04, Srdjan Todorovic wrote:
Do you see any similar report in Apache's bug tracking system?
Because...*any* crash is a bug.
I should be more careful with phrases like "crash", it's a bit too subjective.
To my mind, if I have a running server, and make a config change (*any* change, could be complete garbage), then the sequence:

apache2 -t
/etc/init.d/apache2 reload

.. should either error at the -t test, or should be running after the reload.
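That sequence can be wrapped so the reload only ever happens when the test passes. A minimal sketch, as a generic helper (the command strings are the obvious Apache ones, but as this thread shows, -t does not catch everything, so this is a first line of defence rather than a guarantee):

```shell
# Run the reload command only if the config-test command succeeds.
# Intended usage: safe_reload "apache2 -t" "/etc/init.d/apache2 reload"
safe_reload() {
    test_cmd=$1
    reload_cmd=$2
    if $test_cmd; then
        # Test passed: go ahead with the reload
        $reload_cmd
    else
        # Test failed: leave the running server untouched
        echo "config test failed; reload aborted" >&2
        return 1
    fi
}
```

Wiring this into whatever triggers reloads (logrotate's postrotate script, for instance) would at least stop the class of overnight failures described below.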
I *thought* that a reload basically started a new thread with the new configuration, and as the old thread(s) finished handling their current connections they terminated and new threads started up as necessary. If that (vague) picture is correct, then if the first new thread is unable to start, the others should continue as they are and an error should be reported, rather than the existing threads being stopped. And this should be the case regardless of whether the configuration has been tested.
Over the years I have seen several things trigger this. For example: someone starts configuring a new virtual host, doesn't finish the job, and leaves the config file in an invalid state; the server then dies in the night when logrotate causes a reload (albeit that a -t test would probably have warned about it, had it been used).
I'm not sure that the spec of the -t test is to catch *every*thing that might cause Apache not to run, nor do I think that Apache's reload command is intended to cope with a mis-configuration (I just think it should!). So I'm looking for an alternative web server that thinks differently about its responsibilities, like (for example) trying harder to compartmentalise different virtual hosts so that they're less able to interfere with each other.
There may be a case to file an Apache bug against the -t test not picking up the specific certificate problem, though.
Maybe,
On 23/06/10 11:00, James Bensley wrote:
Maybe,
I know of Nginx, but have never used it. As something well suited to large high-traffic sites, I assumed that its virtual hosting capabilities would be weaker (it's great to know it hosts sites like WordPress, but I guessed it wouldn't have many other virtual hosts on the same box!)
Do you have any experience of Nginx to know how (differently) it handles name based virtual hosts?
My first post on this list... be gentle! :)
To my mind, if I have a running server, and make a config change (*any* change, could be complete garbage), then the sequence:

apache2 -t
/etc/init.d/apache2 reload

.. should either error at the -t test, or should be running after the reload.
Rather than changing to another web server, perhaps it would be better to run an identically configured version of Apache on a test server so that you can try out configuration changes there?
Richard
On 23/06/10 11:01, Richard Parsons wrote:
Rather than changing to another web server, perhaps it would be better to run an identically configured version of Apache on a test server so that you can try out configuration changes there?
Yes, this would be the "correct" solution.
However, you can never make two systems identical; they'll need to have different IP addresses (and trying to bind to the wrong IP could trigger an error), they'll have different traffic hitting them (and incoming connections might be what triggers an error), etc. That's not to say it's without value, but in my experience it's as important to make sure a system recovers from an error as it is to try to avoid errors occurring.
MySQL has startup scripts which check that MySQL continues to run and restarts it otherwise, for example; there ought to be something similar for Apache (and I'm not saying there isn't, I just haven't found it if there is!) As far as I know MySQL doesn't have the ability to fall back to a known-good configuration but then MySQL's config doesn't change as often as Apache's does (I don't recall killing MySQL with a bad configuration anyway, but maybe that's selective memory).
Mark Rogers wrote:
However, you can never make two systems identical; they'll need to have different IP addresses (and trying to bind to the wrong IP could trigger an error), they'll have different traffic hitting them (and incoming connections might be what triggers an error), etc. That's not to say it's without value, but in my experience it's as important to make sure a system recovers from an error as it is to try to avoid errors occurring.
MySQL has startup scripts which check that MySQL continues to run and restarts it otherwise, for example; there ought to be something similar for Apache (and I'm not saying there isn't, I just haven't found it if there is!) As far as I know MySQL doesn't have the ability to fall back to a known-good configuration but then MySQL's config doesn't change as often as Apache's does (I don't recall killing MySQL with a bad configuration anyway, but maybe that's selective memory).
It is possible to kill MySQL with config (I've done it), but generally only when first installing/setting up - some of the same issues that pertain to Apache startup failure can apply (although it's much less likely to find its default port hijacked by anything else). I seem to remember though that it's a little more forthcoming about logging failures, especially if you run it directly to test (as in, not using mysqld_safe).
The big difference is that I would guess that 98% of people, once it's running, *never* mess with MySQL config ever again. In fact, most people who install MySQL through a package manager probably never mess with the config to start with, whereas with Apache you pretty much have to write some config to get it to do anything useful.
You're right too though that you can never make a dev and live system totally identical - issues like inode/file-handle limits, thread counts/child process limits or memory issues are hard to test outside of real-world environments. A fall-back to last-known-good would help you keep your existing live setup running, but even just being able to test config breakages on a dev system is a Good Thing.
On 23 Jun 09:46, Mark Rogers wrote:
On 23/06/10 09:10, Mark Rogers wrote:
Frustratingly, "apache2 -t" does not report any config errors but this problem crashes the server.
This has got me thinking....
Apache may be the world leader, but then Windows is the leading O/S on much the same basis, and Apache does have its flaws.
One of which is that it does seem quite easy to make a config error in one virtual server that brings down the whole server, and all the unrelated virtual servers with it. Testing the configuration doesn't always work either.
To me, an ideal server would run multiple virtual servers as independently as possible. That means that a fault in one shouldn't be able to bring down the whole server, but it also means that code running in one site should not be able to access directories belonging to other sites (in much the same way that, for example, PureFTP has a single ftpuser at the filesystem level but prevents one user from accessing another user's files).
Of course the biggest problem is that I have a server with a load of virtual servers on it (sharing an IP), so migrating to a new server means migrating all the sites in one go, although I could run some test sites on a different port initially. So this may all be pie-in-the-sky thinking, but what other web servers are worth a look?
No it doesn't, it means that you set up the current apache to proxy to the new server whilst migrating the sites. That way you can migrate a site at a time, and then when they're all done, turn off apache and bring the new server up on the correct port.
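Brett's proxy-during-migration setup might look roughly like this in the old Apache's config, per migrated site (the hostname, address and port here are illustrative assumptions; it needs mod_proxy and mod_proxy_http enabled):

```apache
<VirtualHost *:80>
    ServerName migrated-site.example
    # This site now lives on the new server, which listens on an
    # alternative port for the duration of the transition
    ProxyPreserveHost On
    ProxyPass        / http://192.0.2.10:8080/
    ProxyPassReverse / http://192.0.2.10:8080/
</VirtualHost>
```

Unmigrated sites keep their existing vhost blocks untouched, so each site can be moved and tested independently.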
Personally, I stick with apache2, it's still by far the most flexible, and other than the mod_proxy modules, I use the mod_fcgid and mod_wsgi modules, so that my (django) sites are run as a separate user to the main apache user (actually, each site is run as a separate user), and have mod_fcgid to do PHP if I get really desperate (this doesn't usually happen... but may do if I ever get round to installing a webmail system on there...).
nginx appears to be the current alternative webserver of choice, though.
Cheers,
On 23 June 2010 13:10, Brett Parker iDunno@sommitrealweird.co.uk wrote:
nginx appears to be the current alternative webserver of choice, though.
I have no personal experience with it, I just suggested it because it seems to be faster; functionality-wise I know nothing :)
http://blog.webfaction.com/a-little-holiday-present
http://www.joeandmotorboat.com/2008/02/28/apache-vs-nginx-web-server-perform...
On 23/06/10 13:10, Brett Parker wrote:
No it doesn't, it means that you setup the current apache to proxy to the new server whilst migrating the sites. That way you can migrate a site at a time, and then when they're all done, turn off apache and bring the new server up on the correct port.
It did cross my mind that this would be a reasonable option.
One concern I have is that I'll get part-way through the process and it'll stick like that (some sites proxied), because life (other jobs) gets in the way. So actually maybe I'm better off forgetting about the proxy option and working on the basis that I'll spend a weekend doing the migration, then deal with the fallout....
nginx appears to be the current alternative webserver of choice, though.
I'm thinking I should set up a test install and see how I get on. I looked through the config file format and it seems pretty sane.
Of course the whole discussion is irrelevant unless it is better at separating the virtual hosts - I'll investigate that through their website.
On Wed, 23 Jun 2010 09:46:32 +0100 Mark Rogers mark@quarella.co.uk allegedly wrote:
One of which is that it does seem quite easy to make a config error in one virtual server that brings down the whole server, and all the unrelated virtual servers with it. Testing the configuration doesn't always work either.
To me, an ideal server would run multiple virtual servers as independently as possible. That means that a fault in one shouldn't be able to bring down the whole server, but it also means that code running in one site should not be able to access directories belonging to other sites (in much the same way that, for example, PureFTP has a single ftpuser at the filesystem level but prevents one user from accessing another user's files).
Of course the biggest problem is that I have a server with a load of virtual servers on it (sharing an IP), so migrating to a new server means migrating all the sites in one go, although I could run some test sites on a different port initially. So this may all be pie-in-the-sky thinking, but what other web servers are worth a look?
Mark
I use, and can recommend, lighttpd. It is very lightweight (in both memory and cpu resource requirements), the configuration syntax is relatively straightforward, and it handles virtual servers pretty well. But my requirements are modest (low traffic for personal websites on a VPS).
However, your question got me thinking and I have done some checking with my own setup. I deliberately introduced some syntax errors in the configuration of a virtual server and tested with "lighttpd -t -f configfile" (the lighty equivalent of apache2 -t) and found that I could get an error through the testing (it reported "syntax ok") which would then cause the server to fail to reload.
So the problem is not confined to apache unfortunately.
Mick
---------------------------------------------------------------------
The text file for RFC 854 contains exactly 854 lines. Do you think there is any cosmic significance in this?
Douglas E Comer - Internetworking with TCP/IP Volume 1
http://www.ietf.org/rfc/rfc854.txt ---------------------------------------------------------------------
mick wrote:
However, your question got me thinking and I have done some checking with my own setup. I deliberately introduced some syntax errors in the configuration of a virtual server and tested with "lighttpd -t -f configfile" (the lighty equivalent of apache2 -t) and found that I could get an error through the testing (it reported "syntax ok") which would then cause the server to fail to reload.
So the problem is not confined to apache unfortunately.
It's the classic case of unit testing (seeing if the config passes basic syntax checks) versus integration testing (running the thing to check that the whole lot doesn't fall over). Short of actually starting the server up it's never going to know if it has, for instance, appropriate file-system permissions, config-specified directories in place (my favourite error is defining a custom log directory and failing to create it first - Apache fails to start, silently), unimpeded access to the port it's going to run on, etc, etc.
For this reason, if you're using Apache (or any webserver) for anything public-facing or "serious", it's always a good idea to have a mirror development system you can /really/ test config on first, before deploying it to the world.
Hth, Simon
---------------------------------------------------------------------
Simon Ransome http://nosher.net
Photography RSS feed - http://nosher.net/images/images.rss
On 23/06/10 22:24, simon ransome wrote:
mick wrote:
So the problem is not confined to apache unfortunately.
Mick: Thanks for trying this - it's really appreciated. Shame it didn't show lighttpd to be better than Apache in this respect, though.
Short of actually starting the server up it's never going to know if it has, for instance, appropriate file-system permissions, config-specified directories in place [...]
There are two ways to (potentially) achieve this.
One is to find a way to load a second instance of the server with the new configuration; this could never be complete (ie it couldn't bind to the same IP/port) but it could catch some obscure errors, leaving unit testing to catch the rest.
However the simplest would be to have two configurations, a "known good" and a "current". On startup, you load the current config, and if that fails you restart loading the known good config. Minimal downtime, and (I would have thought) relatively simple to achieve. Indeed this could probably be achieved through scripting; start Apache, if it fails within a few seconds switch to a known-good config and restart. If it fails after a longer period just restart it (to guard against crashes unrelated to the config). Maybe something like this already exists, I'll have to go hunting!
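The "known good" fallback could be sketched in a few lines of shell. This simulation uses temp files and a stub in place of real Apache configs and a real start command (the stub simply treats any config containing the word "good" as one that starts successfully), just to show the shape of the logic:

```shell
# Simulate the known-good fallback: a broken "current" config fails to
# start, so we revert to the saved known-good copy and start with that.
workdir=$(mktemp -d)
echo "good settings" > "$workdir/known-good.conf"
echo "broken settings" > "$workdir/current.conf"

# Stub for "start the server with this config": succeeds only if the
# config contains the word "good" (a real script would start Apache and
# check it is still alive a few seconds later)
try_start() { grep -q "good" "$1"; }

if try_start "$workdir/current.conf"; then
    active="current"
else
    # Current config failed: fall back to the last one that worked
    cp "$workdir/known-good.conf" "$workdir/current.conf"
    try_start "$workdir/current.conf" && active="known-good"
fi
echo "running with: $active"
```

A real version would also copy the current config over the known-good one after any successful start, so the saved copy tracks the last configuration that actually ran.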
Mark Rogers wrote:
However the simplest would be to have two configurations, a "known good" and a "current". On startup, you load the current config, and if that fails you restart loading the known good config. Minimal downtime, and (I would have thought) relatively simple to achieve. Indeed this could probably be achieved through scripting; start Apache, if it fails within a few seconds switch to a known-good config and restart. If it fails after a longer period just restart it (to guard against crashes unrelated to the config). Maybe something like this already exists, I'll have to go hunting!
Actually, that's such a reasonable idea and seemingly so trivial for Apache (or lighttpd, or a.n.other webserver) to implement it's a wonder that they don't already. After all, even *Windows* offers a fall-back to a last-known-good configuration if it reboots into a failed state after adding new hardware.
On 24/06/10 09:59, Simon Ransome wrote:
Actually, that's such a reasonable idea and seemingly so trivial for Apache (or lighttpd, or a.n.other webserver) to implement it's a wonder that they don't already.
I joined the nginx mailing list and posed the question, and got this response:
Igor Sysoev wrote:
You can run "nginx -t" before applying configuration: it catches almost all possible errors except some fatal ones: no memory, missing files, etc. If you send a -HUP signal to reconfigure and the new configuration is bad, then nginx continues to run with the old configuration, provided no fatal error happens. The SSL-certificate-without-key case is not a fatal error.
I believe Igor is one who would know!
So, it sounds like nginx has a sensible default and has just jumped up the leaderboard quite significantly!
On Wed, Jun 23, 2010 at 09:10:13AM +0100, Mark Rogers wrote:
Frustratingly, "apache2 -t" does not report any config errors but this problem crashes the server.
I fixed precisely this problem for someone a couple of months ago. They were in the process of changing their cert and only replaced the bit they were sent. They then also did something like change a site config and then restarted the server and... nothing.
I'm not sure if it's possible to get Apache to tell you this somehow, but I found the problem by disabling all sites apart from one simple one, and then realised it was only the SSL sites that were affected. At this point they mentioned they'd changed the cert earlier that day and we resolved it quite quickly from there.
Adam