On 10/12/2008 16:56:07, Chris G wrote:
OK, problem solved, or at least moved elsewhere. It appears that Firefox on my work system is the culprit; I suspect it's taking a very long time to unzip compressed HTML. I tried IE in my VMware guest and it works impressively quickly on the same pages that hang up Firefox.
So the question changes to: does anyone know why Firefox would be very slow with gzipped HTML?
I would try out Firefox with gzipped HTML where the page concerned is fairly big but static (from the server's point of view).
I am wondering if the problem is that the Python code generates the HTML in very small pieces, with the result that the document arrives as a very large number of TCP packets which are handed to Firefox one by one.
You could check this directly with a packet sniffer, but if Firefox loads a compressed but static page much faster, that would tend to support this hypothesis.
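As a way of setting up that comparison, here is a minimal sketch (not your actual setup) that serves the same large HTML file gzip-compressed either as one buffered body or deliberately split into many tiny chunks. The file name "bigpage.html", the port, and the 64-byte chunk size are all arbitrary assumptions for the experiment.

    # Minimal sketch: serve a large static page gzipped, either buffered
    # as one body ("/") or in many small chunks ("/chunked"), to compare
    # how Firefox handles the two cases.
    import gzip
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # "bigpage.html" is a placeholder for whatever large page you test with.
        with open("bigpage.html", "rb") as f:
            body = gzip.compress(f.read())

        headers = [("Content-Type", "text/html; charset=utf-8"),
                   ("Content-Encoding", "gzip")]

        if environ.get("PATH_INFO") == "/chunked":
            # Mimic a script that emits output in many small pieces; whether
            # each yield becomes its own TCP segment also depends on the
            # server's buffering and on Nagle's algorithm.
            start_response("200 OK", headers)
            return (body[i:i + 64] for i in range(0, len(body), 64))

        # Single buffered response: one Content-Length, one large body.
        headers.append(("Content-Length", str(len(body))))
        start_response("200 OK", headers)
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()

Loading http://localhost:8000/ and http://localhost:8000/chunked side by side in Firefox (and watching both in a packet sniffer if you like) should show whether it is the chunking, rather than the gzip itself, that slows Firefox down.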
Regards, Steve.