Hi Guys
I have recently devoted a VM to Tails (tails.boum.org). That VM appears in a DNS round robin with five other servers. When I first offered the server, trials showed I could expect outbound traffic of around 20-30 GB per day. I can live with that. Unfortunately, the traffic has now jumped to 150-180 GB per day. I can't live with that; it will cost me too much.
So I need some advice on bandwidth shaping. Can anyone recommend any tools which will allow me to set limits so that I can throttle web traffic to around 30 GB per day (or 250 GB per week, or 1000 GB per month, for example)?
Cheers
Mick
---------------------------------------------------------------------
The text file for RFC 854 contains exactly 854 lines. Do you think there is any cosmic significance in this?
Douglas E Comer - Internetworking with TCP/IP Volume 1
http://www.ietf.org/rfc/rfc854.txt
---------------------------------------------------------------------
On 09/01/12 18:14, mick wrote:
> Hi Guys
> I have recently devoted a VM to Tails (tails.boum.org). That VM appears in a DNS round robin with five other servers. When I first offered the server, trials showed I could expect outbound traffic of around 20-30 GB per day. I can live with that. Unfortunately, the traffic has now jumped to 150-180 GB per day. I can't live with that; it will cost me too much.
> So I need some advice on bandwidth shaping. Can anyone recommend any tools which will allow me to set limits so that I can throttle web traffic to around 30 GB per day (or 250 GB per week, or 1000 GB per month, for example)?
I am not sure how you want to enforce the limit.
So do you propose to serve 30 GB of data over 24 hours and then stop/go offline?
Or do you wish to limit bandwidth so that it isn't possible to transfer more than 30 GB over 24 hours? In that case you need to throttle to about 2.8 Mb/s, since (30 GB x 8 bits per byte) / 86,400 seconds per day comes to roughly 2.8 Mb/s (assuming the utilisation is constant, which presumably it won't be).
It sounds like such a heavy continuous rate limit on the sort of service you are providing would make your contribution less useful. The only "clever" way I know of doing it slightly better is to use Committed Access Rate (CAR), which can at least give burst allowances to smooth out the peaks.
AFAIK you can do CAR-style rate limiting in Linux with the iproute2 tools (tc); see the sketch below. Last time I did it I was using a Cisco router.
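An untested sketch using tc's tbf (token bucket filter) qdisc; the interface name eth0 and the numbers are assumptions to adjust for your setup:

  # Cap egress at ~2.8 Mbit/s with a burst allowance (token bucket),
  # which roughly mimics a CAR committed rate plus burst.
  tc qdisc add dev eth0 root tbf rate 2.8mbit burst 64kb latency 400ms

  # Remove it again with:
  tc qdisc del dev eth0 root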
On Tue, 10 Jan 2012 21:26:19 +0000 Wayne Stallwood <ALUGlist@digimatic.co.uk> allegedly wrote:
> On 09/01/12 18:14, mick wrote:
>> So I need some advice on bandwidth shaping. Can anyone recommend any tools which will allow me to set limits so that I can throttle web traffic to around 30 GB per day (or 250 GB per week, or 1000 GB per month, for example)?
> I am not sure how you want to enforce the limit.
> So do you propose to serve 30 GB of data over 24 hours and then stop/go offline?
No, I don't want to go offline. What I have in mind is a throttle which will stop me ever reaching the point at which I might need to go offline.
And I am an idiot for not considering the obvious: throttling at the application level. I had been looking at tc and iptables mangling, or something like trickle, all of which looked horribly more complicated than the approach taken by tor, which allows you to simply set the acceptable bandwidth rate to some limit, plus set an accounting-period maximum of some total transfer per day, week, or whatever. And of course my webserver (lighttpd) allows something similar: just set the server limit to some chosen maximum transfer rate and, if necessary, also impose a per-connection maximum rate. See the snippets below.
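Roughly, the knobs in question; the values are illustrative (350 kB/s works out to about 2.8 Mb/s):

  # lighttpd.conf -- limits are in kilobytes per second
  server.kbytes-per-second     = 350   # whole-server cap
  connection.kbytes-per-second = 64    # cap per client connection

and on the tor side, in torrc:

  RelayBandwidthRate 350 KB     # steady-state relay rate
  AccountingMax 30 GB           # total traffic per accounting period
  AccountingStart day 00:00     # reset the meter daily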
I've just done that and tested pulling a 620 MB ISO, and it seems to work as expected.
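For what it's worth, a quick way to check the cap from another machine (URL hypothetical):

  wget -O /dev/null http://mirror.example.org/tails/tails-i386-0.10.iso

The download rate wget reports should settle at whatever limit is configured.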
> Or do you wish to limit bandwidth so that it isn't possible to transfer more than 30 GB over 24 hours? In that case you need to throttle to about 2.8 Mb/s (assuming the utilisation is constant, which presumably it won't be).
That's more like it - see above
BTW, my transfer rate jumped dramatically simply because Tails 0.10 was released on the 4th. I should have been better prepared. Lesson learned.
Mick