Hullo there,
I've just moved my backup server offsite.
The server has an nfs mounted share on my server at work, and the work server runs dirvish backup (which uses rsync), backing up to this mounted directory. I'm using an openvpn tunnel, and have specified the nfs mount thus: rw,hard,wsize=1024,rsize=1024,tcp,addr=10.3.0.3.
Both tun0 interfaces have MTU=1500, and the network connection between work and home is broadband.
Anyway, it took me ages to get it all going and it finally works, but it seems ever so slooooow. Is there a handy way to check transfer rate, or a way to speed it up?
Thanks, Jenny
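[Editor's note: one quick answer to the "handy way to check transfer rate" question is to time a fixed-size write onto the mounted share. A minimal sketch; BACKUP_MNT is a hypothetical variable pointing at the NFS mount point (it falls back to /tmp here so the snippet runs standalone):]

```shell
# Rough throughput check: time how long a 1 MiB write to the share takes.
mnt="${BACKUP_MNT:-/tmp}"
size_kb=1024
start=$(date +%s)
dd if=/dev/zero of="$mnt/speedtest.$$" bs=1024 count="$size_kb" 2>/dev/null
sync   # flush, so the timing includes the actual write-out
end=$(date +%s)
elapsed=$(( end - start )); [ "$elapsed" -lt 1 ] && elapsed=1
echo "approx $(( size_kb / elapsed )) kB/s"
rm -f "$mnt/speedtest.$$"
```

Over the NFS-over-VPN link this gives a crude but honest number, since the write goes through the whole tunnel.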
On Fri, 2006-01-06 at 20:38 +0000, Jenny Hopkins wrote:
Hullo there,
I've just moved my backup server offsite.
The server has an nfs mounted share on my server at work, and the work server runs dirvish backup (which uses rsync), backing up to this mounted directory. I'm using an openvpn tunnel, and have specified the nfs mount thus: rw,hard,wsize=1024,rsize=1024,tcp,addr=10.3.0.3.
We messed around with remote backups quite a bit recently. We have settled on using BackupPC, which after a bit of tweaking works quite well. However, we are doing it the other way round (the remote system initiates and controls the backup run).
I suspect dirvish works in a similar way to BackupPC. What I don't understand is why you are using rsync over NFS: this is adding overhead to the VPN connection.
We found the most efficient way was to run rsync over ssh (turning the compression flag on helps a bit too). There should be no need to NFS-mount the directory to be backed up to/from first. In your case you could just run rsync directly from the server to the remote machine over the VPN tunnel, or dispense with the tunnel and do it over ssh.
On Fri, Jan 06, 2006 at 08:38:51PM +0000, Jenny Hopkins wrote:
Hullo there,
I've just moved my backup server offsite.
The server has an nfs mounted share on my server at work, and the work server runs dirvish backup (which uses rsync), backing up to this mounted directory. I'm using an openvpn tunnel, and have specified the nfs mount thus: rw,hard,wsize=1024,rsize=1024,tcp,addr=10.3.0.3.
Both tun0 have MTU=1500, and the network connection between work and home is broadband.
Anyway, it took me ages to get it all going and it finally works, but it seems ever so slooooow. Is there a handy way to check transfer rate, or a way to speed it up?
Remember that ADSL (if that's what you mean by 'broadband') is only 256kb/s in the 'up' direction. That's 256k *bits* per second. Thus at the absolute best you're only going to get 256k/8 bytes per second, which is 32kbytes/sec. Thus it's going to take something like 30 seconds per megabyte transferred.
It's forty times slower than even 10Mb/s ethernet and four hundred times slower than your typical office 100Mb/s network.
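[Editor's note: the arithmetic above, spelled out in shell integer maths:]

```shell
# 256 kbit/s uplink -> kbytes/s -> seconds per megabyte
up_kbit=256
up_kbyte=$(( up_kbit / 8 ))          # 32 kB/s at the absolute best
secs_per_mb=$(( 1024 / up_kbyte ))   # ~32 s per MiB, i.e. "about 30 seconds"
echo "$up_kbyte kB/s, $secs_per_mb s per MiB"
```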
On 07/01/06, Chris Green chris@areti.co.uk wrote:
Remember that ADSL (if that's what you mean by 'broadband') is only 256kb/s in the 'up' direction. That's 256k *bits* per second. Thus at the absolute best you're only going to get 256k/8 bytes per second, which is 32kbytes/sec. Thus it's going to take something like 30 seconds per megabyte transferred.
It's forty times slower than even 10Mb/s ethernet and four hundred times slower than your typical office 100Mb/s network.
Grief, no wonder it is slow then. That is going to be so slow as to be unusable. How do other people manage?
Thanks, Jenny
On Sat, Jan 07, 2006 at 12:14:56PM +0000, Jenny Hopkins wrote:
On 07/01/06, Chris Green chris@areti.co.uk wrote:
Remember that ADSL (if that's what you mean by 'broadband') is only 256kb/s in the 'up' direction. That's 256k *bits* per second. Thus at the absolute best you're only going to get 256k/8 bytes per second, which is 32kbytes/sec. Thus it's going to take something like 30 seconds per megabyte transferred.
It's forty times slower than even 10Mb/s ethernet and four hundred times slower than your typical office 100Mb/s network.
Grief, no wonder it is slow then. That is going to be so slow as to be unusable. How do other people manage?
You probably need to slim down what you're backing up to the absolute minimum; if you just back up whole directories/folders without being selective, you are probably backing up lots of stuff unnecessarily.
Then use all the optimisation that rsync can give you.
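[Editor's note: being selective usually means rsync exclude patterns. A sketch with invented directory names, demonstrated on a throwaway local copy:]

```shell
# Over the wire this would be something like (hypothetical paths/host):
#   rsync -az --exclude='cache/' --exclude='*.iso' /home/ backup@offsite:/srv/home/

# Demonstrated locally:
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/cache"
echo keep > "$src/doc.txt"
echo skip > "$src/cache/blob"
rsync -a --exclude='cache/' "$src/" "$dst/"   # cache/ never crosses the wire
ls "$dst"
```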
On Sat, Jan 07, 2006 at 12:14:56PM +0000, Jenny Hopkins wrote:
Grief, no wonder it is slow then. That is going to be so slow as to be unusable. How do other people manage?
Take your first backup over the local LAN, then move the disk/DVD/tape to the other machine and copy the data off (make sure you preserve all the file time attributes!). Then you will only need to back up changes, which, if you aren't dealing with a *huge* dataset, shouldn't be a problem.
Also, like Wayne said, ditch using NFS over a tunnel and just use rsync over ssh with compression; that should make quite a big difference.
Thanks Adam
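[Editor's note: the "preserve all the file time attributes" caveat matters because, by default, rsync decides what to resend from file size and mtime. A sketch of copying the seed off the transported disk with attributes intact (paths invented; GNU cp/stat assumed):]

```shell
src=$(mktemp -d); dst=$(mktemp -d)    # stand-ins for USB disk / backup dir
echo data > "$src/f"
touch -d '2006-01-01 00:00' "$src/f"  # give the file an old mtime
cp -a "$src/." "$dst/"                # -a preserves mode, ownership, times
stat -c '%Y' "$dst/f"                 # same mtime as the original
```

If the timestamps were lost, the next rsync run would consider every file changed and resend the lot.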
On 07/01/06, Adam Bower adam@thebowery.co.uk wrote:
Take your first backup over the local lan, then move the disk/DVD/tape to the other machine and copy the data off (make sure you preserve all the file time attributes!) then you will only need to backup changes which if you aren't dealing with a *huge* dataset shouldn't be a problem.
Also like Wayne said, ditch using NFS over a tunnel and just use rsync over ssh with compression, that should make quite a big difference.
Yay! Success. The backup is already just incremental changes: I used to run it at work and have moved it offsite, but the 14 daily images are still there. So I've now swapped the server actually running the process to be the same one the backup directory is on, and have begun the backup using rsync over ssh. It completed its backup in about two hours, and I know there has been a lot of data chucked onto my work server since the last backup. The attempt I made over the nfs/openvpn tunnel was nowhere near done even after leaving it running overnight.
Many thanks for all the answers.
Jenny
On Sat, 2006-01-07 at 12:14 +0000, Jenny Hopkins wrote:
Grief, no wonder it is slow then. That is going to be so slow as to be unusable. How do other people manage?
As you seem to have already found out, when you are doing byte-level changes (as rsync does on anything that is not compressed), a surprisingly small amount of data actually changes.
We are using similar methods to back up whole (small) companies, including shared files and everyone's mailbox, plus stuff like databases etc.
Once we have the initial full copy (which is done locally via a USB drive) we tend to find that only a manageable amount of traffic needs to move overnight.
The only problem I have is that certain events can trigger a much, much bigger download: people reorganising a file structure on the server, for example, thus changing the paths to many files. BackupPC is a bit smarter in this respect, as it will not transfer a file of which it already has an identical copy in the pool; instead it will hardlink it across to the new location locally.
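[Editor's note: the pooling trick described above can be sketched with plain hardlinks; directory names here are invented:]

```shell
pool=$(mktemp -d)
echo "same content" > "$pool/poolfile"           # one stored copy in the pool
mkdir "$pool/backup.0" "$pool/backup.1"
ln "$pool/poolfile" "$pool/backup.0/report.txt"  # each backup gets a hardlink,
ln "$pool/poolfile" "$pool/backup.1/report.txt"  # not a fresh copy
stat -c '%h' "$pool/poolfile"                    # link count: 3 names, 1 inode
```

So a file that merely moved between backups costs one directory entry, not another full copy on disk.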
Oh, and you have to watch compressed files: rsync will need to transfer the entire file from the point of the first change. Because of how most compression works, for every byte of uncompressed change an awful lot can change in the compressed file.
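[Editor's note: to see this, compress two plaintexts that differ by a single byte; the compressed outputs typically diverge far more than one byte. gzip is assumed available; where your gzip supports it, the --rsyncable flag mitigates the effect by resynchronising the output stream:]

```shell
work=$(mktemp -d)
yes a | head -n 50000 > "$work/orig.txt"
sed '1s/a/b/' "$work/orig.txt" > "$work/changed.txt"  # one byte differs
gzip -c "$work/orig.txt" > "$work/orig.gz"
gzip -c "$work/changed.txt" > "$work/changed.gz"
cmp -s "$work/orig.gz" "$work/changed.gz" || echo "compressed outputs differ"
```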
But BackupPC is probably overkill for your application, and it has been a royal pain to get working as we wanted it. It is much better at managing multiple hosts and giving a series of backups (2 days ago, 1 week ago, etc.); due to the way the pool works with compression and hard linking, this does not consume as much disk space as you might expect.
Where we haven't used it yet (apart from on our own machines) is for backing up local workstations, which is actually what it is designed for. Because of the way you can set a backup opportunity window, throttle the backup operation, etc., it is possible to back up local workstations without the end user even noticing. If the user reboots or otherwise drops off the network, it will simply resume from where it was interrupted the next time they are available. It actually works better than some expensive commercial applications (TSM, for example).