I've been using rsnapshot to do backups for a year or two now, but my backup NAS is just about full so I need to move to something with a bigger disk.
Rsnapshot uses rsync's --link-dest to create the hard-linked snapshots, along with a lot of arcane Perl (as far as I'm concerned Perl is always arcane!) and a big, complex configuration file.
So I'm considering a roll-your-own approach; it seems just too simple to be true. Does the following pseudo-code look right:-
Run every day in the small hours:

    IF (day_of_month = 1)
        IF (month = 1)                   # i.e. January 1st
            rename month12 to this_year
        ELSE
            remove month12
        FI
        rename month11 to month12
        ....
        rename month1 to month2
        rename day31 to month1
    ELSE
        remove day31
    FI
    move day30 to day31
    ....
    move day1 to day2
    rsync with the --link-dest=DIR pointing to day2
I can create all the directories empty to start with, so they're all there to be moved even though there's nothing in them yet.
This will (hopefully!) create daily backups for 31 days, monthly backups for 12 months and yearly backups for ever (!?).
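In shell the rotation would come out something like this (a rough, untested sketch; /backup and /home are just placeholders for the real snapshot root and source):

    #!/bin/sh
    # Rotation sketch: 31 dailies, 12 monthlies, yearlies kept forever.
    B=/backup

    if [ "$(date +%d)" = "01" ]; then
        if [ "$(date +%m)" = "01" ]; then        # i.e. January 1st
            mv "$B/month12" "$B/year_$(date +%Y)"   # yearly snapshot; exact naming TBD
        else
            rm -rf "$B/month12"
        fi
        m=11
        while [ "$m" -ge 1 ]; do                 # month11 -> month12 .... month1 -> month2
            mv "$B/month$m" "$B/month$((m + 1))"
            m=$((m - 1))
        done
        mv "$B/day31" "$B/month1"
    else
        rm -rf "$B/day31"
    fi

    d=30
    while [ "$d" -ge 1 ]; do                     # day30 -> day31 .... day1 -> day2
        mv "$B/day$d" "$B/day$((d + 1))"
        d=$((d - 1))
    done

    mkdir "$B/day1"                              # the rotation leaves this slot empty
    rsync -a --delete --link-dest="$B/day2" /home/ "$B/day1/"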
On 5 October 2014 18:12, Chris Green cl@isbd.net wrote:
I've been using rsnapshot to do backups for a year or two now but my backup NAS is just about full so I need to move to something with a bigger disk.
Have a look at Dirvish, which uses rsync and has built-in preferences. No need to roll your own. Also a good support mailing list.
Jenny
On Sun, Oct 05, 2014 at 09:32:18PM +0100, Jenny Hopkins wrote:
Have a look at Dirvish, which uses rsync and has built-in preferences. No need to roll your own. Also a good support mailing list.
Dirvish seems rather like rsnapshot, horribly complicated! :-) It also has a number of deficiencies from my point of view:-
It doesn't do 'graded' older backups as far as I can see; it just does daily snapshots for a number of days and then deletes everything older than x days. I want monthly and yearly snapshots as well, so I can find stuff from last year or even further back.
As far as I understand it Dirvish does 'pull' backups, so all the work is done on the backup machine; this doesn't work so well on a low-powered (in processor terms) NAS.
Being a 'pull' backup introduces all sorts of security problems. At the moment my backup NAS can only be accessed by ssh, and except for physical removal the backed-up files are accessible only to that root login. (The above pseudo-code doesn't manage this quite so well; I'm working on it!)
The separate dirvish-expire and dirvish-runall commands seem a weakness to me (minor, and could be overcome).
As mentioned, I'm working on making my d-i-y backup script as secure as my existing rsnapshot one; to this end I have added a bit more detail to my pseudo-code as follows:-
'Parent' script on backup client (my desktop), run every day in the small hours
Call a 'child' script on the NAS via a dedicated ssh login which can only run the specified script. This is just a collection of 'mv' and 'rm' commands, so it doesn't take much processor power.
    IF (day_of_month = 1)
        IF (month = 1)                   # i.e. January 1st
            rename month12 to this_year
        ELSE
            remove month12
        FI
        rename month11 to month12
        ....
        rename month1 to month2
        rename day31 to month1
    ELSE
        remove day31
    FI
    move day30 to day31
    ....
    move day1 to day2
    exit (returns control back to 'Parent')
rsync with the --link-dest=DIR pointing to day2
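Something like this is what I have in mind for the ssh plumbing (untested, and the login name, key type, paths and script name are all invented for illustration):

    # On the NAS, ~backup/.ssh/authorized_keys ties the key to the
    # rotation script, so this login can run nothing else:
    #
    #   command="/usr/local/bin/rotate-snapshots",no-pty,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... root@desktop

    # 'Parent' on the desktop, run from cron in the small hours:
    ssh -T -i /root/.ssh/backup_key backup@nas   # the forced command does the rotation and exits
    # ...followed by the rsync into day1 with --link-dest pointing at
    # day2, using whichever of the two access methods discussed below.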
The rsync that runs on the parent can be operated in two different ways such that it can *only* modify/write today's backup:-
1 - Use an rsync server on the backup NAS configured so that the only directories visible are 'today' (read/write) and 'yesterday' (read only, for the --link-dest to point at)
2 - Mount the two required directories on the client using NFS with permissions as in 1. Then run the rsync 'locally' on the client.
I think (from previous disk speed tests I have done) that 2 will be significantly faster than 1. It's also rather simpler, as rsync server configuration is rather odd (1 is the way I secure my existing backup system on the old, full-up NAS).
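For 2 the client end would be roughly this (again untested, with the hostname, mount point and source path invented; the NAS end is just an entry in /etc/exports restricted to the desktop):

    # Mount the snapshot area from the NAS and run rsync locally.
    # A single mount is used because hard links can't be made across
    # separate mounts, which rules out mounting day1 and day2 individually.
    mkdir -p /mnt/backup
    mount -t nfs nas:/backup /mnt/backup
    rsync -a --delete --link-dest=/mnt/backup/day2 /home/ /mnt/backup/day1/
    umount /mnt/backup

The read/write versus read-only split would then have to be enforced on the NAS side rather than per mount.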
So, views on the algorithm above (the guts of the script to run on the NAS) would be welcome. It really is only a dozen or so lines of shell script as far as I can see.
Another good thing is that I can run up and test virtually the whole set-up on one (desktop) machine, then move the disk drive to the NAS, change 'localhost' to the NAS hostname and it should still work. (No doubt a few things will need changing, but the principle is good, and the first full backup will already be in place.)
On 06/10/14 08:59, Chris Green wrote:
Dirvish seems rather like rsnapshot, horribly complicated! :-) It also has a number of deficiencies from my point of view:-
My 2p worth: I use "back in time". I think it can do what you're looking for.
Should be in your s/w repository but if not then here: http://backintime.le-web.org/
Nev
On Mon, Oct 06, 2014 at 09:30:45AM +0100, Nev Young wrote:
My 2p worth: I use "back in time". I think it can do what you're looking for.
Should be in your s/w repository but if not then here: http://backintime.le-web.org/
That looks more like it. It's written in Python too, which (to me) is a 'good thing'. :-)
It's still quite complex though; the configuration file is 60 lines or so.
It strikes me that these things "just grow". Looking at the code for "Back in time", for example, I see there's loads of code handling file restore. To my mind that's totally unnecessary with snapshot backups: you just navigate to the required file on the required date and there it is.
"Back in time" is interesting though and I may try it out to see if it can be made to fit in with my security paranoia. Thanks.
On 05/10/14 18:12, Chris Green wrote:
I've been using rsnapshot to do backups for a year or two now but my backup NAS is just about full so I need to move to something with a bigger disk.
[SNIP]
Have you looked at Duplicity?
http://duplicity.nongnu.org/
http://ubuntuforums.org/showthread.php?t=2213815
http://scie.nti.st/2013/4/13/using-duplicity-for-full-server-backup-on-ubunt...
Cheers, Laurie.
On Mon, Oct 06, 2014 at 10:39:15AM +0100, Laurie Brown wrote:
Have you looked at Duplicity?
Just as a little question to everyone who is recommending ready-made solutions: that's OK and is of interest, but why hasn't anyone actually answered the question I asked, i.e. does my simple pseudo-code look correct? :-)
On 06 Oct 11:01, Chris Green wrote:
Just as a little question to everyone who is recommending ready-made solutions: that's OK and is of interest, but why hasn't anyone actually answered the question I asked, i.e. does my simple pseudo-code look correct? :-)
Mostly because, you know, every time that someone rolls their own, sooner or later they feck it up royally again, then ask another question - getting you to use something that's already supported by a larger group of people is *far* preferable to having to debug your code later on down the line.
I'd be looking at obnam personally, and it is what I use.
Cheers,
On Mon, Oct 20, 2014 at 04:13:38PM +0100, Brett Parker wrote:
Mostly because, you know, every time that someone rolls their own, sooner or later they feck it up royally again, then ask another question -
Well I don't *think* I've screwed it up. My simple solution is working well; there have inevitably been a couple of minor hiccoughs, but nothing too bad. The most recent one was a power cut (we were told beforehand but I'd forgotten about it) that found a couple of things that needed sorting out on the backup NAS - but it proved the error reporting was working! :-)
getting you to use something that's already supported by a larger group of people is *far* preferable to having to debug your code later on down the line.
I'd be looking at obnam personally, and it is what I use.
Its major weakness for me is the one it admits itself: it's not very good over SFTP. For me some of the most important things about backups are:-
They go to a remote system, so if the house burns down the backups aren't lost.
They go to a place that's very difficult to reach (in the computer connection sense) from the system being backed up. So an intruder on my desktop system, even with root access, can't do anything to my backups.
On Mon, Oct 20, 2014 at 04:27:46PM +0100, Chris Green wrote:
Not to mention one more fundamental problem with obnam: it doesn't offer any obvious/easy way to automate daily/monthly/yearly backups. The documentation seems basically to expect you to do a backup when you feel like it and then clear some out when there are too many.
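With the d-i-y script, by contrast, all the scheduling needed is a single root crontab entry on the desktop to fire the 'parent' script once a day, along these lines (the time and script name are placeholders):

    # min hour dom mon dow  command
    30    2    *   *   *    /usr/local/bin/backup-parent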
On Mon, Oct 20, 2014 at 04:41:14PM +0100, Chris Green wrote:
Not to mention one more fundamental problem with obnam: it doesn't offer any obvious/easy way to automate daily/monthly/yearly backups.
If only there was a way on unix systems to run a command on a regular schedule without user intervention.
Adam