On 6 April 2016 at 09:47, Chris Green <cl@isbd.net> wrote:
> Well (as you can see from my reply to your first response) that's somewhat bigger than I'm talking about. However mine will be across a slow internet link. One direction will only be 0.5Mb/s or so.
Speed shouldn't really be an issue aside from the transfer itself, so it's a question of how to reduce that. My instinct is that btsync will be the most efficient - it comes from BitTorrent, which was always designed for transferring files over a wide variety of connections, including those with a slow upstream. Most of the "work" is done at each end in terms of hashing all the files, and content changes are transferred incrementally. So if you start with pretty much synced folders (i.e. do the initial copy by some other means) then the volume of actual data transferred over the link will be pretty minimal.
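For that initial seed copy, something like the following over ssh would do - a rough sketch only, and the paths and the "remote" hostname are just placeholders for whatever you actually have:

    # one-off seed copy, compressed over ssh, so both ends start identical
    rsync -az /home/you/docs/ remote:/home/you/docs/

Once the two trees match, btsync (or whichever tool you pick) only has to hash what's already there and then ship deltas.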
From my understanding of rsync, because it doesn't store its state between uses, it has to send the metadata of every single file across the link on each invocation in order to detect changes. Unison does store its state, so it should do better, although I'd still bet on btsync being best. Syncthing (and its fork, Pulse) suffers from the lack of incremental change handling (assuming I have that right), but as you say that's probably not an issue for you with small files.
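To see what I mean about rsync re-examining everything, a dry run like this (again, paths and hostname are only an example) itemises what it would transfer, and to build that list it has to walk the whole tree and compare size/mtime for every file on every invocation:

    # dry run (-n) with itemised output (-i): nothing is transferred,
    # but the full file list still has to be built and compared each time
    rsync -azin --delete /home/you/docs/ remote:/home/you/docs/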
(My use case is also over the Internet, but the link is a good deal faster than yours. However it wasn't always thus, and I used it for smaller folders quite happily back then too.)
For small text files I'd also consider git/svn/etc.
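If you went the git route, the setup is just the usual - a sketch, with the hostname, repo path and folder names all hypothetical:

    # on a box both ends can ssh to: create a bare repo to push/pull through
    ssh remote git init --bare /srv/git/docs.git

    # on the first machine: import the folder and push it
    cd ~/docs
    git init
    git add -A
    git commit -m "initial import"
    git remote add origin remote:/srv/git/docs.git
    git push -u origin master

    # on the second machine
    git clone remote:/srv/git/docs.git ~/docs

After that a push/pull only moves compressed deltas, which is about as kind to a 0.5Mb/s upstream as you'll get - the downside being you have to remember to commit, and it's only really suited to text.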