3/6/2026 at 2:19:03 PM
The claim of being 7x faster than rsync is very dubious; I would like to know the test conditions for such a result.

I use rsync over SSH every day, and even between 7-to-10-year-old computers it reaches the maximum link speed over 2.5 Gb/s Ethernet.
So to actually need something faster than rsync, and to be able to test it, one must use at least 10 Gb/s Ethernet, and I do not know how fast a CPU must be to reach link speed there.
For 7x faster, one would need at least 25 Gb/s Ethernet, and that assumes the worst case for rsync, i.e. that it gets no faster on higher-speed Ethernet than what I see on cheap 2.5 Gb/s Ethernet.
If, on higher-speed Ethernet, link speed cannot be reached because an ancient CPU is too slow for AES-GCM or for AES-CTR with UMAC, then using multiple connections would not improve the speed. If the speed is not limited by encryption, then tuning TCP parameters, such as window sizes, would probably have the same effect as using multiple connections, even with plain rsync over ssh.
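For example, on Linux one can raise the TCP buffer limits so that a single connection can keep a long, fast path full (a minimal sketch; the 64 MB maxima are illustrative and should be sized to your bandwidth-delay product):

    # allow larger TCP windows for a single connection (Linux)
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864
    sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 131072 67108864"
    # then an ordinary single-stream transfer can use the larger window
    rsync -a -e ssh /data/ user@host:/data/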
If the transfers go over the Internet, then the speed is throttled by some ISP and is not determined by your computers. There are cases where a small number of connections, e.g. 2 or 3, can have higher aggregate throughput than 1, but in most cases that I have seen, the ISPs limit the aggregate throughput of all traffic going to one IP address, so opening more connections gives the same throughput as fewer connections.
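A quick way to check which regime you are in, assuming iperf3 can be run on both ends (the host name is a placeholder):

    # single stream vs. several parallel streams to the same host
    iperf3 -c remote.example.com
    iperf3 -c remote.example.com -P 4

If the four streams together reach roughly the same total as the single stream, the cap is on aggregate throughput and extra rsync connections will not help.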
by adrian_b
3/6/2026 at 2:33:46 PM
> I use rsync over SSH every day, and even between 7-to-10-year-old computers it reaches the maximum link speed over 2.5 Gb/s Ethernet.

What are you rsyncing? Is it Maildirs for 5000 users, or a multi-TB music and movie archive? The former might benefit greatly if the filesystem and its flash backing store bottleneck on metadata lookup rather than bandwidth; the latter, not so much.
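One rough way to tell the cases apart, assuming a maildir tree under /var/mail (the paths are placeholders): time the metadata walk by itself, and compare it with a dry-run rsync, which exchanges file lists but moves no file data:

    # cost of just traversing the tree (metadata only)
    time find /var/mail -type f > /dev/null
    # cost of rsync building and comparing file lists, still no data
    time rsync -a -n /var/mail/ user@host:/backup/mail/

If these already take a large fraction of the real transfer time, the workload is metadata-bound and raw link speed is not the limit.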
I too would like to know the test conditions. This is probably one of those tools that is lovely for the right use case, useless for the wrong one.
by i_think_so
3/6/2026 at 8:43:10 PM
Maildirs too, though not for so many users, so usually only a few thousand files are transferred; more frequently it is some big files of tens of GB each.

The syncs are done most frequently between (Gentoo) Linux using XFS and FreeBSD using UFS, both on NVMe SSDs (Samsung PRO).
As I have said, on 2.5 Gb/s Ethernet the bottleneck is clearly the network link, so rsync, ssh, sshd, and the filesystems are all faster than that, even on old Coffee Lake CPUs or first-generation Epyc CPUs.
The screen capture in the linked parsync repository shows extremely slow transfer speeds of a few MB per second, which seems possible only when there is no local connection between the computers and rsync runs over the Internet. In that case the speed is influenced far more by whatever policies the ISP uses to control the flow than by what your computers do. Over a local connection, even an older 1 Gb/s Ethernet should show a constant speed of around 110 MB/s for all files transferred by rsync.
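(Back of the envelope: 1 Gb/s / 8 = 125 MB/s raw; Ethernet framing plus IP and TCP headers cost roughly 5-6% on 1500-byte frames, leaving about 117 MB/s of TCP payload, so a steady ~110 MB/s through rsync over ssh is essentially line rate.)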
When an ISP limits the speed per connection without limiting the aggregate throughput, then transferring many files in parallel can indeed be a great win. However, the ISPs I have been dealing with have not done such a thing in decades; they limit the aggregate bandwidth, so multiple connections do not increase the throughput.
by adrian_b
3/6/2026 at 3:07:33 PM
Anecdote: I have rsync'd maildirs, and I recall managing a ~7x perf improvement by combining rsync with GNU parallel (it is trivial to fan out one job per maildir; roughly the sketch below).
by wolttam
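A minimal sketch of that fan-out, assuming one maildir per user under /var/mail (paths, host, and job count are placeholders):

    # one rsync job per maildir, at most 8 running at a time
    find /var/mail -mindepth 1 -maxdepth 1 -type d |
        parallel -j8 rsync -a {}/ user@host:/backup/mail/{/}/

Each maildir becomes an independent rsync, so the per-file round trips and directory scans overlap instead of running serially.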
3/6/2026 at 3:56:24 PM
Awww yeah. +1 for GNU parallel.

When I think of those obscenely ugly scripting hacks I used to do back in the day...
"Well, trust me, this way's easier." -- Bill Weasley
by i_think_so
3/6/2026 at 9:59:57 PM
I've used parsyncfp2, which I think is just another implementation of the same idea, and I've definitely seen a 2x-3x throughput improvement when transferring over large distances.

As you mentioned, it definitely depends on how the ISP handles the traffic.
I have yet to try it, but I've heard good things about hpn-ssh as well.
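From its documentation (I have not verified this myself), hpn-ssh adds dynamically sized buffers, and on trusted links it can optionally switch off payload encryption after authentication, roughly:

    # hpn-ssh on both ends; NONE cipher for the bulk data only
    rsync -a -e "ssh -oNoneEnabled=yes -oNoneSwitch=yes" /data/ user@host:/data/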
by magixx