> efficiently divide the set of files.

It turns out fpart does just that! Fpart is a filesystem partitioner: it helps you sort file trees and pack them into bags (called "partitions"). It is developed in C and available under the BSD license.

It comes with an rsync wrapper, fpsync. Now I'd like to see a benchmark of that vs rclone! via https://unix.stackexchange.com/q/189878/#688469 via https://stackoverflow.com/q/24058544/#comment93435424_255320...

https://www.fpart.org/
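
For a rough idea of how the two are driven (paths and option values are made up for illustration; check the man pages for the exact flags):

  # fpart: split a tree into 4 partitions, writing file lists /tmp/part.0 .. /tmp/part.3
  fpart -n 4 -o /tmp/part /data/src

  # fpsync: sync with 8 parallel rsync workers, at most 2000 files per job
  fpsync -n 8 -f 2000 /data/src/ /data/dst/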

reply
Sometimes find (with the desired -maxdepth) piped into GNU parallel running rsync is fine.
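
A sketch with made-up paths ({/} is parallel's basename replacement string, and -0 pairs with find's -print0):

  # one rsync per top-level directory, 8 jobs at a time
  find /data/src -mindepth 1 -maxdepth 1 -type d -print0 |
    parallel -0 -j 8 rsync -a {}/ /data/dst/{/}/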
reply
robocopy! Wow, blast from the past. Used to use it all the time when I worked in a Windows shop.
reply
I am using robocopy right now on a project. The /MIR option is extremely useful for incrementally maintaining copies of large local directories.
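
For reference, a typical invocation looks something like this (paths invented; /MIR mirrors the tree, deleting destination files that no longer exist in the source, and /MT enables multithreaded copying):

  robocopy C:\data\src D:\mirror\src /MIR /MT:32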
reply
My go-to for fast and easy parallelization is xargs -P.

  find a-bunch-of-files | xargs -P 10 do-something-with-a-file

       -P max-procs
       --max-procs=max-procs
              Run up to max-procs processes at a time; the default is 1.
              If max-procs is 0, xargs will run as many processes as
              possible at a time.
reply
Note that one should use -print0 (on find) and -0 (on xargs) for safety with filenames containing spaces or newlines.
reply
Thanks! I've been using the -I{} do-something-to-file "{}" approach, which is also handy when the input is just one param among others. -0 is much faster.

Edit: Looks like when doing file-by-file, -I{} is still needed:

  # find tmp -type f | xargs -0 ls
  ls: cannot access 'tmp/b file.md'$'\n''tmp/a file.md'$'\n''tmp/c file.md'$'\n': No such file or directory
reply
You have to do `find ... -print0` so find also uses \0 as the separator.
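
In other words, combining it with the examples above:

  find tmp -type f -print0 | xargs -0 ls
  find a-bunch-of-files -print0 | xargs -0 -P 10 do-something-with-a-file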
reply