Among other things, I’m running a small Nextcloud instance on my home server, and over time data piles up (especially photos). My main storage is sufficiently sized and redundant for now, but I wonder how I’m supposed to do a serious backup: the moment will come when the backup’s size exceeds any reasonably priced single drive. What then?

Of course I could just buy another disk and distribute the data in chunks, but that’s manual work - or is it? At least rsync has no built-in option for that.
Using a virtual, larger file system spanning multiple drives looks like the easiest option, but I doubt it’s a good idea for a reliable backup: if one disk fails, the whole backup is gone. On the other hand, a failed disk takes part of the backup with it in the distributed-chunks scenario as well.

What do you people do? Right now I don’t have to bother, as my data easily fits on a single device, but I wonder what the best solution is, in theory and in practice.
Regards!

  • poVoq@lemmy.ml · 3 years ago

    As usual the answer is ZFS ;)

    If you put your data on a ZFS dataset, you can easily and efficiently back it up to another, remote ZFS pool.
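
    The usual pattern is a snapshot plus zfs send piped into zfs receive on the other machine. Here is a rough sketch of that pattern wrapped in Python (in practice you’d probably run the two commands straight from a shell or a cron job; the dataset, host, and pool names below are made up, so adjust them to your setup):

    ```python
    import subprocess

    # Placeholder names: local dataset "tank/nextcloud", remote host
    # "backuphost", remote dataset "backup/nextcloud".
    snapshot = "tank/nextcloud@manual-backup"

    # 1. Take an atomic, point-in-time snapshot (cheap on ZFS).
    subprocess.run(["zfs", "snapshot", snapshot], check=True)

    # 2. Stream it to the remote pool: zfs send | ssh ... zfs receive
    send = subprocess.Popen(["zfs", "send", snapshot], stdout=subprocess.PIPE)
    subprocess.run(
        ["ssh", "backuphost", "zfs", "receive", "-F", "backup/nextcloud"],
        stdin=send.stdout,
        check=True,
    )
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

    # Later runs can transfer only the delta between two snapshots with
    # "zfs send -i old_snapshot new_snapshot", which keeps backups fast.
    ```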

    • Aarkon@feddit.de (OP) · 3 years ago

      Ok, but if I have to back up, say, 5 TByte worth of data, I’d have to plug in several disks and reinstantiate my pool. ^^

      What I am rather looking for is a script or something that would split the backup files into folders of, let’s say, 3 TByte each, which I can rsync to different drives. By the looks of it, I’ll have to write that myself. That’s not an impossible task, but I wonder how well that works with duplicity/borg.
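
      Something like this first-fit grouping is roughly what I have in mind (a rough sketch only; the source path, mount points, and the 3 TByte limit are placeholders):

      ```python
      #!/usr/bin/env python3
      """Rough sketch, not a finished tool: group the top-level folders of a
      backup tree so each group stays under a per-drive limit, then rsync each
      group to its own drive. SOURCE, TARGETS and CHUNK_BYTES are placeholders."""
      import subprocess
      from pathlib import Path

      SOURCE = Path("/srv/backup")            # hypothetical backup root
      TARGETS = ["/mnt/disk1", "/mnt/disk2"]  # one mounted drive per entry
      CHUNK_BYTES = 3 * 1000**4               # ~3 TByte per drive

      def dir_size(path: Path) -> int:
          """Total size of all regular files below path."""
          return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

      # First-fit: put each top-level folder on the first drive with room left.
      groups = [[] for _ in TARGETS]
      free = [CHUNK_BYTES] * len(TARGETS)
      for folder in sorted(p for p in SOURCE.iterdir() if p.is_dir()):
          size = dir_size(folder)
          for i, space in enumerate(free):
              if size <= space:
                  groups[i].append(folder)
                  free[i] -= size
                  break
          else:
              raise SystemExit(f"{folder} ({size} bytes) fits on no drive")

      # One rsync call per drive, copying only that drive's share of folders.
      for target, folders in zip(TARGETS, groups):
          if folders:
              subprocess.run(["rsync", "-a", *map(str, folders), target], check=True)
      ```

      The grouping step is the part rsync doesn’t do for you; everything after that is just a plain per-drive sync.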