Among other things, I’m running a small Nextcloud instance on my home server, and over time, data somewhat piles up (especially photos). My main storage is sufficiently sized & redundant for now, but I wonder how I am supposed to do a serious backup: the moment will come when the backup’s size exceeds any reasonably priced single drive. What then?

Of course I can just buy another disk and distribute the chunks, but that’s manual work - or is it? At least rsync has no built-in option for that.
Using a virtual, larger file system spanning multiple drives looks like the easiest option, but I doubt it’s a good idea for a reliable backup - if one disk fails, the whole backup is gone. On the other hand: that’s true for the distributed chunks as well.

What do you people do? Right now I don’t have to bother, as my data fits on a single device easily, but I wonder what the best solution is in theory & practice.
Regards!

  • tmpodA · 3 years ago

    Take a look at the wonderful borg utility and its many wrappers/extensions, maybe one can be of use to you :)
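    If it helps as a starting point, a minimal borg workflow looks roughly like this (the repo path, source directory, and retention policy are placeholders, not recommendations):

    ```shell
    # Create an encrypted, deduplicating repository on the backup drive
    # (path is hypothetical).
    borg init --encryption=repokey /mnt/backupdrive/borg-repo

    # Take a backup; borg deduplicates and compresses, so a mostly-unchanged
    # photo library adds very little per run.
    borg create --stats --compression lz4 \
        /mnt/backupdrive/borg-repo::nextcloud-{now} \
        /srv/nextcloud/data

    # Thin out old archives according to a retention policy.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
        /mnt/backupdrive/borg-repo
    ```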

  • poVoq@lemmy.ml · 3 years ago

    As usual the answer is ZFS ;)

    If you put your data storage on a ZFS volume, you can easily and efficiently back it up to another remote ZFS volume.
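    For example, snapshot-based incremental replication looks like this (dataset and host names are made up):

    ```shell
    # One-time full replication of a snapshot to the backup pool.
    zfs snapshot tank/nextcloud@2021-06-01
    zfs send tank/nextcloud@2021-06-01 | ssh backuphost zfs receive backup/nextcloud

    # Afterwards, only the delta between snapshots goes over the wire.
    zfs snapshot tank/nextcloud@2021-06-02
    zfs send -i @2021-06-01 tank/nextcloud@2021-06-02 | ssh backuphost zfs receive backup/nextcloud
    ```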

    • Aarkon@feddit.deOP · 3 years ago

      Ok, but if I have to back up, say, 5 TByte worth of data, I’d have to plug in several disks and reinstantiate my pool. ^^

      What I am rather looking for is a script or something that would split the backup files into folders of, let’s say, 3 TByte each, which I can rsync to different drives. But by the looks of it, I’ll have to write that myself. That’s not an impossible task, but I wonder how well that works with duplicity/borg.
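      A sketch of that split in bash, assuming the top-level directories under the backup root are the unit you’d rsync (the paths and the first-fit strategy are my own assumptions, not a polished tool):

      ```shell
      # Greedy first-fit-decreasing: take the top-level directories under $1,
      # largest first, and assign each to the first "bucket" (target drive)
      # that still has room, up to $2 bytes.
      pack_buckets() {
          local src="$1" limit="$2"
          local -a used=()          # bytes already assigned to each bucket
          local size dir i placed
          while IFS=$'\t' read -r size dir; do
              placed=""
              for i in "${!used[@]}"; do
                  if (( used[i] + size <= limit )); then
                      used[i]=$(( used[i] + size ))
                      printf 'bucket%s\t%s\n' "$i" "$dir"
                      placed=1
                      break
                  fi
              done
              if [[ -z "$placed" ]]; then
                  used+=("$size")
                  printf 'bucket%s\t%s\n' "$(( ${#used[@]} - 1 ))" "$dir"
              fi
          done < <(du -sb "$src"/*/ | sort -rn)
      }

      # e.g.: pack_buckets /srv/backup $((3 * 1000**4))
      # prints "bucketN<TAB>directory" pairs; rsync each bucket to its own drive.
      ```

      Note a directory larger than the limit still gets its own (over-full) bucket, so this only works as long as no single folder outgrows a drive.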

  • rhymepurple@lemmy.ml · 3 years ago

    Consider implementing network-attached storage (NAS) like TrueNAS or Unraid; there are other options (both NAS and non-NAS) that can achieve this too. A NAS helps you mitigate the risk of drive failure (i.e. you install multiple hard drives, and if one fails, the system keeps working until you replace it), makes the storage accessible across the devices on your network (depending on how you configure your NAS and network) instead of just the device the hard drive is plugged into, can run additional services/applications, and will likely have some sort of backup system you can enable/configure for either cloud backups or local backups to another device. The downside is that it will likely require additional hardware and/or some network work, but it can be done pretty easily depending on your needs. For example, Network Chuck has a tutorial on setting up a NAS on a Raspberry Pi.

      • rhymepurple@lemmy.ml · 3 years ago

        I realized that may be the case after commenting. I didn’t read your post as closely as I should have, but I kept the comment up in case someone finds it helpful.

        Unfortunately I’m not aware of any solutions beyond buying bigger drives, standing up a backup NAS, or omitting unimportant/non-critical/easily recoverable data from backups. I don’t think that’s what you’re looking for, though.