So I have a nearly full 4 TB hard drive in my server that I want to make an offline backup of. However, the only spare hard drives I have are a few 500 GB and 1 TB ones, so the entire contents won’t fit on any one of them, though combined they do have enough space. I also only have one USB hard drive dock, so I can only plug in one drive at a time, and in any case I don’t want any sort of RAID 0 or striping, because the drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.

I could play digital Tetris and manually copy individual directories to each smaller drive until it fills up, mentally keeping track of which directories still need to be copied when I change drives, but I’m hoping for a more automatic and less error-prone way. Ideally, I’d want something that can start copying the entire contents of a given drive or directory to a drive that isn’t big enough to fit everything, automatically stop at the last file that will fit in its entirety (I don’t want to split files between drives), then wait for me to unplug the first drive, plug in another, and specify a new mount point before continuing to copy the remaining files, using as many drives as necessary to copy everything.

Does anyone know of something that can accomplish all of this on a Linux system?

  • mindlessLump@lemmy.world · 11 months ago

    You could create a Python script to do this. There is a library called psutil that can tell you how much free space each mounted drive has. Basically:

    • iterate over mounted drives and see how much each has available
    • based on these values, iterate over your backup files and separate them into chunks that will fit on each drive
    • copy chunks to respective drives

    Would be a fun little project even for a beginner, I think. Something like the sketch below.
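    Here’s a minimal sketch of that approach, assuming the source is mounted at /mnt/big-drive (a made-up path, adjust to yours). It copies whole files only, skips anything that won’t fit on the current destination, and prompts for the next mount point once everything that fits has been copied. The 64 MiB reserve is a guessed safety margin, since the free-space check can’t account exactly for filesystem metadata overhead:

    ```python
    import os
    import shutil

    import psutil

    RESERVE = 64 * 1024 * 1024  # headroom for filesystem metadata (a guess)

    def copy_until_full(src_root, rel_paths, dest_mount):
        """Copy as many whole files as fit on dest_mount; return what's left."""
        remaining = []
        for rel in rel_paths:
            src = os.path.join(src_root, rel)
            free = psutil.disk_usage(dest_mount).free
            if os.path.getsize(src) + RESERVE <= free:
                dest = os.path.join(dest_mount, rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(src, dest)  # preserves timestamps/permissions
            else:
                remaining.append(rel)  # doesn't fit here; try the next drive
        return remaining

    def main():
        src_root = "/mnt/big-drive"  # hypothetical source mount point
        # Gather every file as a path relative to the source root
        rel_paths = []
        for dirpath, _dirs, filenames in os.walk(src_root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                rel_paths.append(os.path.relpath(full, src_root))

        while rel_paths:
            dest = input("Mount the next backup drive and enter its mount point: ").strip()
            rel_paths = copy_until_full(src_root, rel_paths, dest)
            if rel_paths:
                print(f"{len(rel_paths)} files still to go; swap drives and repeat.")

    if __name__ == "__main__":
        main()
    ```

    You’d want to add error handling (and maybe write out a manifest of which files ended up on which drive), but psutil.disk_usage() is the key piece: it returns the free bytes for whatever filesystem the given path is on.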