Not sure if this is a better fit for datahoarder or some selfhosted community, but I'm putting my money on this one.
The problem
I currently have a cute little server with two drives connected to it, running a few different services (mostly media serving and torrents). The key facts here are that 1) it's cute and little, and 2) it's handling pretty bulky data. Cute and little doesn't go very well with big RAID setups and such, and apart from upgrading one of the drives I'm probably at my limit in terms of how much storage I can physically fit in the machine. Also, if I want to reinstall it or something, that's very difficult to do without downtime, since I'd have to move the drives and services off to a different machine (not a huge problem since I'm the only one using it, but I don't like it).
Solution
A distributed FS would definitely solve the issue of physically fitting more drives into the chassis, since I could basically just connect drives to a Raspberry Pi and have that Pi join the distributed FS. Great.
I think it could also solve the issue of potential downtime when I reinstall or do maintenance, since I can have multiple services reading off the same distributed FS and reroute my reverse proxy to the new services while the old ones are taken offline. There will potentially be a brief disruption, but no downtime.
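To make that concrete, here's roughly what I mean on the reverse-proxy side (nginx syntax; the hostnames and port are made up, and both boxes are assumed to mount the same distributed FS):

```nginx
# hypothetical upstream for the media service; old and new hosts
# read the same distributed FS, so either can serve requests
upstream media {
    server new-box.lan:8096;          # freshly reinstalled machine
    server old-box.lan:8096 backup;   # old machine, only used if new one is down
}

server {
    listen 80;

    location / {
        proxy_pass http://media;
    }
}
```

Once the new box is confirmed healthy, the old server line can be dropped and the old machine shut down.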
Candidates
I know there are many different solutions for distributed filesystems, such as Ceph, MooseFS, GlusterFS and MinIO. I'm kinda leaning towards Ceph because of its integration in Proxmox, but it also seems like the most complicated solution of the bunch. Is it worth it? What are your experiences with these, and given the above description of my use case, which do you think would be the best fit?
Since I already have a lot of data it’s a bonus if it’s easy to migrate from my current filesystem somehow.
My current setup uses a lot of hard links as well, so it's a big bonus if the solution supports something similar (i.e. some easy way of referencing the same data in multiple places without duplicating it).
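For anyone unfamiliar, this is the hard-link behaviour I'm relying on: two names, one inode, no duplicated data (a sketch using GNU coreutils `stat` flags; BSD `stat` uses different options):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/library"

echo "episode data" > "$tmp/downloads/show.mkv"

# hard link: a second name for the same inode, so no extra space is used
ln "$tmp/downloads/show.mkv" "$tmp/library/show.mkv"

# both names report the same inode number
stat -c %i "$tmp/downloads/show.mkv"
stat -c %i "$tmp/library/show.mkv"

# link count is now 2; removing one name leaves the data intact
stat -c %h "$tmp/library/show.mkv"
rm "$tmp/downloads/show.mkv"
cat "$tmp/library/show.mkv"

rm -rf "$tmp"
```

This is what lets a torrent client keep seeding from one directory while the media library shows the same file elsewhere.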
I can't really tell you what to use, but from my personal experience: stay away from GlusterFS and DRBD. Both have caused me serious trouble when trying to run them in a production setup. Ceph seems to be pretty solid, though.
That’s very helpful because glusterfs and ceph are probably my top two candidates. Will probably try it out.
If you're on Linux or BSD, look into ZFS. Insanely easy to set up and admin: filesystem-level volume management, compression and encryption, levels of RAID if you want them, and recently they even added the option to expand your RAIDZ pools with new drives. All of that in software, without having to fiddle with expensive RAID cards or motherboard firmware.
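For example, a basic RAIDZ pool with compression, an encrypted dataset, and a later expansion looks roughly like this (device names and the pool name `tank` are placeholders; RAIDZ expansion needs OpenZFS 2.3 or newer):

```shell
# create a single-parity RAIDZ pool from three disks (placeholder names)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# transparent compression for everything in the pool
zfs set compression=lz4 tank

# an encrypted child dataset, unlocked with a passphrase
zfs create -o encryption=on -o keyformat=passphrase tank/private

# later: expand the RAIDZ vdev with a fourth disk (OpenZFS 2.3+ only)
zpool attach tank raidz1-0 /dev/sdd
```

That's the whole setup; no RAID card or firmware settings involved.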
I get what you’re proposing but I’d respectfully suggest looking into unRAID on basically any hardware that can boot an OS.
It won’t necessarily be small and cute (though you can accomplish that if you wish), but you can make it do just about anything. I bought old enterprise hardware to run my main and backup servers on. I feel really comfortable with my data safety.
FYI, you probably shouldn't say you feel really comfortable with your data safety while suggesting unRAID. The way unRAID handles its storage will lead to data loss at some point. unRAID only locks down an array and protects it when SMART starts issuing warnings that a drive has failed. SMART isn't magic, though, and a dying drive can write garbage data for days if not weeks before SMART catches on. If a drive writes garbage for long enough, there's nothing you can do to fix it, due to the way unRAID handles arrays. This is why ZFS is such a popular option: it treats hard drives with a level of skepticism, verifying that data was actually written correctly and re-checking it from time to time.
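That periodic re-check is a scrub, and on ZFS it's one command (the pool name `tank` is a placeholder):

```shell
# walk every block in the pool and verify it against its checksum,
# repairing from redundancy (mirror/RAIDZ copies) where possible
zpool scrub tank

# see scrub progress and any checksum errors it found
zpool status -v tank
```

Many setups just run the scrub from a monthly cron job or systemd timer.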
That's not even mentioning that unRAID charges for what other software does for free.