Greetings. I have a Debian server at home running a file server, a Jellyfin server, and some other things. I also had 4 external drives hooked up to it in a RAID 10 (or is it 1+0?) configuration. The SSD the actual server was installed on failed overnight and looks to be beyond recovery. So my question is: when I install Debian on a new drive to replace the failed one, is there any method I could use to get the RAID array working with the new install without it being rebuilt? From what I have found it looks like that is not possible, but I figured I would ask. The actual RAID disks are fine; ironically, they are about 8 years old while the SSD was only about 2, and it was the one that failed. No important data was lost, it will just be a bit of a pain to replace everything that was on the server if I have to rebuild the array and lose all of the data. Thanks in advance.

EDIT: Forgot to include that this was set up using mdadm.

EDIT2: So it turns out this was not as massive a problem as I thought. I had assumed that since the server that set up the RAID array with mdadm was lost, I would not be able to get back into the array even though the data was still there. That was not the case: as soon as I connected the drives to the new server, mdadm recognized the array. It turns out one of the RAID disks had also failed (no idea why, but they are old), but I luckily have a spare, so I swapped it in and now have to wait patiently for 12-ish hours for the array to rebuild. In the meantime I got my file share back up and running and confirmed everything is accounted for. So provided there are no other random failures in the next 12 hours, everything should turn out fine. Thanks all for your help. Now to get Jellyfin installed and running again so I can get back to streaming the same shows over and over…
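
For anyone who finds this later, the recovery boiled down to a handful of mdadm commands, roughly like this (md0 and the sdX/sdY device names are placeholders; yours will differ):

    sudo mdadm --assemble --scan    # reassemble the array from the discs' own metadata
    sudo mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1    # drop the dead member
    sudo mdadm /dev/md0 --add /dev/sdY1    # add the spare; the rebuild starts on its own
    cat /proc/mdstat    # watch the rebuild progress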

  • Doombot1@beehaw.org · 1 year ago

    Is it a hardware RAID or a software RAID? If it’s software (not sure about hardware), the discs themselves should have the array’s metadata on them, and you can just use mdadm to reassemble and restart the array.
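
    For example, examining one of the members should dump that metadata (the device name here is just an example):

        sudo mdadm --examine /dev/sdb1

    If the superblock is intact, you should see fields like Array UUID, Raid Level : raid10, and Raid Devices : 4.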

    • lemmyvore@feddit.nl · 1 year ago

      Since nobody mentioned how to tell if it’s hardware or software RAID:

      1. If you created the arrays from Linux, it’s software. You can install any Linux (it doesn’t have to be the same distro) and you will be able to recover and access them.
      2. If you created the arrays from a special config tool started at boot, it’s hardware. That usually only happens if you have a dedicated RAID card in your PC. To recover such an array you typically need the exact same card model.
      3. If you created the arrays from the BIOS, it’s a bastard form of proprietary software RAID implemented by the motherboard. To recover such an array you need the same model of motherboard.

      But you should NOT lose the array in any of the above cases. Losing the system disk has no impact on the arrays. In case (1) you can simply reinstall, and in (2) and (3) you only lose the array if the card or the motherboard dies.
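
      A couple of quick checks to tell which case you’re in:

          cat /proc/mdstat        # software (mdraid) arrays and their members show up here
          lspci | grep -i raid    # a dedicated hardware RAID card will usually show up here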

      • Doombot1@beehaw.org · 1 year ago

        Great explanation. Yes - I’ve done this before! Built up a system with a RAID array but then realized I wanted a different boot drive. Didn’t really want to wait for dual 15 TB arrays to rebuild - and luckily for me, I didn’t have to, because the metadata is saved on the discs themselves. If I had to guess (I could be wrong though), I believe sudo mdadm --examine --scan should bring up some info about the discs, or something similar to that command.

        • lemmyvore@feddit.nl · 1 year ago

          mdadm --examine looks at the superblocks of all available partitions and prints information about the ones that are members of RAID arrays.

          mdadm --detail prints information about running arrays.

          When added to one of the above, --scan will get any missing information from /proc/mdstat or from /etc/[mdadm/]mdadm.conf.
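
          For example (the UUID and names below are made up):

              sudo mdadm --examine --scan
              # ARRAY /dev/md/0 metadata=1.2 UUID=aaaabbbb:ccccdddd:eeeeffff:00001111 name=myserver:0

              sudo mdadm --detail --scan
              # similar ARRAY lines, but only for arrays that are currently assembled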

          The output of those commands (typically mdadm --detail --scan) is also commonly used to populate /etc/mdadm/mdadm.conf. That file is a way to fine-tune array assembly and add meta information: human-friendly names, alert emails and so on. It is not a substitute for either /proc/mdstat (which is maintained by the kernel directly) or /etc/fstab.

          It can be very useful for creating consistent reference points for the arrays, especially if you port them to another system or reinstall. mdadm.conf can identify discs by block ID (instead of device names) and also give the arrays custom names (instead of names like md3, where the kernel can issue different numbers on a different install).
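
          A minimal /etc/mdadm/mdadm.conf built that way might look something like this (UUID, email and names made up):

              DEVICE partitions
              MAILADDR admin@example.com
              ARRAY /dev/md/data metadata=1.2 UUID=aaaabbbb:ccccdddd:eeeeffff:00001111 name=myserver:data

          The UUID pins the array to its member discs regardless of device names, and “data” becomes a stable name (/dev/md/data) across reboots and reinstalls.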