• 3 Posts
  • 735 Comments
Joined 3 years ago
Cake day: July 7th, 2023


  • Having the disks connected externally is the same as having them connected internally

    No, it 1000% is not, especially in the case of USB, which is the example I used. Even in the way Linux handles everything as a file and a target, it is vastly different.

    No RAID solution I know of would lose the array on a power outage

    Hardware RAID enclosures have batteries on the disk controllers for this very reason. We aren’t talking about those though; we’re talking about software RAID on JBOD, which doesn’t have those sanity protections (there’s a rough sketch of the idea at the end of this comment). Here’s some random blog that explains it in more depth.

    Honestly I don’t see how interrupt handling would be any different between internally and externally connected devices, except for different buses/protocols handling it differently intrinsically

    See above

    Maybe I’m too spoiled by using ZFS, but again I don’t think this would actually be a problem

    That’s a filesystem solution to a hardware problem, so yes, probably a bit spoiled there, or at least it’s skewing your understanding of what RAID is and how it works. One of the reasons ZFS exists, actually. It’s nice to have nice things though.
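
    If it helps to see the failure mode, here’s a toy model in Python. It’s entirely made up for illustration (invented class and names, not how any real controller or mdadm behaves); it just shows why a volatile write-back cache without a battery loses writes the OS already thinks are safe:

    ```python
    # Toy model: a disk with a volatile write-back cache and no battery backup.
    # "VolatileCacheDisk" and everything else here is hypothetical.

    class VolatileCacheDisk:
        def __init__(self):
            self.platter = {}   # what actually survives a power cut
            self.cache = {}     # volatile write-back cache (DRAM)

        def write(self, block, data):
            # The OS gets an "ack" as soon as the data lands in cache.
            self.cache[block] = data
            return "ack"

        def flush(self):
            # Only a flush moves cached writes to stable storage.
            self.platter.update(self.cache)
            self.cache.clear()

        def power_loss(self):
            # No battery: anything still sitting in cache is simply gone.
            self.cache.clear()


    disk = VolatileCacheDisk()
    disk.write(0, "journal entry")
    disk.flush()                       # this write survives
    disk.write(1, "your video edit")   # acked, but never flushed
    disk.power_loss()

    print(disk.platter)  # {0: 'journal entry'} -- block 1 never made it
    ```

    A battery-backed controller cache exists precisely to keep that second write around until it can be flushed, which is the protection a plain JBOD setup doesn’t have.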







  • The main issue is the statefulness of the host.

    Say you’re on a laptop, and you get an external JBOD box without its own RAID controller. You use that laptop to set up a RAID1 array on 2 disks, and go about your business. A few weeks in, you’re in the middle of editing some video or whatever, and you have a power outage.

    That RAID array is assuredly damaged or dead. Your host machine is the controller; if it’s in the middle of a write when the entire array disappears, it’s going to give up quickly, and the cached data in flight is gone. You miiight be able to recover the array if you’re lucky, but whatever you were working on is gone.

    A number of different scenarios where this can happen exist without a power outage, but the core problem is that the target can’t manage its own interrupts, so you end up with two different states on two different devices that won’t match. It’s toast.
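
    To make the mismatch concrete, here’s a toy sketch (plain Python, invented names, nothing to do with mdadm’s real on-disk format) of a host-managed mirror write that gets interrupted halfway:

    ```python
    # Toy sketch: a RAID1 write where the host is the controller and the
    # power dies after only one member has been updated.

    disk_a = {"events": 100, "blocks": {}}
    disk_b = {"events": 100, "blocks": {}}

    def mirrored_write(block, data, power_cut_mid_write=False):
        # The host must update both members itself.
        disk_a["blocks"][block] = data
        disk_a["events"] += 1
        if power_cut_mid_write:
            raise RuntimeError("power outage: array disappeared")
        disk_b["blocks"][block] = data
        disk_b["events"] += 1

    try:
        mirrored_write(7, "new data", power_cut_mid_write=True)
    except RuntimeError:
        pass

    # The two members now disagree about both the data and how many writes
    # they have seen; the host has to guess which side is authoritative.
    print(disk_a)  # {'events': 101, 'blocks': {7: 'new data'}}
    print(disk_b)  # {'events': 100, 'blocks': {}}
    ```

    Software RAID can detect this kind of mismatch at assembly time and resync, but anything that was only in flight when the box vanished is still gone.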


  • There’s a few things at work here:

    1. Not much “hardware” RAID anymore because offloading works just fine and doesn’t draw excessive resources.
    2. It sounds like you want to just take your existing disks and pop them into something else, which won’t work.
    3. You shouldn’t be running RAID over any external connections for a number of reasons if the coordinator (your machine) is hosting it. I can go deeper into that if you want.

    You want a self-contained NAS that manages its own RAID and disks. I would honestly just get a diskless unit and start clean. You’ll be better off in the long run.



  • You sound new to the ecosystem at large, and I don’t mean that to be condescending, just that you may not have all the context needed to understand why systemd exists. Any distro that exists right now can flip back to SysV if they want to. They just don’t want to. SysV may be more flexible to the neckbeards, but systemd is massively more comprehensive at scaling and integrating than a set of init scripts (there’s a rough sketch of what I mean at the end of this comment). It has huge benefits to system integrators, OEMs, and especially the people who manage the largest concentration of Linux deployments: datacenter ops teams.

    The fact that you, a desktop user, take issue with that is meaningless to the ecosystem at large. I manage thousands of deployed bare-metal machines, and I’d never switch back, because SysV was fucking painful. Sure, it was easier to debug in some cases, but was it as useful or reliable? Not even close.

    Just go use something else and stop letting it bother you. You’ll feel better in the long run.
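
    For what I mean by scaling, here’s a loose illustration (toy Python with a hypothetical service list, not systemd’s actual algorithm or unit syntax) of the difference between running a numbered list of init scripts in order and starting services from a dependency graph:

    ```python
    # Toy comparison: strictly sequential startup vs. dependency-aware startup.
    # The services and dependencies below are made up for illustration.

    from graphlib import TopologicalSorter

    deps = {
        "network": set(),
        "storage": set(),
        "database": {"storage"},
        "webapp": {"network", "database"},
        "metrics": {"network"},
    }

    # SysV-style: a fixed, strictly sequential order (S10network, S20storage, ...).
    sysv_order = ["network", "storage", "database", "metrics", "webapp"]
    print("sequential:", sysv_order)

    # Dependency-aware: everything whose dependencies are satisfied can start
    # at the same time, which is where the scaling benefit shows up.
    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        batch = list(ts.get_ready())
        print("start in parallel:", batch)
        for service in batch:
            ts.done(service)
    ```

    Unit files are basically a declarative way of feeding the init system that graph, instead of encoding the ordering by hand in script numbering.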







  • Separate the use cases here:

    1. For your desktop, whatever works. There is no one distro that gives you some leg-up on performance or anything else. You can install the same software on all, and the kernel is largely the same.

    2. Just get or build a NAS for hosting media. A Synology or QNAP has a bit of added cost, but the maintenance overhead is reduced by a LOT versus running TrueNAS, OMV, or similar. That being said, choose the right tool for the job, and don’t just run Debian for this purpose, because it just adds admin overhead you don’t need. This has probably already been solved from your specific angle. What you want is simplicity in maintenance. Being able to hot-swap and repair a failed drive is a huge win.