I'm personally thinking of a small DIY rack stuffed with commodity HDDs off eBay, with LVM spanning a bunch of RAID1 pairs. I don't want any complex architectural solution, since my homelab's scale always equals 1. As far as I can tell this has few obvious drawbacks. What do you think?
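
Concretely, I mean something like this (device names are just examples):

    # two mdadm RAID1 pairs...
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    # ...joined into one volume group, with a single LV spanning both
    pvcreate /dev/md0 /dev/md1
    vgcreate storage /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n data storage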

  • panda_abyss@lemmy.ca · +15 · edited · 2 months ago

    I set up Garage, which works fine.

    The advantage of an S3-style layer is its simplicity and its integration with apps.

    I also use it so I can run AI agents that have zero access to any disk-based filesystem.
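
    For example, since it speaks the S3 API, any stock S3 client works against it; something like this (the endpoint is Garage's default S3 port, the bucket names are made up):

        aws --endpoint-url http://localhost:3900 s3 mb s3://agent-scratch
        aws --endpoint-url http://localhost:3900 s3 sync ./results s3://agent-scratch/results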

  • Dalraz@lemmy.ca · +11/−1 · 2 months ago

    This has been my journey.

    I started with pure Docker and hostpath volumes on an Ubuntu server. This worked well for me for many years and is good for most people.

    Later I really wanted to learn k8s, so I built a 3-node cluster with NFS-managed PVCs for storage; this was fantastic for learning, and I enjoyed it for 3-plus years. It's all on top of Proxmox and ZFS.

    About 8 months ago I decided I was done with my k8s learning and wanted more simplicity in my life. I created an LXC running Docker and slowly migrated all my workloads back to Docker and hostpath, this time backed by my mirrored ZFS filesystem.

    I guess my point is: figure out what you're hoping to get out of your journey, then tailor your solution to that.

    Also, I do recommend using Proxmox and ZFS.
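
    If it helps, the hostpath pattern is just a bind mount into a per-app ZFS dataset, roughly like this (pool and app names are made up):

        # one dataset per app keeps snapshots and rollbacks per-service
        zfs create tank/appdata/jellyfin
        docker run -d --name jellyfin \
          -v /tank/appdata/jellyfin:/config \
          jellyfin/jellyfin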

  • skilltheamps@feddit.org · +6 · 2 months ago

    You need to ask yourself what properties you want in your storage; then you can judge which solution fits. For me it is:

    • effortless rollback (e.g. when a service updates, runs a DB migration, and fails)
    • effortless backups that preserve database integrity, without slow/cumbersome/downtime-inducing crutches like SQL dumps
    • a scheme that works the same way for every service I host, with no tailored solutions for individual services/containers
    • low maintenance

    The amount of data I'm handling fits on larger hard drives (so I don't need pools), but I don't want to waste storage space. And my home server is no longer my learn-and-break-stuff environment; it just needs to work.

    I went with btrfs RAID 1, with every service in its own subvolume. Containers are pinned by their digest hashes, which get snapshotted together with all persistent data, so every snapshot holds exactly what is required for a seamless rollback. Snapper maintains a timeline of snapshots for every service. Updating is semi-automated: snapshot → update digest hashes from container tags → pull new images → restart service. Nightly offsite backups happen with btrbk, which mirrors snapshots incrementally to another offsite server running btrfs.
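
    The update step is roughly this shape, per service (paths and names are illustrative, not my exact script):

        # read-only snapshot of the subvolume: pinned digests + all persistent data
        btrfs subvolume snapshot -r /srv/paperless /srv/.snapshots/paperless-$(date +%F)
        # rewrite image references from tags to current digest hashes (left abstract here)
        # then pull and restart on the new digests
        docker compose -f /srv/paperless/compose.yaml pull
        docker compose -f /srv/paperless/compose.yaml up -d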

  • Jason2357@lemmy.ca · +5 · 2 months ago

    Hot take: for personal use, I see no value at all in “availability,” only in data preservation. If a drive fails catastrophically and I lose a day waiting for a restore from backups, no one is going to fire me. No one is held up in their job. It’s not enterprise.

    Redundancy, however, doesn’t save you when a file is deleted, corrupted, ransomwared, or whatever. Your RAID mirror will just replicate the problem instantly. Snapshots and 3-2-1 backups are what matter to me, because when personal data is lost, it’s lost forever.

    I really do think a lot of hobbyists need to focus less on highly available redundancy and more on real backups. Both time and money are better spent there.
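
    By “real backups” I mean something in the spirit of this sketch (restic to an offsite box; the repo address and paths are placeholders):

        # initialise once, then run nightly from cron or a systemd timer
        restic -r sftp:backup@offsite.example.com:/srv/restic init
        restic -r sftp:backup@offsite.example.com:/srv/restic backup /home /srv/appdata
        # keep a snapshot timeline, prune the rest
        restic -r sftp:backup@offsite.example.com:/srv/restic forget \
          --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune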

    • MrModest@lemmy.world · +2 · edited · 2 months ago

      Why btrfs and not ZFS? In my info bubble, btrfs has a reputation as an unstable FS, and people have ended up with unrecoverable data.

      • non_burglar@lemmy.world · +2 · 2 months ago

        That is apparently no longer the case, but ZFS is certainly more feature-rich and more battle-tested.

      • ikidd@lemmy.world · +2 · 2 months ago

        Just the RAID 5/6 modes are shit. And there’s its weird willingness to let you boot a degraded RAID without telling you a drive is borked.

      • unit327@lemmy.zip · +2 · 2 months ago

        Btrfs used to be easier to install because it is part of the kernel, while ZFS required shenanigans, though I think that has changed now.

        Btrfs also just works with whatever mismatched drive sizes you throw at it, and adding more later is easy (see the sketch below). This used to be impossible with ZFS pools, though RAIDZ expansion has apparently landed in recent OpenZFS releases.
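
        Adding a drive is roughly this (device and mount point are examples):

            # grow an existing btrfs filesystem with one more mismatched disk
            btrfs device add /dev/sdd /mnt/data
            # rebalance so existing data gets mirrored onto the new disk too
            btrfs balance start /mnt/data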

      • squinky@sh.itjust.works · +1/−1 · 2 months ago

        All I know about ZFS is that there are weird patent or closed-source encumbrances or something. I hear it’s good and it seems popular; I just avoid proprietary Oracle products.

        As for btrfs, the only part that’s claimed to be unstable is RAID 5/6. People use even that in production and say the claims are overblown, but I don’t; I use it in RAID 1 mode. RAID 1 in btrfs doesn’t require a bunch of matching drives: it lets you glom together a number of mismatched disks and just puts every block on more than one of them. So it’s a nice cross between RAID and LVM or JBOD.
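
        Creating that kind of array is a one-liner (the drives and mount point are placeholders):

            # three mismatched drives; every block lands on two of them
            mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc
            mount /dev/sda /mnt/data
            # shows how much usable space the mix actually gives you
            btrfs filesystem usage /mnt/data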

        • MrModest@lemmy.world · +2 · edited · 28 days ago

          There’s a thing called OpenZFS. Almost the same thing happened with ZFS as with Java: Oracle bought a company and tried to close ZFS, but people just reimplemented it under a FOSS licence with a community around it. I don’t know who uses Oracle ZFS nowadays; everyone uses OpenZFS.

          It’s true that there’s a licence incompatibility that doesn’t allow integrating OpenZFS into the Linux kernel, but it’s not as if ZFS is proprietary:

          https://openzfs.github.io/openzfs-docs/License.html

          “While both (OpenZFS and Linux Kernel) are free open source licenses they are restrictive licenses. The combination of them causes problems”

  • spacemanspiffy@lemmy.world · +2 · 2 months ago

    I have a few ext4 drives connected; I mount them via /etc/fstab and that’s it.

    I’ve yet to find a reason to change it.
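
    The whole setup is a couple of /etc/fstab lines like these (UUIDs and mount points are placeholders):

        UUID=aaaa-1111  /mnt/media   ext4  defaults,noatime  0  2
        UUID=bbbb-2222  /mnt/backup  ext4  defaults,noatime  0  2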

  • azureskypirate@lemmy.zip · +1 · 2 months ago

    I’ve got Proxmox running on an NVMe mirror. Two HDDs are passed through to a TurnKey Linux mediaserver; they’re mirrored with btrfs and act as storage. I’m satisfied with all three (Proxmox, TurnKey, btrfs) and would recommend them.

    I had one btrfs drive fail, and replacing it with no experience took about an hour.
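
    For anyone curious, the replacement boils down to something like this (device names are examples, not my exact ones):

        # with the new disk attached, copy data from the failing disk onto it
        btrfs replace start /dev/sdb /dev/sdc /mnt/storage
        # watch progress until it finishes
        btrfs replace status /mnt/storage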

    I do wish there were better user documentation for WebDAVcgi, the WebDAV frontend in TurnKey Linux mediaserver.

    The mediaserver comes with Samba, so I use that to connect devices like my phone or laptop to the server.

    TurnKey’s mediaserver was my replacement for OpenMediaVault with the Filebrowser plugin. Filebrowser creates an internal user that writes any files uploaded via the web interface, so if you mount the folder later via NFS, the permissions don’t match. OpenMediaVault would stall or crash a lot as a container, and especially as a VM, though maybe it runs better on bare metal.