Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list, FFS. They’re just running in separate namespaces and cgroups that limit what they can see and use.
Yes, I’ll die on this hill.
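If you don’t believe it, try it yourself on any Docker host (the container name web here is just an example):
# grab the PID of a running container's main process, as the host sees it
PID=$(docker inspect --format '{{.State.Pid}}' web)
# there it is, sitting in the host's process list
ps -fp "$PID"
# and here is the cgroup it's parked in
cat /proc/$PID/cgroup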
But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!
In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. That’s different in a good way, though; I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.
kubernetes
Kubernetes isn’t just resource isolation; it encourages splitting services across hardware in a cluster. So you’ll get more latency than with VMs on a single box, but you get to scale the hardware much more easily.
Those terms do mean something, but they’re a lot simpler than execs claim they are.
…oh shit, the RAM is on fire.
The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.
Burn mothercucker, burn.
(Thanks phone for the spelling mistakes that I’m leaving).
Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.
Learning this fact is what got me to finally dockerize my setup.
I’ve been self-hosting since the ’90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.
But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.
I love it because I can deploy instantly… oftentimes in a single command line. Docker Compose allows for quickly nuking and rebuilding, often with your entire config saved to one or two files.
And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
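To give a concrete idea, a minimal sketch (the image, ports and paths here are just examples):
# the whole service lives in one file…
cat > docker-compose.yml <<'EOF'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    ports:
      - "8080:80"
    volumes:
      - ./vw-data:/data
    restart: unless-stopped
EOF
# …and one command brings it up
docker compose up -d
# nuking and rebuilding is just as quick
docker compose down && docker compose up -d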
Hey, you made my post for me, though I’ve been using Docker for a few years now. Never. Looking. Back.
My NAS will stay on bare metal forever. Any added complication there is something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I’ll probably migrate to small VMs per service once I get new hardware up and running.
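For anyone curious, the rough non-Nix equivalent of that approach is just debootstrap plus systemd-nspawn (the name and path here are placeholders):
# build a minimal Debian tree that gets security updates straight from Debian's repos
sudo debootstrap stable /var/lib/machines/myservice http://deb.debian.org/debian
# boot it as a container; machinectl can start/stop it like any other machine afterwards
sudo systemd-nspawn -D /var/lib/machines/myservice -b
# inside the container: apt install unattended-upgrades, so patching stays fully automated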
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.
A NAS as bare metal makes sense.
It can then correctly interact with the raw disks. You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
Let a storage device be a storage device, and let a hypervisor be a hypervisor.
Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean reinstalls, right down to the bootloader.
The only constant is change.
I started self-hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
I like to tinker with new things, and with bare metal installs that has a way of adding cruft to servers and slowly pushing the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues of dependency hell and just flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids them completely. It also makes OS and hardware upgrades less likely to break stuff.
These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted game servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
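For what it’s worth, the update flow I mentioned is basically just this (assuming the Dockerfile and docker-compose.yaml sit in the same directory):
# rebuild the image, pulling newer base layers, then recreate the container
docker compose build --pull
docker compose up -d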
Yes containers have made everything so easy.
OK, I’m arguing for containers/VMs, and granted, I do this for a living… I’m a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that can run lots of different things is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much easier.
Just having all these things sandboxed makes it SO much easier.
I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn’t it be simpler if I could just run
sudo apt install immich vaultwarden
, just like I can do
sudo apt install qbittorrent-nox
today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as they do in Docker (if not better, given the lack of overhead)!
I use a Raspberry Pi 4 with a 16 GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers, one for every service I want to use.
Databases on SD cards are a nightmare for SD card lifetimes. I would really recommend getting at least a USB SSD stick instead if you want to keep it compact.
Your SD card will die suddenly someday in the near future otherwise.
Thank you for your advice. I do use an external hard drive for my data.
So, are you running 15 services on the Pi 4 without containers?
I see. Are you the only user?
No.
Is your favorite color purple?
I’m a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux and installing from the apt package manager.
As I’m getting more serious, I’m starting to take another look at Docker. Unfortunately, my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like an Ubuntu/Debian LTS server, something with more cutting-edge software in the repos. I don’t care much for building from scratch and navigating dependency roulette.
What OS are you using?
Linux Mint 22
I guess it isn’t the most user-friendly process, but you can add the official Docker repo and get an up-to-date version without compiling or anything. You just want to make sure to uninstall any Docker packages you installed previously before you start.
https://linuxiac.com/how-to-install-docker-on-linux-mint-22/
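The short version of what that guide walks you through (going from Docker’s standard Ubuntu instructions; Mint 22 tracks Ubuntu noble, so the repo line is pinned to noble rather than Mint’s own codename):
# remove any distro-packaged Docker bits first
sudo apt remove docker.io docker-compose podman-docker containerd runc
# add Docker's signing key and apt repo
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable" | sudo tee /etc/apt/sources.list.d/docker.list
# install current Docker Engine plus the compose plugin
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin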
They can, but - if their current setup meets their needs - why? There ain’t nothing wrong with having a few spare laptops, each serving as an isolated environment for a few simple home server tasks.
Don’t get me wrong - I too advocate for Docker, particularly on new builds, or as a relatively turnkey solution to get novice friends started, but the best setup is the one that works, and they sound like they’ve got theirs where they want it.
…because that isn’t what they said. They said that they are getting more serious and now looking at Docker, but the outdated version in the Mint repo is preventing them from exploring that any further. So I offered a method that I know works without any of the “dependency roulette” that they were concerned about, while also giving a disclaimer that it isn’t exactly noob-friendly. 🤷♂️
Fair point. I think my eyes glossed over the part where they said they were taking a second look at Docker (but caught the rest about rebuilding the OS in general). My sincere apologies 😓😅
In my own experience, certain things should always be on their own dedicated machines.
My primary router/firewall is on bare metal for this very reason.
I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.
I could quite easily run OpnSense in a VM, and I do that, too. I run proxmox, and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work)
And tbh, that only exists because I did have a router die, and installed OpnSense into my proxmox server temporarily while awaiting new-to-me equipment.
I didn’t see a point in removing it. So it’s there, just not automatically started.
Same here. In particular, I like small, cheap hardware to act as appliances, and I have several Raspberry Pis.
My example is Home Assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers, but I don’t have to deal with that. It also needs to be always available, so I use efficient, “right sized” hardware, and it works regardless of whether I’m futzing with my “lab”.
I would always run proxmox to set up docker VMs.
I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
It was great. I ran it as bare-metal on a 3 node cluster. I learned a lot, I got my project complete, everything went fine.
I will use Talos Linux again.
However, next time I’m running proxmox with 2 VMs per node - 3 Talos control VMs and 3 Talos worker VMs.
I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating control plane and data/worker plane (or whatever it is) makes sense - it’s the way k8s is designed.
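For reference, standing up a cluster like that with talosctl is roughly this (IPs are placeholders; controlplane.yaml goes to the control VMs, worker.yaml to the workers):
# generate machine configs for the cluster
talosctl gen config my-cluster https://10.0.0.10:6443
# apply them to the respective VMs
talosctl apply-config --insecure --nodes 10.0.0.11 --file controlplane.yaml
talosctl apply-config --insecure --nodes 10.0.0.21 --file worker.yaml
# bootstrap etcd on one control plane node, then grab a kubeconfig
talosctl bootstrap --nodes 10.0.0.11 --endpoints 10.0.0.11 --talosconfig ./talosconfig
talosctl kubeconfig --nodes 10.0.0.11 --endpoints 10.0.0.11 --talosconfig ./talosconfig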
It wasn’t the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would’ve made things so much easier.
Also, why wouldn’t I run proxmox? Overhead is minimal, I get a nice overview, a nice UI, and I get snapshots and backups.
What hardware are you running it on?
3x minisforums MS-01
Erm. I’d just say there’s no benefit in adding layers just for the sake of it.
It’s just different needs. Say I have a machine that just runs a dedicated database; I’d install it directly, because as I said there’s no advantage in making it more complicated.
Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?
Considering I have a full backup, all services are Arch packages and all important data is on its own drive, I’m not concerned about anything
There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.
I started hosting stuff before containers were common, so I got used to doing it the old fashioned way and making sure everything played nice with each other.
Beyond that, it’s mostly that I’m not very used to containers.