Hi! I’m new to self-hosting. Currently I am running a Jellyfin server on an old laptop, and I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.
I’ve searched for it on the internet, but I’m still not very clear on what it does.
Could someone explain this to me like I’m stupid? What does it do, and why would I need it?
Also, what are other services that might be interesting to self-host in the future?
Many thanks!
EDIT: Wow! Thanks for all the detailed and super quick replies! I’ve been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!
Docker enables you to create instances of an operating system running within a “container” which doesn’t access the host computer unless it is explicitly requested. This is done using a Dockerfile, which is a file that describes in detail all of the settings and parameters for said instance of the operating system. This might be packages to install ahead of time, or commands to create users, compile code, execute code, and more. This instance of an operating system, usually a “server,” is great because you can throw the server away at any time and rebuild it with practically zero effort. It will be just like new. There are many reasons to want to do that; who doesn’t love a fresh install with the bare necessities?
On the surface (and the rabbit hole is deep!), Docker enables you to create an easily repeated formula for building a server so that you don’t get emotionally attached to a server.
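To make that concrete, here’s a minimal sketch of what a Dockerfile can look like. The base image, package names, and script path are all just illustrative:

```Dockerfile
# Start from a known base image (illustrative choice)
FROM debian:bookworm-slim

# Install packages ahead of time
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Create an unprivileged user to run the service
RUN useradd --create-home appuser
USER appuser

# Copy in your application and declare how to start it
COPY app.sh /home/appuser/app.sh
CMD ["/home/appuser/app.sh"]
```

Rebuild from this file and you get the same “fresh install” every single time.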
It’s the platform that runs all of your services in containers. This means they are separated from your system.
Also, what are other services that might be interesting to self-host in the future?
Nextcloud, the Arr stack, your future apps, etc.
Pretty good intro for absolute beginners here…
+1 for Techworld with Nana
EDIT: Wow! Thanks for all the detailed and super quick replies! I’ve been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!
This is pretty much what I’ve started doing. Containers have the wonderful benefit that if you don’t like one, you just delete it. If you install on bare metal (at least on Linux) you can end up with a lot of extra packages getting installed and configured that could affect your system in the future. With containers, all those specific extras are bundled together and removed at the same time without having any effect on your base system, so you’re always at your clean OS install.
I will also add an irritation with Docker containers: if you create something in a container that isn’t kept in a shared volume, it gets destroyed when the container is recreated. The container keeps only the maintainer’s setup. For instance, I do occasional encoding of videos in a HandBrake container, and I can’t save any profiles I make within it, because they get wiped the next time the container is recreated; they’re part of the container, not on any shared volume.
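The usual fix is to bind-mount the directory where the app stores its settings, so they live on the host. A sketch (the image name and the /config path are assumptions here; check your image’s docs for its actual config directory):

```sh
# Persist the container's config directory on the host,
# so profiles survive the container being recreated
docker run -d --name handbrake \
  -v /srv/handbrake/config:/config \
  -v /srv/media:/watch \
  jlesage/handbrake
```

Anything written under /config now lands in /srv/handbrake/config on the host and survives recreation; anything written elsewhere in the container is still disposable.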
A program isn’t just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.
But why can I “just install a program” on my Windows machine or on my phone, and it’s that easy?
You might notice that your Windows installation is like 30 gigabytes and there is a huge folder somewhere in the system path called WinSxS. Microsoft bends over backwards to provide you with basically all the versions of all the shared libs ever, resulting in a system that can run programs compiled decades ago just fine.
In Linux-land usually we just recompile all of the software from source. Sometimes it breaks because glibc changed something. Or sometimes it breaks because (extremely rarely) the kernel broke something. Linus considers breaking the userspace API one of the biggest no-nos in kernel development.
Even so, depending on what you’re doing you can have a really old binary run on your Linux computer if the conditions are right. Windows just makes that surface area of “conditions being right” much larger.
As for your phone, all the apps that get built and run for it must target some specific API version (the amount of stuff you’re allowed to do is much more constrained). Android and iOS both basically provide compatibility for that stuff in a similar way to Windows, but the story is much less chaotic than on Linux, Windows, and even macOS, because your phone app is not allowed to do that much by comparison.
In Linux-land usually we just recompile all of the software from source
That’s just incorrect. Apart from three guys with nothing better to do, no one in “Linux-land” does that.
Docker is not a virtual machine, it’s a fancy wrapper around chroot
I’m aware of that, but OP requested “explain like I’m stupid” so I omitted that detail.
No, chroot is kind of its own thing
It is just a kernel namespace
Yes, technically chroot and jails are wrappers around kernel namespaces / cgroups, and so is Docker.
But containers were born in a post-chroot era as an attempt at making the same functionality much more user-friendly, focused on bundling cgroups and namespaces into a single superset, where chroot on its own is only namespaces. This is super visible in early Docker, where you could not individually dial those settings. It’s still a useful way to explain containers in general, in the sense that comparing two similar things helps you define both of them.
Also, cgroups have evolved alongside containers and work rather differently now compared to 18 years ago, when cgroups were invented and this differentiation mattered more than it does now. We’re at the point where differentiating between VMs and containers is getting really hard, since both more and more often rely on the same kernel features developed in recent years on top of cgroups.
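If you want to poke at those primitives directly, util-linux’s unshare makes the “bundle of namespaces” idea tangible. A rough sketch (needs root, and assumes you’ve prepared a minimal root filesystem at /srv/rootfs that contains /bin/sh):

```sh
# Give a process its own mount, PID, and UTS namespaces,
# then chroot it into a separate root filesystem --
# roughly the isolation a container runtime automates for you
sudo unshare --mount --pid --uts --fork \
  chroot /srv/rootfs /bin/sh
```

A runtime like Docker then layers cgroup resource limits, network namespaces, and image management on top of essentially this.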
Beat me to it.
So instead of having problems getting the fucking program to run, you have problems getting Docker to properly build/run when you need it to.
At work, I have one program that fails to build an image because a 3rd-party package forgot to update their PGP signature; one that builds and runs, but for some reason gives a 404 error when I try to access it on localhost; and one where whoever the fuck made it literally never ran it, because the Dockerfile was missing some 7 packages in the apt install line.

There are two ends here: as a user and as a developer. As a user, Docker images just work, so you solve almost every problem your users would otherwise be having, the ones that make them give up on using your software.
Then, as a developer, Docker can get complicated, because you need to build a “system” from scratch to run your program. If you’re using an unstable 3rd-party package or are missing packages, it means those problems would otherwise be happening on the deploy servers instead of on your local machine; each server would have its own set of problems depending on which packages it didn’t have or had at the wrong version, and in fixing that for your service you might break other services already running there.
Yeah, it’s another layer, and so there definitely is an https://xkcd.com/927/ aspect to it… but (at least in theory) only having problems getting Docker (1 program) to run is better than having problems getting N programs to run, right?
(I’m pretty ambivalent about Docker myself, BTW.)
Building from source is always going to come with complications. That’s why most people don’t do it. A docker compose file that ‘just’ downloads the stable release from a repo and starts running is dramatically simpler than cross-referencing all your services to make sure there are no dependency conflicts.
There’s an added layer of complexity under the hood to simplify the common use case.
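For instance, here’s a minimal sketch of a compose file for the kind of Jellyfin setup OP already runs (the tag and host paths are illustrative, so adjust to taste):

```yaml
# docker-compose.yml: pull a published release and run it
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"           # Jellyfin's default web UI port
    volumes:
      - ./config:/config      # settings survive the container being recreated
      - /srv/media:/media:ro  # your media library, mounted read-only
    restart: unless-stopped
```

One file, one `docker compose up -d`, and there’s nothing to cross-reference against the rest of your system.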
It’s a container service. Containers are similar to virtual machines but less separate from the host system. Docker excels at creating reproducible, self-contained environments for your applications. It’s not the simplest solution out there, but once you understand the basics it is a very powerful tool for system reliability.
I’ve never posted on Lemmy before. I tried to ask this question of the greater community but I had to pick a community and didn’t know which one. This shows up as lemmy.world but that wasn’t an option.
Anyway, what I wanted to know is why do people self host? What is the advantage/cost? Sorry if I’m hijacking. Maybe someone could just post a link or something.
It usually comes down to privacy and independence from big tech, but there are a ton of other reasons you might want to do it. Here are some more:
- preservation - no longer have to care if Google kills another service
- cost - over time, Jellyfin could be cheaper than a Netflix sub
- speed - copying data on your network is faster than to the internet
- hobby - DIY is fun for a lot of people
For me, it’s a mix of several reasons.
Anyway, what I wanted to know is why do people self host?
Wow. That’s a whole separate thread on its own. I self-host a lot of my services because I am a staunch privacy advocate, and I really have a problem with corporations using my data to further bolster their profit margins without giving me due compensation. I also self-host because I love to tinker and learn. The learning aspect is something I really get into. At my age it is good to keep the brain active, and so I self-host, create bonsai, garden, etc. I’ve always been into technology, from the early days of thumbing through Pop Sci and Pop Mech magazines, which evolved into thumbing through Byte mags.
We should probably make a poll thread here, it could be pretty interesting to see everyone’s primary and secondary motivations.
Containerized software. The main advantage of this is that every application, or stack of applications, runs in its own ecosystem. You can restart a container whenever you want without having to reboot your entire system. You can store all of a container’s data in a volume, so if you hit a snag, you can recreate the container without actually losing any of your configs.
You can also create networks so that apps run in different subnets than other apps.
Very simply put, a docker container is like a mini system that runs on your main system.
Something else I like about Docker is Docker Compose. You can create a container or a stack of containers with a single, simple YAML file, without actually having to install anything yourself. I manage my containers in Portainer.
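As an illustration of the “stack in one YAML file” idea, here’s a sketch of an app plus its database sharing a private network, with named volumes for the data (the service names, tags, and placeholder password are all just examples):

```yaml
services:
  app:
    image: nextcloud:latest
    ports:
      - "8080:80"
    volumes:
      - app_data:/var/www/html   # app data lives in a named volume
    networks:
      - backend
    depends_on:
      - db

  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: change-me   # placeholder; use a real secret
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - backend                  # reachable from the app, not exposed to the host

volumes:
  app_data:
  db_data:

networks:
  backend:
```

Delete and recreate either container and the named volumes keep the data.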
If ‘but it works on my computer’ was a software service
I would start with a premade docker compose file. From there learn how to tweak it.
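Once you’ve saved a premade docker-compose.yml, getting it running (and poking at it) is just a couple of commands:

```sh
docker compose up -d     # download the images and start the stack in the background
docker compose logs -f   # follow the logs while you learn what it's doing
docker compose down      # stop and remove the containers (named volumes are kept)
```

Tweaking is then a matter of editing the YAML and running `docker compose up -d` again.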
A little box you can put your app in.
If the app goes bad, it doesn’t sink your ship. Just throw the box overboard and repackage the app.
I’m not sure most people need it, but it could be fun to use a new app inside a container. It also lets you update an app that needs a restart without shutting down your other services.
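With compose, that per-app update looks something like this (assuming your service is named jellyfin in the compose file):

```sh
docker compose pull jellyfin    # fetch the newest image for just this service
docker compose up -d jellyfin   # recreate only that container; the rest keep running
```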