Back in my day, (shakes cane), TeamSpeak and Ventrilo were the big voice chat platforms/tools. Both have text chat and channels/rooms, but their focus is voice chat for gaming.
🇨🇦
Bit old, but pretty much everything Source Engine is self-hostable, isn’t it? Most of those games even come with a pre-configured SRCDS (SouRCe Dedicated Server) you can download and run right from the Steam launcher.
I know I ran a Garry’s Mod server for quite a while, piling a shit ton of mods on it. Plus, Garry’s Mod can and will use the resources/assets from any Source game you’ve got installed.
:/ shit.
I’m pretty sure I saw this a few months ago and moved to the beatkind/watchtower fork, but it’s not been updated in 6 months either. (The dev’s only been active in private repos, so they’re still around, just not actively working on Watchtower.)
Guess I’ll find another solution. Hell, I might just put my own script on crontab. Looping through folders running docker compose down/pull/up isn’t too hard really.
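A minimal version of that cron script could look something like this (the `/opt/stacks` layout, with one subdirectory per stack, is an assumption about how your compose files are organized):

```shell
#!/bin/sh
# Update every compose stack found under a root directory.
# Assumed layout: one subdirectory per stack, each containing a compose file.

update_stacks() {
    root="$1"
    for dir in "$root"/*/; do
        # Skip directories that don't actually contain a compose file
        [ -f "${dir}docker-compose.yml" ] || [ -f "${dir}compose.yaml" ] || continue
        echo "Updating ${dir}"
        ( cd "$dir" && docker compose pull && docker compose down && docker compose up -d ) \
            || echo "Update failed for ${dir}"
    done
}

# crontab entry, e.g.: 30 4 * * * /usr/local/bin/update-stacks.sh
update_stacks "${STACK_ROOT:-/opt/stacks}"
```

Pulling the new images before taking the stack down keeps the downtime window small, which is why the order here differs slightly from down/pull/up.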
My wife got very upset. Apparently she likes the ads.
Set static IPs for her devices, then whitelist those device IPs past the block lists by adding them to a group, then add a regex allow of ‘.*’ (match everything) for that group.
A bit of redundancy is key.
I have my primary DNS, Pi-hole, running on an RPi that’s dedicated to it, as well as a second backup instance running in a Docker container on my main server machine.
Nebula-Sync keeps the two synchronized with each other, so if a change is made on one (things like local DNS records or changes to blocklists), it automatically syncs to the other.
If either one goes down (dead SD cards, me playing with things, power surges, whatever), the other picks up the slack until I fix the broken one, which is usually little more than a re-install, then manually syncing them using Pi-hole’s ‘Teleporter’ settings. Worst case, restore a backup (that you’re definitely taking. Regularly. Right?)
Both Pi-holes use Cloudflared (here’s their guide *edit: I see I’ll have to find a new method for this… just going to pin the containers to the ‘2025.11.1’ tag for now) to translate ALL DNS traffic into DoH traffic, encrypting it and using the provider of my choice instead of my ISP or any other plain DNS. The router hands out both local DNS IPs with DHCP, and outbound port 53 (plain DNS) is blocked at the router, so all LAN devices MUST use the local DNS or their own DoH config. Plain DNS won’t make it out.
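Pinning that tag in compose looks something like this (service name, port, and upstream are illustrative assumptions, not my exact config):

```yaml
# docker-compose.yml fragment (hypothetical service layout)
services:
  cloudflared:
    # Pin an explicit release instead of :latest so auto-updaters leave it alone
    image: cloudflare/cloudflared:2025.11.1
    command: proxy-dns --port 5053 --upstream https://1.1.1.1/dns-query
    restart: unless-stopped
```

Pi-hole would then point its custom upstream at the cloudflared container on port 5053.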
DNS adblocking isn’t perfect, but it’s a really nice tool to have. And having an internal DNS to resolve names for local-only services is super handy. Most of my subdomains are only used internally, so Pi-hole handles those DNS records, while external DNS only has the records for publicly accessible things.
I have the same issue with Immich on Android. It pretty much never uploads files until I manually open the app; then the app refuses to acknowledge it has uploaded those new files until it’s closed and re-opened :( (power saving is set to unrestricted in Android, and background data usage is allowed. I’ve been through troubleshooting very thoroughly, it just doesn’t work)
FolderSync has been the only reliable (non-root) backup solution I’ve used. It’s set to monitor my image folders for changes and upload any new files as soon as they’re created; this works ~85% of the time. It’s also set with a few schedules to check for changes every 3hrs, backing up everything on the phone the app can access; this catches anything the on-change/on-creation file detection misses, while also backing up more data than just my images. I have yet to see that fail after ~3 years.
Plex, Emby, and Jellyfin are all legal, and each has ways to serve live TV alongside your own locally stored content, and to DVR that live TV if you want. You’d just have to purchase a live TV subscription from your local provider (or go the pirate route ofc).
Emby has what they call ‘Emby Connect’ which is entirely optional and is basically a glorified DNS service.
It doesn’t proxy connections; it just passes the hostname on to the client. The server is still required to set up port forwarding or other routing, like Tailscale or a proxy on a VPS.
Emby Connect will let you sign into your local server using your emby.media credentials, but unlike Plex it’s completely optional and only works once explicitly linked to the local user of an Emby server.
Plex centralizes authentication at plex.tv
When a user wants to connect to a ‘private’ Plex server, they must first sign in to their plex.tv account, which then provides the auth token needed to log in to the user’s server (even if both the client and server are on the same LAN).
With this system, Plex can monitor and control every single connection to every Plex server, limiting access however they want. Even to your own local content.
Betcha she’s got a delicious pie waiting for you though…
That’s what I’d already done as per the OP, but it leaves Sonarr/Radarr wanting manual intervention for the ‘complete’ download that doesn’t have any files to import.
This comment prompted me to look a little deeper at this. I looked at the history for each show where I’ve had failed downloads from those groups.
For SuccessfulCrab: any time a release has come from a torrent tracker (I only have free public torrent trackers), it’s been garbage. I have, however, had a number of perfectly fine downloads with that group label whenever retrieved from NZBgeek. I’ve narrowed that filter to block the string ‘SuccessfulCrab’ on all torrent trackers, but allow NZBs. Perhaps there’s an impersonator trying to smear them or something, idk.
ELiTE on the other hand, I’ve only got history of grabbing their torrents and every one of them was trash. That’s going to stay blocked everywhere.
The ‘block potentially dangerous’ setting is interesting, but what exactly is it looking for? The torrent client is already set to not download file types I don’t want, so will it recognize and remove torrents that are empty (everything’s marked ‘do not download’)? I’m having a hard time finding documentation for it.
Awesome. Thanks you two, I appreciate the help. :)
Ok, I think I’ve got this right?
Settings > Profiles > Release Profiles.
Created one, set up ‘must not contain’ words, indexer ‘any’, enabled.
That should just apply globally? I’m not seeing anywhere else I’ve got to enable it in specific series, clients, or indexers.
To be perfectly honest, auto updates aren’t really necessary; I’m just lazy and like automation. One less thing I’ve gotta remember to do regularly.
I find it kind of fun to discover and explore new features on my own as they appear. If I need documentation, it’s (usually…) there, but I’d rather just explore. There are a few projects where I’m avidly following the forums/git pages so I’m at least aware of certain upcoming features, others update whenever they feel like it and I’ll see what’s new next time I happen to be messing with them.
Watchtower notifies me whenever it updates something so I’ve at least got a history log.
I’ve had Immich auto-updating alongside around 36 other Docker containers for at least a year now. I’ve very, very rarely had issues, and I just attach specific version tags to the things that have caused problems. Redis and Postgres, for example, have fixed version tags in both Immich and Paperless-ngx because upgrading their old databases takes manual work. The main projects, though, have always auto-updated just fine for me.
The reason I don’t really worry about it: Solid backups.
BorgBackup runs in the early AM, shortly before Watchtower updates almost all of my containers, so a backup of the entire system (not including bulk storage) is taken first.
If I were to get up in the morning and find a service isn’t responding (Uptime-kuma notifies me via email if it can’t reach any container or service), I’ll mess with it and try to get the update working (I’ve only actually had to do this once so far; the rest has updated smoothly). Failing that, I can just extract yesterday’s data from the most recent backup and restore a previous version.
Because of Borg’s compression and deduplication, consecutive backups of the same system can be stored in an absurdly small amount of space. I currently have 22 backups of ~532 GB each, going back a full year. They are stored in 474 GB of disk space. Raw, that’d be 11.8 TB.
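For reference, a nightly Borg run along those lines might look like this (repo path, excludes, and retention counts are illustrative assumptions, not my actual config):

```shell
#!/bin/sh
# Hypothetical nightly backup, run from cron shortly before Watchtower.
export BORG_REPO=/mnt/backups/server-repo

# Deduplicated, compressed snapshot of the system (bulk storage excluded)
borg create --stats --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d}' \
    / \
    --exclude /mnt/bulk \
    --exclude /proc --exclude /sys --exclude /dev --exclude /tmp

# Thin out old archives while keeping roughly a year of history
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12
```

Deduplication is what makes the storage math work: each archive only stores the chunks that changed since the previous one.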

https://github.com/nicolargo/glances
I have a dashboard as well (Homepage), but this is a nice look at system resource usage and what’s running, at a glance.
Uptime-kuma emails me when services or critical LAN devices are unreachable for whatever reason.
Major version changes for any software, from the OS right down to a simple notepad app, should be applied as sequentially as possible (11 > 12 > 13 > etc.). Skipping over versions is just asking for trouble, as it’s rarely tested thoroughly.
It might work, but why risk it?
An example: if 12 makes a big database change but you skip over that version, 13 may not recognize the databases left by 11, because 12 had the code to recognize and reformat the old database, and that code was deemed unnecessary and removed from 13.
Stuff like this is also why you can’t always revert to an older version while keeping the data/databases from the newer software.
If you have a static IP address, you can just use A records for each subdomain you want to use and not really worry about it.
If you do not have a static IP address, you may want to use one single A record, usually your base domain (example.com), then CNAME records for each of your subdomains.
A CNAME record is used to point one name at another name, in this case your base domain. This way, when your IP address changes, you only have to change the one A record and all the CNAME records will point at that new IP as well.
Example:
A example.com 1.2.3.4
CNAME sub1.example.com example.com
CNAME sub2.example.com example.com
You’d then use a dynamic DNS (DDNS) update tool, like ddclient or a small script hitting your DNS provider’s API, to automatically update that single A record when your IP changes.
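If your DNS host has an API, that update script can be tiny. A sketch using Cloudflare’s API as an example (ZONE_ID, RECORD_ID, and CF_API_TOKEN are placeholders you’d fill in from your own account):

```shell
#!/bin/sh
# Hypothetical DDNS updater for the single A record.
# Look up the current public IP, then rewrite the record to match.
IP=$(curl -fsS https://api.ipify.org)

curl -fsS -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"example.com\",\"content\":\"${IP}\"}"
```

Run it from cron every few minutes; since only the one A record changes, all the CNAMEs follow along automatically.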