🇨🇦
That’s what I’d already done as per the OP, but it leaves Sonarr/Radarr wanting manual intervention for the ‘complete’ download that doesn’t have any files to import.
This comment prompted me to look a little deeper at this. I looked at the history for each show where I’ve had failed downloads from those groups.
For SuccessfulCrab: any time a release has come from a torrent tracker (I only use free public torrent trackers), it's been garbage. I have, however, had a number of perfectly fine downloads with that group label whenever they were retrieved from NZBgeek. I've narrowed that filter to block the string 'SuccessfulCrab' on all torrent trackers but allow NZBs. Perhaps there's an impersonator trying to smear them or something, idk.
ELiTE, on the other hand, I've only ever grabbed as torrents, and every one of them was trash. That's going to stay blocked everywhere.
The 'block potentially dangerous' setting is interesting, but what exactly does it look for? The torrent client is already set to not download file types I don't want, so will it recognize and remove torrents that end up empty (everything's marked 'do not download')? I'm having a hard time finding documentation for that.
Awesome. Thanks, you two, I appreciate the help. :)
Ok, I think I’ve got this right?
Settings > Profiles > Release Profiles.
Created one, set up 'must not contain' words, indexer 'any', enabled.
That should just apply globally? I’m not seeing anywhere else I’ve got to enable it in specific series, clients, or indexers.
To be perfectly honest, auto updates aren’t really necessary; I’m just lazy and like automation. One less thing I’ve gotta remember to do regularly.
I find it kind of fun to discover and explore new features on my own as they appear. If I need documentation, it’s (usually…) there, but I’d rather just explore. There are a few projects where I’m avidly following the forums/git pages so I’m at least aware of certain upcoming features, others update whenever they feel like it and I’ll see what’s new next time I happen to be messing with them.
Watchtower notifies me whenever it updates something so I’ve at least got a history log.
I’ve had Immich auto-updating alongside around 36 other Docker containers for at least a year now. I’ve very rarely had issues, and I just pin specific version tags on the things that have caused problems. Redis and Postgres, for example, have pinned tags in both Immich and Paperless-ngx because they take manual work to upgrade the old databases. The main projects, though, have always auto-updated just fine for me.
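To illustrate what I mean by pinning, a rough compose sketch (the service names and tags here are just examples; use whatever your stack actually defines):

```yaml
services:
  paperless-db:
    image: postgres:15        # pinned: the database needs a manual migration between major versions
  paperless-redis:
    image: redis:7            # pinned for the same reason
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest   # left floating so Watchtower keeps it current
```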
The reason I don’t really worry about it: Solid backups.
BorgBackup runs in the early AM, shortly before Watchtower updates almost all of my containers, so a backup of the entire system (not including bulk storage) is taken first.
If I get up in the morning and find a service isn’t responding (Uptime-Kuma emails me if it can’t reach a container or service), I’ll mess with it and try to get the update working (I’ve only actually had to do that once so far; everything else has updated smoothly). Failing that, I can just extract yesterday’s data from the most recent backup and restore the previous version.
Because of Borg’s compression and deduplication, successive backups of the same system can be stored in an absurdly small amount of space. I currently have 22 backups of ~532 GB each, going back a full year, stored in just 474 GB of disk space. Raw, that’d be ~11.8 TB.
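For anyone curious, the shape of it is roughly this (the repo path, included/excluded paths, and retention below are invented for illustration):

```sh
# Runs from cron in the early AM, before Watchtower's update window.
borg create --stats --compression zstd \
    /mnt/backup/borg-repo::system-{now:%Y-%m-%d} \
    /etc /home /opt/docker \
    --exclude /mnt/bulk        # bulk storage is left out, as noted above

# Thin out old archives so roughly a year of history sticks around.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12 /mnt/backup/borg-repo
```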

https://github.com/nicolargo/glances
I have a dashboard as well (Homepage), but this is a nice look at system resource usage and what’s running, at a glance.
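If anyone wants to try it, it runs fine straight in a terminal or as a small web dashboard (pip shown here as one install option; distro packages and a Docker image also exist):

```sh
pip install "glances[web]"   # the web extras are only needed for -w mode
glances                      # full-screen terminal view
glances -w                   # same info served as a web page (default port 61208)
```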
Uptime-kuma emails me when services or critical LAN devices are unreachable for whatever reason.
Major version changes for any software, from the OS right down to a simple notepad app, should be applied as sequentially as possible (11 > 12 > 13 > etc.). Skipping over versions is just asking for trouble, as it’s rarely tested thoroughly.
It might work, but why risk it?
An example: if 12 makes a big database change but you skip over that version, 13 may not recognize the databases left by 11, because 12 had the code to recognize and convert the old database format while that code was seen as unnecessary and removed from 13.
Stuff like this is also why you can’t always revert to an older version while keeping the data/databases from the newer software.
I ran a setup like this for a couple of years. It’s super handy being able to literally press the power button remotely, especially when/if the system hangs and becomes unresponsive.
If you use an RPi as the third device, you can use one of the GPIO pins to drive a transistor wired in parallel with the server’s power button. The Pi can then (re)start the server on command.
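A minimal sketch of the software side, assuming an RPi.GPIO setup where the transistor is switched by driving the pin high (the pin number and active level depend entirely on your wiring):

```python
import time
import RPi.GPIO as GPIO

POWER_PIN = 17                      # BCM numbering; whichever free GPIO drives the transistor

GPIO.setmode(GPIO.BCM)
GPIO.setup(POWER_PIN, GPIO.OUT, initial=GPIO.LOW)

GPIO.output(POWER_PIN, GPIO.HIGH)   # transistor conducts = power button "pressed"
time.sleep(0.2)                     # hold it about as long as a real button press
GPIO.output(POWER_PIN, GPIO.LOW)    # "released"

GPIO.cleanup()
```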
It’s clunky, but I can open files in Firefox by using a file browser app (I use X-plore), selecting ‘open with’, then selecting Firefox. Sometimes it’s not in the list, but there’s a selector for what type of file (text, video, audio, ‘*’); ‘*’ lists all the apps.
Sometimes stuff still refuses to open, but things like PDFs and HTML files usually work.
You have to explicitly enable directory indexing, but then it will automatically generate simple HTML pages listing directory contents.
https://nginx.org/en/docs/http/ngx_http_autoindex_module.html
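A minimal example of what enabling it looks like (paths here are placeholders):

```nginx
location /files/ {
    root /srv;        # a request for /files/foo is served from /srv/files/foo
    autoindex on;     # generate the directory listing pages
}
```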
FolderSync selectively syncs files/folders from my phone back to my server via SSH. Some folders are on a schedule, some monitor for changes and sync immediately; most are one-way, some are two-way (files added on the server sync back to the phone as well as the phone uploading to the server). There’s even one that automatically drops files into paperless-ngx’s consume folder for automatic document importing.
From there, BorgBackup makes a daily backup of the data, keeping historical backups for years with absolutely incredible efficiency. I currently have 21 backups of ~550 GB each; Borg stores all of that in 447 GB of total disk space.
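You can see the same kind of numbers from Borg itself, e.g.:

```sh
borg info /path/to/repo   # the "All archives" line shows original vs. compressed vs. deduplicated size
```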

If you’ve got an NVIDIA GPU and drivers installed, you’ve probably got ‘nvidia-smi’ already, which will show you utilization and which processes are using the GPU.
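For a rough live view you can just wrap it in watch:

```sh
nvidia-smi              # one-off snapshot: utilization, memory, and the processes on the GPU
watch -n 1 nvidia-smi   # refresh every second
```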
You could set up a user account like the share you’re describing. There’s a setting to prevent the user from changing their password.
Just pass out those credentials to anyone you want to collaborate with; they don’t need their own individual accounts.
I use https://filebrowser.org/ for this.
Nice lightweight filebrowsing/sharing with user management. Users can have their own dedicated directories, or collaborate.
You can also create share links that allow anyone with the link to view/download files. Optionally password protected.
Here’s a demo you can mess with: https://demo.filebrowser.org/ User: demo Pass: demo
Most of my web services are behind my VPN, but there are a couple I expose publicly for friends/family to use. Things like Emby, Ombi, and some generic file sharing with File Browser.
One of these has a long custom path set up in nginx which, instead of proxying to the named service, asks for HTTP basic auth credentials. Use the correct host+path, then provide the correct user+pass, and you’ll be served an OpenVPN configuration file which includes an encrypted private key. Decrypt that and you’ve got backdoor VPN access.
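In nginx terms it’s roughly this shape (the path, htpasswd location, and directory below are invented for illustration):

```nginx
location /replace-with-something-long-and-random/ {
    auth_basic           "private";
    auth_basic_user_file /etc/nginx/.htpasswd;    # the user+pass check
    alias                /srv/vpn-escape-hatch/;  # holds the .ovpn with its encrypted key
}
```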
I keep Vaultwarden behind a VPN so it’s not exposed directly to the net. You don’t need a constant connection to the server; that’s only needed to add/change vault items.
This does require some planning though; it’s easy to lock yourself out of your accounts when you’re away, if you don’t incorporate a backdoor of some kind to let yourself in in an emergency. (lost your device while away from home for example)
My normal VPN connection requires a private key and a password (stored in my vault) to decrypt it. I’ve set up a method for retrieving a backup set of keys using a series of usernames, emails, passwords, and undocumented paths (these are the only passwords I actually memorize), allowing me to reach Vaultwarden, where I can retrieve my vault with the data needed to log in to everything else properly.
Betcha she’s got a delicious pie waiting for you though…