I would replace it. Sometimes I push my luck and for minor or unexpected errors I just clear the error and re-add the drive, but this many errors is likely a solid sign.
I similarly only print useful things and tools, and my impression is that paid models are mostly decorative, whereas the most useful functional prints are usually free. I don't think I've ever paid for a model, but Cults has a lot of paid stuff if you want to look.
Luckily they are on 2.0.1 now, so there have been two stable versions by this point.
If you search for pfSense alias script, you'll find examples of updating aliases from a script, so you'd only need to write the part that gets the hostnames. Since it sounds like the hostnames are unpredictable, that part could be hard: the only way to get them on the fly is to listen for what hostnames are being resolved by clients on the LAN, probably by hooking into Unbound or similar. If you can share what the service is, it would be easier to tell whether there's a shortcut, like the example I gave where all the subdomains land in the same CIDR and at least one hostname is predictable (or where the subdomains share a CIDR with the main domain, in which case the script can just look up the main domain's CIDR). Another, possibly easier, alternative would be to find an API that lets you search the certificate transparency logs for the main domain, which would reveal every subdomain that has an SSL certificate. You could then load all those subdomains into the alias and let pfSense look up the IPs.
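As a rough sketch of that last idea (example.com is a placeholder domain, and this assumes crt.sh's JSON output plus curl and jq being available):
# list every name seen in the certificate transparency logs for the domain
curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u
Feed that list into whichever alias-update script you end up with and let pfSense handle resolving the hostnames to IPs.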
I would investigate whether the IPs of each subdomain fall into a particular CIDR or a unique ASN, because reacting to DNS lookups in real time will probably mean some lag between the first request and the routing being updated, compared to a solution that can proactively route all relevant CIDRs, or every CIDR assigned to an ASN.
I think the way people do it is by writing a script that gets the hostnames and updates the alias, then scheduling it in pfSense. I've also seen ASN-based routing done with a script, but that only works for large services that run their own AS (sketch below). If the service is large enough, it may predictably use IPs from the same CIDR, so if you spend some time collecting the relevant IPs, you might find that even when the hostnames are new and random they always resolve into the same pool. That's the lazy way I did selective routing to GitHub, since it was always the same subnet.
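For the ASN route, a hedged sketch of collecting the prefixes (AS64500 is a placeholder, and this assumes the service registers its routes in RADb and that the standard whois client is installed):
# print every IPv4 route object registered with that origin AS
whois -h whois.radb.net -- '-i origin AS64500' | awk '/^route:/ {print $2}' | sort -u
The resulting CIDRs can then be loaded into a Network(s) type alias.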
I don't have a Bambu so this is just a shot in the dark - but did you maybe use a multi-color profile for the single-color print? I seem to remember at one point getting purge towers on single-color prints when I forgot to switch back to the single-color profile.
The new beta timeline is sooo smooth! I finally don’t hate scrolling back to find a specific old photo. The scrolling performance feels completely native to me now.
I don’t think this is a hot take at all. This is the main reason why I’ve printed fewer than like 5 non-functional prints in my entire life, and most of them were requests from friends. I make lots of custom mounts, replacement parts, custom cases, shims and jigs, custom measuring tools, etc.
Open WebUI connected to Ollama can do this. In Open WebUI, if you edit any one of your messages, it forks the conversation, and you can flip between the branches using the arrows below the messages. If you click the three-dot menu and choose Overview, it opens a graph view that shows the conversation's branches visually.
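If you don't have that pairing set up yet, a minimal compose sketch (the image tags, port mapping, and OLLAMA_BASE_URL value are the commonly documented ones, but treat the details as assumptions to verify):
# docker-compose.yml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # point Open WebUI at the ollama service on the compose network
      OLLAMA_BASE_URL: "http://ollama:11434"
    ports:
      - 3000:8080
    depends_on:
      - ollama
volumes:
  ollama: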

Since you want to avoid supports and cleanup, what I would do is modify the model to flatten out the inside of the dome. You can do it easily in most slicers by adding a cylinder part and squishing it until it’s more like a disc.
A flat roof on the inside of the dome will make the slicer switch to bridging for that section, which won't be perfect but will be a lot better than stringing, especially if you play around with thick bridges.
I think you can use Immich external libraries for this. To be extra safe, you can mount your external images folder read-only by adding :ro to the Docker volume mount, so the container won't be able to modify anything.
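As a rough sketch of the volume part (the host path and the /external mount point are placeholders; external libraries can be mounted wherever you like and then registered in the Immich admin UI):
# excerpt from the immich-server service in docker-compose.yml
    volumes:
      # normal upload location stays read-write
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      # external library mounted read-only so the container can only index it
      - /mnt/photos:/external/photos:ro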
I use a .dev and it just works with Let's Encrypt. I don't do anything special with wildcards; I just let Traefik request a cert for every subdomain I use and it works. I use the TLS challenge, which runs on port 443, so I don't think HSTS or port 80 matters, but I still forward port 80 so I can serve an HTTP->HTTPS redirect, since stuff like curl and probably other tools might not know about HSTS.
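The relevant part of the Traefik static config looks roughly like this (resolver name, email, and storage path are whatever you chose; this is a sketch of the v2/v3 YAML layout rather than my exact file):
# traefik.yml (static configuration)
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        # send all plain-HTTP traffic to the HTTPS entrypoint
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.dev
      storage: /letsencrypt/acme.json
      # TLS-ALPN challenge runs on port 443, so port 80 isn't needed for issuance
      tlsChallenge: {}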
Gotcha thanks for the info! It looks like I would be fine with ocis or opencloud, but since my main use case and pain points are with document editing which is collabora, it probably wouldn’t change much besides simplifying the docker setup (I had to make a gross pile of nginx config stuff pieced together from many forum help posts to get the nextcloud fpm container to work smoothly). But it already works so unless it breaks there’s little incentive for me to change.
What are the apps that you would miss? I basically only use my NC as a Google drive and docs replacement, so all it has to do is store docx files and let me edit them on desktop or mobile without being glitchy and I’ve really wanted to consider OCIS or similar.
That second requirement seems hard to me because of how complex office suites are, but NC is driving me to my wit's end with how slow and error-prone it is, and how glitchy the NC office UI is (like glitches when selecting text, or randomly scrolling you back to the beginning).
Hmm, well it doesn't seem to be a problem with the docker-compose then, as best I can tell. I picked a random ext4 flash drive, replicated your setup with the UID and GID set, and it seems to work fine:
# /etc/fstab
/dev/sda1 /home/<me>/mount/ext_hdd_01 ext4 defaults 0 2
~/mount % ls -an
total 12
drwxr-xr-x 3 1000 1000 4096 Mar 27 16:22 .
drwx------ 86 1000 1000 4096 Mar 27 16:31 ..
drwxrwxrwx 3 0 0 4096 Mar 27 16:26 ext_hdd_01
~/mount/ext_hdd_01 % ls -an
total 6521728
drwxrwxrwx 3 0 0 4096 Mar 27 16:26 .
drwxr-xr-x 3 1000 1000 4096 Mar 27 16:22 ..
-rw-r--r-- 1 1000 1000 6678214224 May 5 2024 PXL_20240504_233345242.mp4
drwxrwxrwx 2 0 0 16384 May 5 2024 lost+found
-rwxr--r-- 1 1000 1000 5 Mar 27 16:27 test.txt
# ~/samba/docker-compose.yml
services:
  samba:
    image: dockurr/samba
    container_name: samba
    environment:
      NAME: "Data"
      USER: "user"
      PASS: "pass"
      UID: "1000"
      GID: "1000"
    ports:
      - 445:445
    volumes:
      - /home/<me>/mount:/storage
    restart: always
I was able to play the PXL.mp4 video from my desktop and write the test.txt file back.
Have you checked the logs with docker logs -f samba to see if there’s anything there?
Also, you could try accessing the drive from within the container using docker exec -it samba bash, then cd into /storage and see what happens.
I would suggest adding "UID" and "GID" environment variables to the container and setting them to the numeric user and group IDs that show in place of your name when you run "ls -an" inside the "mount" folder (they will probably be the same number).
For example, if inside your mount folder you see:
ls -an
total 12
drwx------ 2 1001 1001 4096 Mar 27 13:54 .
drwxr-xr-x 3 1000 1000 4096 Mar 27 13:51 ..
-rwx------ 1 1001 1001 0 Mar 27 13:54 hello.txt
-rwx------ 1 1001 1001 4 Mar 27 13:54 test.txt
Then set UID: 1001 and GID: 1001
I get the same error as you when I copy your docker-compose and try to access a folder owned by my user. When I add my user's UID and GID to the docker-compose (1001 for me), the error goes away.
Immich has a setting for automatic photo backup over WiFi; I use the Android app as a Google Photos replacement. You can choose as many folders on your phone as you want (I just do the camera roll), enable backup only over WiFi, and it backs up all the photos in original quality. I self-host the server on my Synology with a reverse proxy (I can't forward ports at my current place due to CGNAT), so I can access it from anywhere.
I believe the app is cross-platform, so the iPhone version should be identical to the Android one.
I've done a backup swap with friends a couple of times. Security wasn't much of a worry, since we connected to each other's boxes over SSH or WireGuard or similar and used tools that support encryption. The biggest challenge was that everyone in my self-hosting friend group prefers different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems, then set that up. The second challenge was ensuring uptime and that the remote access we set up for each other stayed up - and that's what killed the project, as we all eventually stopped maintaining the remote access and nobody seemed to care - so if I were to do it again, I would make sure all participants have alerts monitoring their shared endpoint.
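We didn't standardize on one tool, but as one example of the encrypted-over-SSH approach, restic can push a client-side-encrypted repository to a friend's box over SFTP (the host and paths below are placeholders):
# create the encrypted repo on the friend's machine
restic -r sftp:me@friends-box.example:/srv/backups/me init
# back up a local folder; data is encrypted before it leaves your box
restic -r sftp:me@friends-box.example:/srv/backups/me backup /home/me/important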
Are you using Sonarr/Radarr to do your renaming? I have mine set to naming patterns that put the release group at the end. They usually have no problem picking up release groups at the beginning (especially for anime, where that seems to be pretty common), so by the time a file is auto-imported, the filename has been normalized to a standard format with the release group at the end.
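For reference, a Sonarr naming pattern along those lines might look like this (these are the standard Sonarr naming tokens; adjust to taste):
# Settings > Media Management > Standard Episode Format
{Series Title} - S{season:00}E{episode:00} - {Episode Title} [{Quality Full}]-{Release Group}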