Duplicate post, please remove.
This will not be a fork of OpenRGB. While I plan to take a huge chunk of it (the reverse-engineered device protocols)
How about opening an issue on OpenRGB asking what you need and why? Maybe it can be abstracted away and made headless, and that architecture change could then be useful for them and for other projects too.
You can do that part yourself and let others use the new tool as their dependency, but it means you’ll have to keep it up to date against OpenRGB itself, as it keeps supporting more devices just because of its popularity.
Why fork OpenRGB rather than make it a dependency?
I’ll be honest: because people are ignorant.
They tried Debian once a few years ago, it didn’t have the exact driver they wanted out of the box, and they gave up. They think that’s the normal and current experience.
Reality is I use Debian every day on my servers, SBCs, and laptop, but also on my desktop. I’ve been gaming on it since the first day of the installation and it just worked. Sure, I had to follow https://wiki.debian.org/NvidiaGraphicsDrivers step by step. It took me maybe 15 min and 1 reboot, but since then NO tinkering, 0, and I’m gaming nearly daily, from indie to AAA, from 2D to 3D to VR. As I mentioned in another reply, sure, I might not have perfectly optimized all my performance but I don’t give a shit, I’m just gaming!
Also, as I mentioned elsewhere, the “cutting edge” argument is bullshit. You can have a Debian installation, stable, and cherry-pick the packages you want. Heck, you can even pull the software you want from a forge, build it, and run it. That’s how “bleeding edge” it can be. Of course you can also use a VM (with GPU passthrough), distrobox, AppImage, Nix (different from NixOS), etc., so there are many, many ways to make sure you use the absolute latest without breaking your system.
TL;DR: Debian does not position itself as a gaming distribution. A lot of gamers want to optimize everything for gaming and consequently assume a specialized distribution will do better. Meanwhile people who JUST want to play can definitely do so on Debian.
Switch workplace.
There are countless ways to bypass that (e.g. https://docs.linuxserver.io/images/docker-webtop/ running on a server), but honestly, if a workplace does not value your expertise enough to let you hone your own tools, they don’t really value you as an employee.
lol, sorry, but what world do you live in? NO OS “just works”.
I’m sorry but this is such a trope. I watched someone using an up-to-date iOS phone. That thing is LOCKED down to no end, and countless people claim that Apple is some kind of UX genius… well, watch somebody trying to do anything as complex as watching a video on it and it’s a damn struggle.
Sorry for going on a rant here, but the very concept is a lie. It’s like the claim that Windows is easier to use: it absolutely is not, BUT people have trained, at school (sigh) or at work, on how to use it. They somehow “forget” that they went through hours or even days of training, and somehow they believe it feels “natural”. That’s entirely dishonest, but why do I insist on this so much? Because it’s unfair to then compare Linux distributions to something that does not exist!
What “just works”, but STILL is not perfect or flawless, is SteamOS on the Steam Deck, not due to any “magic” from Valve but rather because:
and as soon as one starts to tinker with SteamOS on the Steam Deck by replacing parts, adding USB-C devices, removing the r/w restriction on the OS, etc., then again “just works” becomes “worked at some point”.
You’d have settings for when to stop seeding, e.g. a 1:1 ratio minimum, the duration of the track xN, etc., with a reasonable default. Suggestions welcome.
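To make that concrete, here’s a hypothetical sketch of what such settings could look like (every key name and default here is invented for illustration, TOML picked just as an example format):

```toml
# Hypothetical "stop seeding" settings (all names invented).
[seeding]
min_ratio = 1.0            # keep seeding until uploaded >= 1x downloaded
max_duration_factor = 10   # ...or until seeded for 10x the track's duration
stop_condition = "any"     # stop when "any" (or "all") of the limits are hit
```

A couple of overridable knobs like these, with sane defaults, would cover both the 1:1 ratio and the xN-duration cases.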
I would recommend against a new player when scriptable ones like vlc and mpv already exist.
Instead, what I would do is a plugin for either, possibly repackaged as its own player (if somehow installing the script itself is too much for some), for which the script would
Because they literally wrote the book on lock-in https://fabien.benetou.fr/ReadingNotes/InformationRules and they tried with all their might to stop free software https://en.wikipedia.org/wiki/An_Open_Letter_to_Hobbyists so besides the money and power, they have been strategically at it for decades. Dependency is deep in the product.
Funny I have the opposite experience.
I use KDE Plasma, Firefox, konsole, etc and sometimes, no idea when and why, I just pick a file then drop it somewhere else, including ON the terminal… and it works?! Like it brings the full path for that file and then I can compose with CLI tools, amazing!
I’m quite used to the terminal so I rarely use drag&drop (mv, cp, scp, rsync, etc. just work), but when I do, I’m actually often positively surprised that totally different software made with different interaction paradigms (e.g. GUI vs CLI) works well together. Overall I think https://specifications.freedesktop.org/ is quite impressive.
Gosh… wish I could upvote twice. Feels like we just gave a low-cost (for now) chainsaw to anybody who wished they had a pocket knife, then said “There, you can cut anything with that!”, and somehow they forgot they can just buy some OK stuff from Ikea or from a nice artisan. The urge to “build” anything without taking a minute to learn what already exists out there (not even the state of the art) and “fix” it by “personalizing” it is nuts.
Let’s not “vibe code” anything when reliable solutions already exist!
Neat, made me curious, seems to rely on https://ffmpeg.org/ffmpeg-filters.html#scdet-1
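For anyone wanting to poke at that filter directly, here’s a minimal shell sketch (assuming a reasonably recent ffmpeg build; the `lavfi.scd.*` metadata keys are the ones the scdet docs describe, your build may differ):

```shell
# Synthesize a 2s clip with a hard cut (test pattern -> solid red),
# let scdet score each frame, and print the scene-change metadata.
ffmpeg -hide_banner \
  -f lavfi -i "testsrc=duration=1:size=320x240:rate=25" \
  -f lavfi -i "color=red:duration=1:size=320x240:rate=25" \
  -filter_complex "[0:v][1:v]concat=n=2:v=1,scdet=threshold=8,metadata=print" \
  -an -f null - 2>&1 | grep -m 5 "lavfi.scd"
```

The frame at the cut should show a high `lavfi.scd.score`, which is presumably what the project linked above keys off.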
It’s all just speculation, both what you suggested and what others said.
You are on the right path with your screenshots but you might not be measuring the right thing.
So, you need a (paper) notebook to record objectively (not your biased feeling assuming a pattern that might not exist) when it happens and for how long. Only then can you backtrack to WHAT causes it. Sure, you can have some hypotheses (update related, screen attach/detach, BIOS, RAM, etc.) but those should NOT drive your data acquisition.
So, your htop is nice, but AFAICT it’s just about CPU and memory; it says nothing about e.g. IO, so consider iotop instead, in particular if one process is doing some indexing (e.g. locatedb). Theoretically, if it’s not CPU/memory (which you say isn’t the culprit), then that basically just leaves IO. That can again be indexing, or some heavy process bottlenecked on disk access, but it can also be a bug, e.g. BT pairing/unpairing that happens faster than you can notice.
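If iotop isn’t available, a rough sketch of the same idea using only /proc (Linux; note that /proc/PID/io is typically only readable for your own processes unless you run as root):

```shell
# List processes by total bytes read+written so far (cumulative, not a rate).
# Run it twice a few seconds apart and diff to spot who is hammering the disk.
for p in /proc/[0-9]*; do
  [ -r "$p/io" ] || continue
  awk -v name="$(cat "$p/comm")" \
      '/^read_bytes|^write_bytes/ { t += $2 } END { print t, name }' "$p/io"
done | sort -rn | head
```

Crude, but it’s enough to catch an indexer or a runaway logger red-handed when the slowdown hits.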
Think of this as a fun investigation that leads you to better understanding of your setup, good luck.
> play around with local LLMs and image upscaling
FWIW I did that for a bit https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence and I stopped doing it. I did it mostly from FOMO, thinking that maybe, truly, it wasn’t just hype. Well, I stopped. Sure, most of those (then state-of-the-art) models are impressive. Yes, there is radical progress on all fronts, from software to hardware to the mathematics underpinning ALL this… and yet, what is ACTUALLY useful in there? IMHO not much.
Once you have tried the models and confirmed that yes, indeed, they make “something”, the useful results are so rare that the whole endeavor wasn’t worth it for me. I would still do it again in retrospect because it helps to learn, but… honestly, NOT doing it and letting others benchmark, review, etc., or “just” spending 10 bucks on a commercial model, will save you a LOT of time.
So… do what you want but I’d argue gaming remains by far the best usage of a local GPU.
Well, I get why you stick to a hardware device you like… but honestly, that thing is 15 years old. You can get something better and cheaper delivered to your door tomorrow.
I personally went down a similar path while discovering https://www.rockbox.org/ was still a thing, looking for old iPods or Archos players I could refurbish, checking the 2nd-hand market, etc. As much as it pains me to say, unless you are a collector it’s not “worth” it. You can get something ridiculously smaller, with more memory, more features, etc., for the price of a meal.
IMHO it’s better to get rid of Windows by purchasing new hardware that is genuinely interoperable by supporting standards.
Ideally you’d check something like https://www.hanselman.com/blog/how-to-update-the-firmware-on-your-zune-without-microsoft-dammit but it might be more work than you want to put in. Maybe your local hackerspace could help though.
My point finally is that freedom is quite important and feeling trapped daily is not worth ~$50.
Nothing you (nor I) know of, but that doesn’t mean none exists. I can’t evaluate it, but https://www.openimagedenoise.org/ is published by Intel and still maintained in 2026, so maybe it’s good.
Right, then I can’t help you.
To clarify for others though, as I guess I wasn’t clear based on the downvotes: I’m not suggesting a single piece of software is a viable alternative to Lightroom. Rather, I’m saying Lightroom itself is a collection of algorithms dedicated to photo editing, wrapped in a UX one is familiar with. On the other hand, ImageMagick (just to pick one I know relatively well) is a set of command-line tools for image editing. It’s mostly used as a backend, with other tools as the interface. I imagine there are plenty of alternatives to ImageMagick too, probably some that include SOTA photo-editing algorithms from arXiv, maybe some even with a GUI, but my point again is to reconsider the workflow to understand how the tools one relies on actually work.
So, to hopefully express myself better this time: ImageMagick + Gimp + Krita + some script in a GitHub repository based on an arXiv publication + I don’t know what + … all together or in part might be better for some people, but no, I don’t know an all-in-one open-source alternative that covers ALL needs without them being expressed first.
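As a tiny illustration of the “ImageMagick as CLI backend” point, here’s a hypothetical batch step in shell (the specific flag values are arbitrary, tune to taste):

```shell
# Hypothetical batch "develop": normalize levels, boost saturation slightly,
# export web-sized copies into out/. Adjust the values to your own workflow.
mkdir -p out
for f in *.jpg; do
  [ -e "$f" ] || continue   # skip the loop if no .jpg files match
  convert "$f" -auto-level -modulate 100,110 -resize '2048x2048>' \
    -quality 90 "out/$f"
done
```

One command replacing one slider, yes, but scripted over a whole shoot, which is exactly the kind of workflow rethink I mean.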
Not.
Now, to be slightly more helpful (apologies for the provocation), I suggest you consider alternatives to Lightroom. I know you will instantly receive countless comments on how alternatives are just nowhere near as good as Lightroom… and that’s OK. IMHO it’s OK because I bet YOUR usage of Lightroom isn’t the usage of others. So… I recommend you forget the brand “Adobe” or the product “Lightroom” and instead list here the actual functions of a tool that you need.
This way, by listing actual needs rather than a bundled product with branding and a specific UX, you go back to the root of your problem, namely WHY you need such a piece of software in the first place.
Sure, you might end up with an entirely different workflow. Sure, it will probably be absolutely alien at first… but so was learning that piece of software in the first place. Right now you do have the concepts, so replacing one click with a command-line tool, or 1 piece of software with 10, is IMHO acceptable. What you will hopefully have in the end is YOUR workflow, even better adapted than what you had before. It will be “weird” and maybe nobody else will get it, but for you it will be exactly what you need.
I have genuinely no idea how that could work.
I believe I get the genuine intent (protecting children), but I have so far never encountered any device or software (or combination of both) whose user authentication couldn’t be bypassed relatively easily.
The closest I’ve tried are (expensive) XR headsets like the Apple Vision Pro or the Microsoft HoloLens, both thanks to eye tracking. Basically, on these you have to validate you are who you claim to be when you put the headset on. If you remove it and put it back on (or on someone else’s head), you have to do it again. Nobody else (unless you explicitly share) can then see what you are looking at.
Every other device I’ve seen, including mobile phones with banking apps, typically asks you to authenticate and then assumes that you are the one who keeps using the device. Meanwhile, anybody else can grab the device from your hand and be “you”. Typically, specific actions (e.g. a password change) do require authenticating again, but “normal” usage does not.