I agree. This seems so similar to SO many games that were around 40 years ago. It's Mario, and a bit of Donkey Kong. Why are people going nuts over this? I genuinely want to know.
I find it amazing that ARM hasn't come up with an auto-discovery mechanism for their platforms. They're the only ones in a position to do it, and they've done exactly nothing about it. The device tree junk we have to live with is here to stay.
The U-Boot situation is really no better. Nearly every vendor ships with U-Boot, and it's always, always a fork that never gets updated. ARM should have taken that situation in hand decades ago, too.
This policy, and most of the comments on here, miss the point entirely.
Open source is supposed to be primarily a labor of love, scratching your own itch, done for pride and the sense of accomplishment that comes from doing it right.
Using an AI chatbot achieves none of that. It's an admission that you don't have the skills, and that you don't want to learn them. Using AI to contribute to an open source project makes NO sense whatsoever. It's an admission of incompetence, which should be profoundly embarrassing to you.
I realize I'm screaming into the void here on HN, where so many people have bought into the "AI is just a tool" bullshit. But I know I'm not alone, even here. And I know that e.g. Linus Torvalds sees right through the bullshit as well.
It's simply the next step in the 8-interview nonsense that seems to be common at least in tech: it selects for people willing to put up with huge amounts of corporate nonsense. Competence at the job is entirely irrelevant.
The article is a little wide-eyed about how new this kind of censorship is.
Belgium broadly has a duopoly, with the first two ISPs listed having the vast majority of the market. Both of them have been blocking pirate sites for decades, with at least one of them actually resolving and blocking by IP address, not just DNS blocking.
Needless to say, both have video on demand services to protect.
That's a bit of a myth. Lots of electronics can still be repaired just fine. Check out some electronics repair channels on YouTube. I've repaired plenty of electronics myself.
Replacing instead of repairing really is mostly a cultural thing.
It's not (only) a cultural thing. It's primarily an economic one! Getting a $THING fixed typically requires:
1) Getting a technician out to identify the fault
2) Ordering the required replacement part and waiting for it to be shipped
3) Getting the technician out again to actually fix the fault
The technician needs to be sufficiently skilled, and wants vacations and sick days. We can trade off utilisation against travel time somewhat, but I would be shocked if a single turn-out can reliably come in below an hour. Even if we are optimistic and calculate with 100 bucks / hour, that makes the cheapest possible repair 200 bucks. That means it's _strictly_ cheaper to immediately throw away everything below 200. And for everything more expensive we still need to beat opportunity costs and the risk of fixes that don't actually fix the problem.
If you do these calculations in earnest, I'd be shocked if it's _economically_ worth repairing anything below 500 bucks.
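A rough back-of-the-envelope version of that math (the figures are just the assumptions above, nothing measured):

```python
# Back-of-the-envelope repair economics, using the assumptions above.
HOURLY_RATE = 100      # assumed all-in technician cost per hour
VISITS = 2             # one visit to diagnose, one to fit the part
HOURS_PER_VISIT = 1    # optimistic lower bound per turn-out

labor_floor = HOURLY_RATE * VISITS * HOURS_PER_VISIT
print(f"Minimum labor cost of a repair: {labor_floor} bucks")  # -> 200

# Anything that costs less than the labor floor is strictly cheaper to
# replace; above it, part cost, shipping delay, and the chance the fix
# fails push the practical break-even higher (the guess above is ~500).
```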
Well, maybe. In cases where they don't either reduce the electronics to a single custom IC or pot everything in resin, it might be possible from a technical perspective. It still probably isn't worth it from an economic perspective, but it might be if you value the environment over cost.
This has long been considered one of Goodreads' big advantages: its massive publication database, ISBN and all. But recently Anna's Archive has been making quite a bit of noise about their considerable ISBN database:
I didn't realize how bad it was until I recently looked into doing a Python Qt application. There are two competing Python language bindings for Qt: PyQt and PySide. The latter appears more free and active, so that seemed the way to go.
How do you even make your stuff dependent on/broken with specific Python versions? I mean how in hell?
The fact that venv is so widely used in Python was always an indication that not all was well dependency-wise, but it doesn't seem like it's optional anymore.
My guess would be that because it's an integration with a sprawling native library, there is a lot of code which touches Python's own C API. That is not entirely stable, and it has a lot more source stability than binary stability:
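For illustration (a rough sketch, assuming the third-party `packaging` library is installed): a compiled extension wheel is tagged with the exact interpreter and ABI it was built against, e.g. cp311-cp311, and pip on any other Python version will refuse it unless it was built against the stable abi3 ABI. You can list what your interpreter will accept:

```python
# List the wheel tags this interpreter will accept (requires the
# third-party "packaging" library: pip install packaging).
from packaging import tags

for tag in list(tags.sys_tags())[:5]:
    print(tag)  # e.g. cp312-cp312-manylinux_2_35_x86_64 on CPython 3.12
```

A binding that reaches deep into the C API often can't restrict itself to the stable ABI, which is presumably why something like PySide needs a freshly built wheel for every new CPython release.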
All I see is that newer library versions drop support for old Python versions (seems natural, unless you have unlimited resources). For example, if you need Python 2 support, use correspondingly old versions.