The autopilots in aircraft have predictable behaviors based on the data and inputs available to them.
This can still be problematic! If sensors are feeding the autopilot bad data, the autopilot may do the wrong thing for a situation. Likewise, if the pilot(s) do not understand the autopilot's behaviors, they may misuse the autopilot, or take actions that interfere with the autopilot's operation.
Generative AI has unpredictable results. You cannot make confident statements like "if inputs X, Y, and Z are at these values, the system will always produce this set of outputs".
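To make that concrete, here's a toy sketch (made-up token probabilities, not any real model's API): sampled generation draws each output from a probability distribution, so identical inputs can produce different outputs across runs.

    # Toy illustration of sampling-based generation (hypothetical token
    # probabilities, not a real model): the next token is drawn at random,
    # so the same prompt can yield a different output on every run.
    import random

    next_token_probs = {"climb": 0.5, "descend": 0.3, "hold": 0.2}  # made-up values

    def sample_next_token():
        tokens = list(next_token_probs)
        weights = list(next_token_probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Five runs with identical "inputs" can give five different answers.
    print([sample_next_token() for _ in range(5)])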
In the very short timeline of reacting to a critical mid-flight situation, confidence in the behavior of the systems is critical. A lot of plane crashes have "the pilot didn't understand what the automation was doing" as a significant contributing factor. We get enough of that from lack of training, differences between aircraft manufacturers, and plain old human fallibility. We don't need to introduce a randomized source of opportunities for the pilots to not understand what the automation is doing.
It started out as, "AI can make more errors than a human. Therefore, it is not useful to humans." Which I disagreed with.
But now it seems like the argument is, "AI is not useful to humans because its output is non-deterministic." Is that an accurate representation of what you're saying?
My problem with generative AI is that it makes different errors than humans tend to make. And these errors can be harder to predict and detect than the kinds of errors humans tend to make, because fundamentally the error source is the non-determinism.
Remember "garbage in, garbage out"? We expect technology systems to generate expected outputs in response to inputs. With generative AI, you can get a garbage output regardless of the input quality.
RE: the calculator screenshot - it's still reliable because the same answer will be produced for the same inputs every time. And the behavior, though possibly confusing to the end user at times, is based on choices made in the design of the system (floating point vs integer representations, rounding/truncating behavior, etc). It's reliable deterministic logic all the way down.
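For contrast, a minimal sketch of what "deterministic all the way down" means here (my own illustration, not the calculator from the screenshot): the floating-point result may look surprising, but it's the same surprising result for the same inputs, every run, on every conforming implementation.

    # IEEE 754 binary floats can't represent 0.1 exactly, so the sum looks
    # "wrong" -- but it's identically "wrong" on every run.
    for _ in range(3):
        print(0.1 + 0.2)  # 0.30000000000000004, every time

    # Whether a calculator displays that or rounds it to 0.3 is a design
    # choice about presentation, not a source of randomness.
    print(round(0.1 + 0.2, 2))  # 0.3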
Speaking of nuance, I find it rather unintuitive how it often seems like it's harder for people to have a nuanced opinion of other people than to have a nuanced opinion about a policy or software feature or specific situation.
You'd think given how complicated and faceted people are it would be especially easy to find both good and bad things to say about them, but online at least it almost seems to be the opposite: there's even less nuance when discussing people than there is discussing other topics. (Case in point.)
I'm not required to find the good in a person like Musk. I'm allowed to look at the many shitty things he's done and terrible opinions he expresses and say "that is a shit man, and I do not like him or trust him."
He has probably done something for someone somewhere that wasn't terrible. Does it counterbalance the rest? Not really!
There's that (possibly apocryphal) saying, "and Magda Goebbels made a great strudel." Just because a Nazi has a redeeming quality somewhere does not undo them being a Nazi.
You're not required to do anything. Consider though that if you refuse to see the good in people you disagree with, you have little room to complain when they refuse to see the good in you.
There's a lot of overlap between those two groups. Half of the country voted for Trump in the last election, a few of them are probably your neighbors. They control the presidency and a majority in the house and senate. You better hope they don't all decide they feel the same way about you that you apparently do about them.
The largest share of the eligible voting population was the 'did not vote' group.
I'm OK with calling fascists what they are. I'm also OK with recognizing a neighbor who has been consumed by fascist propaganda.
The fascist is not one that can be negotiated with. As Sartre said:
"They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words."
I can negotiate with the propaganda poisoned neighbor. There is no negotiating with the people who are running the fascist show. Giving a fascist the benefit of the doubt is playing into their strategy.
To follow topics on Bluesky, add feeds for those topics.
The "Following" tab is literally that - chronologically ordered posts and replies from accounts you follow. The "Discover" and "Popular with Friends" tabs give you algorithm-sourced stuff that is somewhat connected to who you follow.
When I click on the tab for the Game Dev feed, I see nothing but posts about game dev. When I click on the Astronomy feed, I only see telescopes and pictures taken with telescopes.
The reality is that microblogging, whether it be on X or Bluesky or Mastodon or even Facebook posts, will ALWAYS be lower signal, lower value than real, curated, or effort-filled content.
I like John Green a lot, including his vlogs that are just him speaking for half an hour about stuff he doesn't know, but I still do not go read what he posts on Bluesky, because it's as low quality, low signal, low intent, and low effort as comments here on HN.
It's just not useful. It's not a good use of my time to read random tweets from people.
When I first got a Twitter account in like 2010, I very quickly recognized it was not for me. If something is important, someone will take the effort to make an actual piece of real content about it, like a blog or video or essay or book. Hell, even a thorough Reddit post is better than microblogging.
If it's not worth going through that effort to get the message out to people, why should I consider that a valuable message?
It's emblematic of the past 20 years of social development, in my opinion. If the only thing stopping you from getting the word out about something super duper important is that writing a one-page essay is too hard, nobody really needs to care about that, because writing an essay is so easy we make children do it.
It's all noise. The signal doesn't go on twitter, it goes on real platforms where you might make money from good signal, or like, a freaking scientific paper, or the front page of a news org.
Earlier, I would have agreed that microblogging pales next to long-form blogging. But then so much long-form blogging moved to Substack, which has an overall culture as full of pathologies as microblogging: post regularly even if you don't have anything new to say, hustle a brand that can be monetized, accept a comments section with a broken UI full of people shamelessly trying to hustle their own brand. People doing long-form video content will often speak openly about how they feel forced to change their content in order to avoid being punished by the YouTube algorithm.
Personally, I'm pessimistic that there are many remaining sources of substantial discourse and discussion at all. I just pirate a lot more university-press books from Anna's Archive.
I noticed the same thing with Angela Collier. I love her videos, but her Bluesky posts have less subtlety than I would expect from someone of her intelligence and scientific training.
That's just what it's meant for: low-effort swipes, shitposting, out-of-context retweets, etc.
It is notable that in order to actually accomplish their "We want a platform where a celebrity says something and you instantly get that something", Twitter had to go through a lot of work and pain curating who the "celebrities" are. The alternative is everyone getting a waterfall of shit, because the vast majority of people do not have PR agencies between them and their tweet button, and do not have anything important or meaningful to say that is better said fast and short than long and nuanced. The entire point of microblogging is to eschew nuance.
That's absurd full stop.
Why would you ever want to know whatever low effort comment sparked thanksgiving dinner arguments at other people's thanksgivings?
> I love her videos, but her Bluesky posts have less subtlety than I would expect from someone of her intelligence and scientific training.
Please tell me which of "Water fluoridation is a well understood treatment, and people who are telling you it's bad for you are just lying", "<Knitting trivia>", "Target is doing poorly as a business right now", or "ICE doing gestapo things" is "unsubtle", or why any of that should be "subtle", which is a strange choice of word.
Status pages usually start as a human-updated thing because it's easy to implement.
Some time later, you might add an automated check that makes synthetic requests to the service and validates what's returned. And maybe you wire that directly to the status page so issues can be shown as soon as possible.
Then, false alarms happen. Maybe someone forgot to rotate the credentials for the test account and it got locked out. Maybe the testing system has a bug. Maybe a change is pushed to the service that changes the output such that the test thinks the result is invalid. Maybe a localized infrastructure problem is preventing the testing system from reaching the service. There's a lot of ways for false alarms to appear, some intermittent and some persistent.
So then you spread out. You add more testing systems in diverse locations. You require some N of M tests to fail, and if that threshold is reached the status page gets updated automatically. That protects you from a few categories of false alarms, but not all of them.
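A rough sketch of that quorum logic (the names and the threshold here are made up for illustration):

    # Hypothetical N-of-M rule: only update the status page automatically
    # when enough independent probe locations agree the service is failing.
    REQUIRED_FAILURES = 3  # N, out of however many probes (M) you run

    def service_is_down(probe_results):
        """probe_results: one boolean per probe location, True = check passed."""
        failures = sum(1 for ok in probe_results if not ok)
        return failures >= REQUIRED_FAILURES

    # A single flaky probe (expired test credentials, a local network blip)
    # is no longer enough to flip the status page on its own.
    print(service_is_down([True, True, False, True, True]))    # False
    print(service_is_down([False, False, False, True, True]))  # True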
You could go further to continue whacking away at the false alarm sources, but as you go you run into the same problem as with service reliability, where each additional "9" costs much more than the one that came before. You reach a point where you realize the cost of making your automatic status page updates fully accurate becomes prohibitive.
So you go back to having a human assess the alarm and authorize a status page update if it is legitimate.
FYI, this kind of garbage truck has been around for >50 years [0], so any wide-scale impact on employment from this technology has likely already settled out.
The waste collection companies in my area don't use them because it's rural and the bins aren't standardized. The side loaders don't work for all use cases of garbage trucks.
> In 1969, the city of Scottsdale, Arizona introduced the world's first automated side loader. The new truck could collect 300 gallon containers in 30 second cycles, without the driver exiting the cab
The school bus' stop sign was extended and had red lights flashing. With the proximity to the intersection, it's most appropriately treated as an all-way stop.
Regardless of whether the bus' stop sign applies to cross streets, at some point in the turn the car ends up parallel with the bus, and the sign would apply at that point.
Also, you're blind to anyone who may be approaching the bus from the opposite side of the intersection.
This is an affront to the rule of law and equal protection under the law. It is not okay for congress or the courts to acquiesce. We are supposed to be a nation of laws.
Congress and the courts are derelict in their responsibility to honor the rule of law.
No. No, it does not.