Do we know if this violence is politically motivated yet? (Other common motivations are mental health issues, paranoia, revenge, desire for fame, etc.) Of course it seems likely, but it also seems premature to jump to trying to use this as proof of a particular personal position.
I definitely believe that people should be more understanding of each other, and less quick to jump to insults and othering. But we know so little about this situation that being so confident it was caused by speech seems extreme.
I am also aware that a lot of the political violence of the last few years ended up not being motivated by the reasons one might naturally expect.
> Do we know if this violence is politically motivated yet?
How many long-range rifle shot assassinations do you know of that were not politically motivated? Jilted lovers and such don't do that. In context it's hard to take this assassination as anything other than politically motivated.
I guess that largely depends on how one qualifies "politically motivated". By some definitions it's easy to include any of what you listed as also part of a politically motivated attack, by a narrower definition one could just as easily choose to exclude them. E.g. whether an attacker is paranoid is orthogonal to whether the attack involved the victim's political views/activity in some way.
At the root I agree in principle though. It's, for example, still possible he picked a bad fight with an unstable individual in a bar last night (over something not politically related) and they followed him to the event he was speaking at to shoot him. I'm not as convinced I've seen that kind of thing happen "a lot", but it's true we don't have after-the-fact confirmation yet.
My basic guess would be that it's Epstein-related, which is still politically motivated in some sense, but "killed him for protecting pedophiles" is quite different from "killed him for being right wing".
Isn't it more likely that this is a false flag operation designed to distract from the Epstein birthday card signed by Trump? The timing is suspicious and there's certainly a lot of bandwidth given over to a single shooting, compared to the school shooting on the same day (three shot).
Kirk would seem like an ideal target as he has a high media profile and is not involved in running the government. I would guess that the aim is to promote civil war and thus provide an excuse for martial law.
You start your comment saying we should avoid making apocalyptic statements and end it by saying "the cycle is going to destroy our society".
My conclusion is that you don't mind making apocalyptic statements about actions you think are dangerous to society, which sits uncomfortably with your asking other people not to.
I'd say the appropriate read there is to slip the word "unjustified" into a few key slots. The view is nearly impossible to avoid in context. How do you see society surviving if the prevailing view is that anyone with a different belief is trying to bring on the end times? To the point where assassinating political opponents is justified?
It would bring on the end of a society. It might well happen in the US case, they've been heading in a pretty dangerous direction rhetorically. If we take the Soviet Union as a benchmark they probably have a long way to go but that sort of journey seems unnecessary and stupid.
Well, yes. We expect most religious people to put up with society at large damning souls to an eternity of torment and whatever. And people are forever pushing economic schemes that result in needless mass suffering. Not to mention that for reasons mysterious warmongers are usually treated with respect and tolerance in the public discourse.
An idea being "harmful" isn't a very high bar, we have lots of those and by and large people are expected to put up with them. Society is so good at overlooking them it is easy to lose track of just how many terrible beliefs are on the move at any moment. Someone being a threat to democracy isn't actually all that close to the top of the list, although moving away from democracy is generally pretty stupid and a harbinger of really big problems.
> How do you see society surviving if the prevailing view is that anyone with a different belief is trying to bring on the end times?
The point I was trying to make is: this is not what’s happening. It’s not “anyone with a different belief”. But some people, Kirk included, literally advocated for, e.g., stoning gay people. That’s not a reasonable position we can just compromise on. That’s reprehensible dehumanization.
I think they're politely asking for the far left to stop with the language inflation. Use words with appropriate and proportionate meanings. Do not try to gradually be more and more dramatic and impactful.
It's not clear that "existential" threat and "destruction of society" are the same. A society can be "destroyed" via a lapse in the social contract, turning it into a "society" of a different nature, or a non-social population.
> My conclusion is that you don't mind making apocalyptic statements about actions you think are dangerous to society, which sits uncomfortably with your asking other people not to.
This is a nonsense argument. It is possible that constantly making apocalyptic statements can result in an apocalypse, and saying that people should stop doing that is not contradictory.
The words you use matter. If Trump is an existential threat to democracy, he should be assassinated. If you're not advocating for murderous escalation, then stop using those words (for example).
> If Trump is an existential threat to democracy, he should be assassinated.
Who/what is defining assassination as a reasonable response to that threat, who/what maintains the list of words which can replace "democracy" in that section, and what happens when someone disagrees with the maintainer of that list?
Those are all great questions, and why the point under discussion is whether or not we should choose our words more carefully and stop making apocalyptic predictions.
I wholeheartedly disagree - we need to be less concerned with who might say something and more concerned with how we teach society to react to it. Whether or not someone is making apocalyptic predictions should not define our ability to hold back from assassinating.
I'd agree there are aspects of how we say things which can reinforce how to react to them, but I don't think that's a good primary way to teach how to engage with polarizing content, and certainly not via avoidance of the types of statements bigstrat2003 laid out. I.e. there are very reasonable, particularly historical, examples of beliefs about potential threats to democracy which turned out to be true, so I don't inherently have a problem with that kind of discussion. I actually think calling that kind of statement the problem would drive more extremism.
At the same time, I do believe there are ways to share such statements while also reinforcing healthy ways to react at the same time. kryogen1c's example ending in "he should be assassinated" crosses the line from bigstrat2003's talk of apocalyptic claims to direct calls to violence about them - the latter of which I agree is bad teaching (but I'd still rather people be encouraged to openly talk about those kinds of statements too, rather than be directly pressured to internalize or echo chamber them).
This is why the first question posed about the statement from kryogen1c was "Who/what is defining assassination as a reasonable response to that threat". The follow-on questions were only added to help highlight that there is no reasonable answer to that question, because it's the call to assassination which is inherently problematic, not the claim that someone is a threat to democracy. The latter (talking about perceived threats) is good, if not best, to talk about directly and openly. It's the former (calling for assassination over it) which is inherently incompatible with a stable society.
I agree with this, and, as a result, I don't believe there is any possible approach which results in 0 people assassinating political figures for what other people say. I think the same conclusion can even be reached if people were supposed to be expected to be perfectly rational beings.
I do believe education on how to effectively engage with an idea that feels threatening is better equipped to handle this apparent fact than bigstrat2003's approach of teaching people not to say certain beliefs because they'd be worth killing over. That doesn't mean it results in a perfect world, though. Some may perhaps even agree with both approaches at the same time, but I think teaching people to silence certain beliefs, for fear they'd be worth assassinating over if believed true, ends up driving the very problem it sets out against. Especially once you add in malicious actors (internal or external).
For fun over the last few days, I've built a compressor/decompressor that uses the logits from an LLM for each token in the input, then takes the ranks and exponential-Golomb encodes them. Then you work in reverse to regenerate the original.
It took me ages to get the prediction for the second token after "hello" to match the same as the prediction for the second token when running the model on the string "hello world", despite the fact that I was using a causal model. I tried all kinds of things before discovering that `quantized: false` was the important setting.
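For anyone curious how the rank round trip works, here's a toy sketch with a deterministic stand-in for the model (names like `fake_logits` are mine, not from the actual implementation; the real version would run GPT-2 to produce the logits at each step):

```python
import numpy as np

def fake_logits(prefix):
    # Deterministic stand-in for an LLM forward pass: any function of the
    # prefix works, as long as encoder and decoder call the same one.
    rng = np.random.default_rng(len(prefix) * 1000 + sum(prefix))
    return rng.normal(size=16)  # pretend vocabulary of 16 tokens

def token_to_rank(prefix, token):
    # Rank of the actual token among all tokens, sorted by descending logit.
    order = np.argsort(-fake_logits(prefix))
    return int(np.where(order == token)[0][0])

def rank_to_token(prefix, rank):
    # Inverse mapping: pick the token at that rank in the same ordering.
    order = np.argsort(-fake_logits(prefix))
    return int(order[rank])

tokens = [3, 7, 1, 7, 0]

# Encode: one "model run" per position, storing the rank of the true token.
ranks, prefix = [], []
for t in tokens:
    ranks.append(token_to_rank(prefix, t))
    prefix.append(t)

# Decode: regenerate the tokens one at a time from the ranks.
decoded, prefix = [], []
for r in ranks:
    t = rank_to_token(prefix, r)
    decoded.append(t)
    prefix.append(t)

assert decoded == tokens
```

If the model is any good, most ranks are small, which is what makes the entropy coding step pay off.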
What's the Weissman score? Or more seriously :) did it perform well? Sounds like it should. If more and more text is AI slop, it should do well.
I don't fully understand what you said, but I guess higher-probability logits are encoded with fewer bits. If your text is the LLM output, then you may need a bit or two per token?
I used exponential-Golomb coding, so the rank 0 logit is encoded with a single bit, ranks 1 and 2 are encoded with three bits, ranks 3-6 are encoded with five bits, etc.
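That mapping is order-0 exponential-Golomb. A minimal sketch of the codec (my own illustration, not the poster's code):

```python
def eg_encode(v):
    # Order-0 exponential-Golomb: write v+1 in binary, preceded by
    # (bit_length - 1) zeros. Rank 0 -> "1" (1 bit), ranks 1-2 -> 3 bits,
    # ranks 3-6 -> 5 bits, and so on.
    bits = bin(v + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def eg_decode(stream, pos=0):
    # Count leading zeros, then read that many more bits after the first 1.
    zeros = 0
    while stream[pos] == "0":
        zeros += 1
        pos += 1
    value = int(stream[pos:pos + zeros + 1], 2) - 1
    return value, pos + zeros + 1

assert eg_encode(0) == "1"
assert eg_encode(1) == "010"
assert eg_encode(3) == "00100"

# The codes are self-delimiting, so a concatenated rank stream decodes back:
stream = "".join(eg_encode(v) for v in [0, 5, 2, 0, 13])
out, pos = [], 0
while pos < len(stream):
    v, pos = eg_decode(stream, pos)
    out.append(v)
assert out == [0, 5, 2, 0, 13]
```

A real implementation would pack the bits into bytes rather than use strings, but the structure is the same.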
In terms of performance, I've not done any serious testing, but e.g. the Wikipedia article on volcanoes compresses to about 20% using GPT-2. I've seen other strings compress even further.
The big issue is that while encoding is not unreasonable, decoding any significant amount of data is incredibly slow, since I'm doing a model run for every token in the output. It's bad enough that the scheme is probably unworkable as it is. I'm thinking about changing my code so that it streams out the tokens as it decodes them, so you're not just left there waiting for ages.
I don't know about Golomb coding, but with arithmetic coding (AC) you can do stream decoding, if I remember correctly.
I supervised a student's project whose goal was exactly that: implement compression with LLMs using AC.
Since AC is optimal, if your LLM has an average cross entropy x on some dataset, you can expect that the compression will compress data using x nats per token on average!
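The bookkeeping behind that claim can be checked with made-up numbers: an ideal arithmetic coder spends about -log p bits on a symbol of probability p, so the average cost per token is the model's cross entropy, and converting nats to bits is just dividing by ln 2:

```python
import math

# Hypothetical probabilities the model assigned to the tokens that
# actually occurred (powers of two, so the answer is exact).
probs = [0.5, 0.25, 0.125, 0.125]

cross_entropy_nats = -sum(math.log(p) for p in probs) / len(probs)
cross_entropy_bits = cross_entropy_nats / math.log(2)

# With an ideal arithmetic coder, these tokens cost about this many
# bits per token on average: (1 + 2 + 3 + 3) / 4 = 2.25 here.
print(round(cross_entropy_bits, 3))  # 2.25
```

In practice AC adds a small constant overhead over the entropy, but amortized over a long stream it is negligible.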
Arithmetic coding looks like an extremely interesting approach, given that you can use the model at each step to give you the probabilities of each token.
I've become convinced that the real problem is probably impossible to get away from.
Ultimately we want a nice set of reusable UI components that can be used in many different situations. We also want a nice set of business logic components that don't have any kind of coupling with the way they get represented.
In order to make this work, we're going to need some code to connect the two, whether it's a 'controller' or a 'view model' or some other piece of code or architecture.
However we choose to achieve this task, it's going to feel ugly. It's necessarily the interface between two entirely different worlds, and it's going to have to delve into the specifics of the different sides. It's not going to be the nice clean reusable code that developers like to write. It's going to be ugly plumbing, coupled code that we are trying to sweep into one part of the codebase so that we can keep the rest of it beautiful.
These seemingly inescapable tradeoffs are almost always actually quite escapable if you look at it from a different perspective. You have to stop thinking about pages and button-clicks, and you have to stop using frameworks that try to do everything, box you in, and force you to architect your state-flow logic based on your visual hierarchy. This is the biggest problem with almost all UI frameworks: all your logic has to be partitioned along the lines that are set up by how your screen looks, or you're swimming upstream to prevent it. Instead, have your domain models declare how they work and interact using intermediate services, and consume those services to generate the UI as a consequence of those declarations. It's very hard, and I don't have all the answers yet, but I've tasted enough to know that it genuinely avoids this otherwise seemingly inescapable tradeoff. I'm not planning to ever build another UI (above some complexity) differently again.
Speaking of a different perspective, you and GP are describing it with different viewpoints and I'm not sure you're fully aware it is different:
GP is describing the Model and View as two co-equal things with something in between that links them.
You're describing the Model more as a foundation that the View builds on top of (and this is also how I look at it).
I like the stacked view more because it more closely matches the flow of data, making it almost explicit that what's presented to the user is a projection/transformation of the stored data. Views can't really exist independent of that data, they rely on it, while models can exist independent of the view.
I'm not saying it's not simple to write, I'm saying it's ugly and contingent and you can't really avoid that. It's exactly this reason that has led to a proliferation of MV* patterns, including the one you describe.
But to try to explain myself more clearly: in the architecture you describe, who is it that implements TableData and TableCallback? Is it your beautiful, clean business logic classes that have no coupling to their representation, in which case that is weirdly coupled in an ugly way? Or is it some other class that acts as the bridge between the two worlds, in which case that's where your ugly code is living?
Ideally a bridge (e.g. a typeclass) with nice language support like Scala has. I'm not seeing what's so ugly. In math, you have an abstract interface for e.g. monoids (your interface, like a Table). Then you have e.g. complex numbers as a set (your business model). Then you can identify how C is a monoid via addition, and how it's also separately a monoid via multiplication. Same idea. There's nothing "ugly" about having to write down what you mean.
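The monoid analogy can be sketched without typeclass support by passing the "instance" alongside the data (all names here are illustrative, not from anyone's codebase): the interface knows nothing about complex numbers, the business type knows nothing about the interface, and the instance objects are the explicit bridge.

```python
from dataclasses import dataclass
from functools import reduce
from typing import Callable

# The abstract interface (the "Table" / monoid role): it knows nothing
# about any particular business type.
@dataclass
class Monoid:
    identity: object
    combine: Callable

# The business model is plain complex numbers, with no coupling to the
# interface. Two separate "instances" say how they form a monoid.
additive = Monoid(identity=0 + 0j, combine=lambda a, b: a + b)
multiplicative = Monoid(identity=1 + 0j, combine=lambda a, b: a * b)

def fold(monoid, values):
    # Generic code written against the interface alone.
    return reduce(monoid.combine, values, monoid.identity)

zs = [1 + 1j, 2 + 0j, 0 + 3j]
print(fold(additive, zs))        # (3+4j)
print(fold(multiplicative, zs))  # (-6+6j)
```

The instance definitions are exactly the "writing down what you mean" step: short, explicit, and living in one obvious place.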
I've been using a boox note air for many years and you definitely can zoom on that.
Android is great for this use case because it lets me sync notes with Syncthing, use sheet music apps, use both Kindle and Kobo plus a Calibre library, offline Wikipedia, and my own tools. As far as I'm concerned, if you try to use it as a generic Android tablet you're doing it wrong, but Android is a massive step above what everyone else is offering (i.e. none of that).
I know this probably doesn’t exactly fit your use case, but I’ve actually been able to do this with a Kindle Touch (yes, from 2011)! It was a super serene experience to have your books synced over into KOReader.
> if you try to use it as a generic android tablet you're doing it wrong
I agree, but I felt that’s what the system invited me to do (may just be my tinkerer genes though). Update notifications, etc, web browsing, hoops to jump through to share files ...
How's your note sync workflow? Can you reasonably easily and quickly access your handwritten notes from a laptop? Last I checked there was some manual export step to jump through.
> For an airplane wing (airfoil), the top surface is curved and the bottom is flatter. When the wing moves forward:
> * Air over the top has to travel farther in the same amount of time -> it moves faster -> pressure on the top decreases.
> * Air underneath moves slower -> pressure underneath is higher
> * The pressure difference creates an upward force - lift
Isn't that explanation of why wings work completely wrong? There's nothing that forces the air to cover the top distance in the same time that it covers the bottom distance, and in fact it doesn't. https://www.cam.ac.uk/research/news/how-wings-really-work
Very strange to use a mistake as your first demo, especially while talking about how it's PhD level.
It appears to me like the linked explanation is also subtly wrong, in a different way:
“This is why a flat surface like a sail is able to cause lift – here the distance on each side is the same but it is slightly curved when it is rigged and so it acts as an aerofoil. In other words, it’s the curvature that creates lift, not the distance.”
But like you say flat plates can generate lift at positive AoA, no curvature (camber) required. Can you confirm this is correct? Kinda going crazy because I'd very much expect a Cambridge aerodynamicist to get this 100% right.
Yes, it is wrong. The curvature of the sail lowers the leading angle of attack which promotes attachment, i.e. reduces the risk of stalling at high angles of attack, but it is not responsible for lift in the sense you mean.
It could be argued that preventing a stall makes it responsible for lift in an AoA regime where the wing would otherwise be stalled -- hence "responsible for lift" -- but that would be far fetched.
More likely the author wanted to give an intuition for the curvature of the airflow. This is produced not by the shape of the airfoil but by the induced circulation around the airfoil, which makes air travel faster over the upper surface of the airfoil, creating the pressure differential.
Sorry, I know nothing about this topic, but this is how it was explained to me every time it's come up throughout my life. Could you explain a bit more?
I've always been under the impression that flat-plate airfoils can't generate lift without a positive angle of attack, where lift is generated through the separate mechanism of the air pushing against an angled plane. But a modern airfoil can, because of this effect.
And that if you flip them upside down, a flat plate is more efficient and requires less angle-of-attack than the standard airfoil shape because now the lift advantage is working to generate a downforce.
I just tried to search Google, but I'm finding all sorts of conflicting answers, with only a vague consensus that the AI-provided answer above is, in fact, correct. The shape of the wing causes pressure differences that generate lift in conjunction with multiple other effects that also generate lift by pushing or redirecting air downward.
The core part, which is incorrect and misleading, is 'the air needs to take an equal time to transit the top and bottom of the wing'. From that you can derive the correct statement that 'the air traveling across the top of the wing is moving faster', but you've not correctly explained why that is the case. And in fact, it's completely wrong that the transit time is equal: the videos from the page someone linked above show that usually the air over the top takes less time than the air underneath, and it's probably interesting to work out why that's the case!
(Also, once you've got the 'moving faster' part, you can then tell a mostly correct story through Bernoulli's principle to get to lower pressure on the top and thus lift. But you're also going to confuse people if you say this is the one true story and that any other explanation, like one that talks about momentum, or one where the curvature of the airflow causes the pressure gradient, is wrong, because these are all simply multiple paths through the same underlying set of interactions, which are not so easy to fundamentally separate into cause and effect. But 'equal transit time' appears in none of the correct paths, neither as an axiom nor as a necessary result, and there's basically no reason to use it in an explanation, because there are simpler correct stories if you want to dumb it down for people.)
>Air over the top has to travel farther in the same amount of time
There is no requirement for air to travel anywhere, let alone in any amount of time. So this part of the AI's response is completely wrong. "Same amount of time" as what? The air going underneath the wing? With an angle of attack, the air under the wing is being deflected down, not magically meeting up with the air above the wing.
But this just sounds like a simplified layman explanation, the same way most of the ways we talk about electricity are completely wrong in terms of how electricity actually works.
If you look at airflow over an asymmetric airfoil [1], the air does move faster over the top. Sure, it doesn't arrive "at the same time" (it goes much faster than that) or fully describe why these effects are happening, but that's why it's a simplification for lay people. Wikipedia says [2]:
> Although the two simple Bernoulli-based explanations above are incorrect, there is nothing incorrect about Bernoulli's principle or the fact that the air goes faster on the top of the wing, and Bernoulli's principle can be used correctly as part of a more complicated explanation of lift.
But from what I can tell, the root of the answer is right. The shape of a wing causes pressure zones to form above and below the wing, generating extra lift (on top of deflection). From NASA's page [3]:
> {The upper flow is faster and from Bernoulli's equation the pressure is lower. The difference in pressure across the airfoil produces the lift.} As we have seen in Experiment #1, this part of the theory is correct. In fact, this theory is very appealing because many parts of the theory are correct.
That isn't to defend the AI response, it should know better given how many resources there are on this answer being misleading.
And so I don't leave without a satisfying conclusion, the better layman explanation should be (paraphrasing from the Smithsonian page [4]):
> The shape of the wing pushes air up, creating a leading edge with narrow flow. This small high-pressure region is followed by the decline to the wider-flow trailing edge, which creates a low-pressure region that sucks the air at the leading edge backward. In the process, the air above the wing rapidly accelerates, and the air flowing over the top of the wing as a whole forms a lower-pressure region than the air below. Thus, a lift advantage even when horizontal.
Someone please correct that if I've said something wrong.
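To put rough numbers on the Bernoulli bookkeeping in this subthread (all values invented for illustration, not measured data), the pressure difference from a speed difference and the resulting force look like this:

```python
rho = 1.225       # air density at sea level, kg/m^3
v_top = 75.0      # airflow speed over the wing, m/s (made up)
v_bottom = 65.0   # airflow speed under the wing, m/s (made up)
area = 30.0       # wing planform area, m^2 (made up)

# Bernoulli along a streamline: p + 0.5*rho*v^2 is constant, so faster
# flow over the top means lower pressure on top.
delta_p = 0.5 * rho * (v_top**2 - v_bottom**2)   # Pa, higher pressure below
lift = delta_p * area                            # N, very crude estimate

print(delta_p, lift)
```

This treats the pressure difference as uniform over the whole wing, which a real analysis would never do (you integrate pressure over the surface), but it shows how a modest speed difference yields tonnes of force.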
Shame the person supposedly with a PhD on this didn't explain it at all.
The bottom line is that a curved airfoil will not generate any more lift than a non-curved airfoil (pre-stall) that has its trailing edge at the same angle.
The function of the curvature is to improve the wing's ability to avoid stall at a high angle of attack.
According to NASA, the Air and Space Museum, and Wikipedia: you are wrong. Nor does what you're saying make any sense to anyone who has seen an airplane fly straight.
Symmetric airfoils do not generate lift without a positive angle of attack. Cambered airfoils do, precisely because the camber itself creates lift via Bernoulli.
I stated "has its trailing edge at the same angle", not "is at the same angle of attack". Angle of attack is defined by the angle of the chord line, not the angle of the trailing edge. Cambered airfoils have their trailing edges at higher angles than the angle of attack.
Again, not an expert, but how does that square with the existence of reflex-cambered airfoils? Positive lift at zero AoA with a negative trailing-edge angle.
And that seems to directly conflict with the models shown by the resources above? They state that cambered wings do have increased airspeed above the wing, which generates lift via pressure differential (thus why the myth is so sticky).
Reflex cambered airfoils generate lift because most of the wing is still pointed downwards.
The crucial thing you need to explain is this: why doesn't extending leading edge droop flaps increase the lift at a pre-stall angle of attack? (See Figure 13 from this NASA study for example: https://ntrs.nasa.gov/citations/19800004771)
I'm quite sure the "air on the top has to travel faster to meet the air at the bottom" claim is false. Why would they have to meet at the same time? What would cause the air on top to accelerate?
I did a little more research and explain it above. The fundamentals are actually right.
The leading edge pressurizes the air by forcing it up, then the trailing edge opens back up, creating a low-pressure zone that sucks the air at the leading edge backward. As a whole, the air atop the wing accelerates to be much faster than the air below, creating a pressure differential above and below the wing and causing lift.
The AI is still wrong on the actual mechanics at play, of course, but I don't see how this is significantly worse than the way we simplify electricity to lay people. The core "air moving faster on the top makes low pressure" is right.
That explanation doesn’t work if the wing is completely flat (with nothing to force the air up), which if you ever made a paper airplane flies just fine. All these explanations miss a very significant thing: air is a fluid where every molecule collides with _billions_ of other molecules every second, and the wing distorts the airflow all around it, with significant effects up to a wingspan away in all directions.
It's both lower pressure above the wing (~20% of lift) and the reaction force from pushing air down (give or take the remaining 80% of lift). The main wrong part is the claim that the air travels faster because it has to travel farther, causing it to accelerate, causing the lower pressure; that's doubly wrong. It's a weird old misunderstanding that gets repeated over and over because it's a neat connection to the Bernoulli principle when it's being explained to children.
A classic example of how LLMs mislead people. They don't know right from wrong; they know what they have been trained on, even with reasoning capabilities.
That's one of my biggest hang ups on the LLMs to AGI hype pipeline, no matter how much training and tweaking we throw at them they still don't seem to be able to not fall back to repeating common misconceptions found in their training data. If they're supposed to be PhD level collaborators I would expect better from them.
Not to say they can't be useful tools, but they fall into the same basic traps and issues despite our continued attempts to improve them.
How can you create a pocket of 'lower pressure' without deflecting some of the air away? At the end of the day, if the aircraft is moving up, it needs to be throwing something down to counteract gravity.
Exactly. The speed phenomenon (airflow speeding up due to getting sucked into the lower pressure space above the wing) is certainly there, but it's happening because the wing is shaped to deflect air downwards.
The point isn't about how the low pressure is created just that the low pressure is a separate source of lift from the air being pushed down by the bottom of the wing.
No, what still matters (when explaining why the wing is shaped the way it is) is how the low pressure is created. In this case it's being pulled down by the top of the wing.
Angle of attack is a big part but I think the other thing going on is air “sticks” to the surface of the top of the wing and gets directed downward as it comes off the wing. It also creates a gap as the wing curves down leaving behind lower pressure from that.
The "wrong" answers all have a bit of truth to them, but aren't the whole picture. As with many complex mathematical models, it is difficult to convert the math into English and maintain precisely the correct meaning.
> The "wrong" answers all have a bit of truth to them, but aren't the whole picture. As with many complex mathematical models, it is difficult to convert the math into English and maintain precisely the correct meaning.
Exactly. The comments in this subthread are turning imprecision in language into all-or-nothing judgments of correctness. (Meanwhile, 80% of the comments advance their own incorrect/imprecise explanations of the same thing...)
It's really not. The wing is angled so it pushes the air down. Pushing air down means you are pushing the plane up. A wing can literally be a flat sheet at an angle and it would still fly.
It gets complex if you want to fully model things and make it fly as efficiently as possible, but that isn't really in the scope of the question.
Planes go up because they push air down. Simple as that.
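The "flat sheet at an angle" claim can even be put in numbers using classical thin-airfoil theory, where the lift coefficient of a flat plate is about 2π times the angle of attack (valid for small angles with attached flow; the numbers below are illustrative):

```python
import math

alpha = math.radians(5)   # angle of attack, 5 degrees (made up)
rho = 1.225               # air density at sea level, kg/m^3
v = 50.0                  # airspeed, m/s (made up)
area = 10.0               # plate planform area, m^2 (made up)

# Thin-airfoil theory: C_L ~= 2*pi*alpha for a flat plate at small alpha.
cl = 2 * math.pi * alpha

# Standard lift equation: L = 0.5 * rho * v^2 * S * C_L.
lift = 0.5 * rho * v**2 * area * cl

print(cl, lift)  # a flat plate at a small positive AoA generates real lift
```

A real flat plate stalls at a much lower angle than a cambered wing, which is part of why actual wings aren't flat, but the basic lift is there.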
It's both that simple and not. Because it's also true that the wing's shape creates a pressure differential and that's what produce lift. And the pressure differential causes the momentum transfer to the wing, the opposing force to the wing's lift creates the momentum transfer, and pressure difference also causes the change in speed and vice-versa. You can create many correct (and many more incorrect) straightforward stories about the path to lift but in reality cause and effect are not so straightforward and I think it's misleading to go "well this story is the one true simple story".
Sure but it creates a pressure differential by pushing the air down (in most wings). Pressure differentials are an unnecessarily detailed description of what is going on that just confuses people.
You wouldn't explain how swimming works with pressure differentials. You'd just say "you push water backwards and that makes you go forwards". If you start talking about pressure differentials... maybe you're technically correct, but it's a confusing and unnecessarily complex explanation that doesn't give the correct intuitive idea of what is happening.
Sure. If you're going for a basic 'how does it work', then 'pushing air down' is a good starting point, but you'll really struggle with follow-up questions like 'then why are they that shape?' unless you're willing to go into a bit more detail.
How can you create a 'pressure differential' without deflecting some of the air away? At the end of the day, if the aircraft is moving up, it needs to be throwing something down to counteract gravity. If there is some pressure differential that you can observe, that's nice, but you can't get away from momentum conservation.
The pressure differential is created by the leading edge creating a narrow flow region, which opens to a wider flow region at the trailing edge. This pulls the air at the leading edge across the top of the wing, making it much faster than the air below the wing. This, in turn, creates a low pressure zone.
Air molecules travel in all directions, not just down, so with a pressure differential the air molecules below the wing are applying a significant force upward, no longer balanced by the equal pressure usually on the top of the wing. Thus, lift through buoyancy. Your question is now about the same as "why does wood float in water?"
The "throwing something down" here comes from the air molecules below the wing hitting the wing upward, then bouncing down.
All the energy to do this comes from the plane's forward momentum, consumed by drag and transformed by the complex fluid dynamics of the air.
Any non-zero angle of attack also pushes air down, of course. And the shape of the wing with the "stickiness" of the air means some more air can be thrown down by the shape of the wing's top edge.
You can't, but you also can't get away from a pressure differential. Those things are linked! That's my main point, arguing over which of these explanations is more correct is arguing over what exactly the shape of an object's silhouette is: it depends on what direction you're looking at it from.
That page is arguing against a straw man. Nobody is claiming that the full dynamics of a wing are exactly that of a flat sheet at an angle (with full flow separation etc).
The point is that a flat plate with full flow separation is the minimum necessary physics to explain lift. It would obviously make a terrible wing, and it doesn't explain everything about how real wings are optimised. That's not the point.
In any case, I only said the wing pushes the air down. I didn't say it only uses its bottom surface to push the air down.
Except it isn't "completely wrong". The article the OP links to says it explicitly:
> “What actually causes lift is introducing a shape into the airflow, which curves the streamlines and introduces pressure changes – lower pressure on the upper surface and higher pressure on the lower surface,” clarified Babinsky, from the Department of Engineering. “This is why a flat surface like a sail is able to cause lift – here the distance on each side is the same but it is slightly curved when it is rigged and so it acts as an aerofoil. In other words, it’s the curvature that creates lift, not the distance.”
The meta-point that "it's the curvature that creates the lift, not the distance" is incredibly subtle for a lay audience. So it may be completely wrong for you, but not for 99.9% of the population. The pressure differential is important, and the curvature does create lift, although not via speed differential.
I am far from an AI hypebeast, but this subthread feels like people reaching for a criticism.
The wrongness isn't germane to most people, but it is a specific typology of how LLMs get technical things wrong that is critically important to progressing them. It gets subtle things wrong by being biased towards lay understandings that introduce vagueness, because greater precision isn't useful.
That doesn't matter for lay audiences and doesn't really matter at all until we try to use them for technical things.
The wrongness is germane to someone who is doing their physics homework (the example given here). It's actually difficult for me to imagine a situation where someone would ask ChatGPT 5 for information about this and it not be germane if ChatGPT 5 gave an incorrect explanation.
The predicate for that is that you know it is wrong, that the wrongness is visible and identifiable. With knowledge that is intuitive but incorrect, you multiply risk.
I would still say it's completely wrong, given that this explanation makes explicit predictions that are falsifiable, e.g., that airplanes could not fly upside down (they can!).
I think it's valid to say it's wrong even if it reaches the same conclusion.
If I lay out a chain of thought like
Top and bottom are different -> god doesn't like things being different and applies pressure to the bottom of the wing -> pressure underneath is higher than the top -> pressure difference creates lift
Then I think it's valid to say that's completely inaccurate, and that it just happens to share some of the beginning and end.
It's the "same amount of time" part that is blatantly wrong. Yes geometry has an effect but there is zero reason to believe leading edge particles, at the same time point, must rejoin at the trailing edge of a wing. This is a misconception at the level of "heavier objects fall faster." It is non-physical.
The video in the Cambridge link shows how the upper surface particles greatly overtake the lower surface flow. They do not rejoin, ever.
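One way to see that the equal-transit-time story fails quantitatively as well as physically: assume it is true, derive the speed difference from a typical path-length difference, and feed that through Bernoulli. A rough sketch with made-up but plausible numbers for a small Cessna-class aircraft (all figures are my assumptions, not from the thread):

```python
# Bernoulli lift *if* equal transit time were true: the top-surface air
# would be faster than the bottom air in proportion to the path lengths.
rho = 1.225            # sea-level air density, kg/m^3
v = 60.0               # cruise airspeed, m/s
path_ratio = 1.02      # assume the top surface is ~2% longer than the bottom
area = 16.0            # wing area, m^2
weight = 1000 * 9.81   # ~1000 kg aircraft, N

v_top = v * path_ratio                   # "same time, longer path"
dp = 0.5 * rho * (v_top**2 - v**2)       # Bernoulli pressure difference
lift = dp * area
print(f"predicted lift {lift:.0f} N vs. weight {weight:.0f} N")
```

The prediction comes out at well under a fifth of the weight, which is the standard quantitative debunking: even granting the (false) premise, the numbers don't close.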
Again, you're not wrong, it's just irrelevant for most audiences. The very fact that you have to say this:
> Yes geometry has an effect but there is zero reason to believe leading edge particles, at the same time point, must rejoin at the trailing edge of a wing.
...implicitly concedes the point that this is subtle. If you gave this answer in a PhD qualification exam in Physics, then sure, I think it's fair for someone to say you're wrong. If you gave the answer on a marketing page for a general-purpose chatbot? Meh.
(As an aside, this conversation is interesting to me primarily because it's a perfect example of how scientists go wrong in presenting their work to the world...meeting up with AI criticism on the other side.)
Saw you were a biologist. Would you be ok if I said, "Creationism got life started, but after that, we evolved via random mutations..."? The "equal transit time" is the same as a supernatural force compelling the physical world to act in a certain way. It does not exist.
right, the other is that if you remove every incorrect statement from the AI "explanation", the answer it would have given is "airplane wings generate lift because they are shaped to generate lift".
> right, the other is that if you remove every incorrect statement from the AI "explanation", the answer it would have given is "airplane wings generate lift because they are shaped to generate lift".
...only if you omit the parts where it talks about how pressure differentials, caused by airspeed differences, create lift?
Both of these points are true. You have to be motivated to ignore them.
But using pressure differentials is also sort of tautological. Lift IS the integral of the pressure on the surface, so saying that the pressure differentials cause lift is... true but unsatisfying. It's what makes the pressure difference appear that's truly interesting.
Funnily enough, as an undergraduate the first explanation for lift that you will receive uses Feynman's "dry water" (the Kutta condition for inviscid fluids). In my opinion, this explanation is also unsatisfying, as it's usually presented as a mere mathematical "convenience" imposed upon the flow to make it behave like real physics.
Some recent papers [1] are shedding light on generalizing the Kutta condition to non-sharp airfoils. In my opinion, the linked paper gives a way more mathematically and intuitively satisfying answer, but of course it requires some previous knowledge, and would be totally inappropriate as an answer by the AI.
Either way I feel that if the AI is a "pocket PhD" (or "pocket industry expert") it should at least give some pointers to the user on what to read next, using both classical and modern findings.
The Kutta condition is insufficient to describe lift in all regimes (e.g. when the trailing edge of the wing isn't that sharp), but fundamentally you do need to fall back to certain 2nd law / boundary condition rules to describe why an airfoil generates lift, as well as when it doesn't (e.g. stall).
There's nothing in the Navier-Stokes equations that forces an airfoil to generate lift - without boundary conditions the flowing air could theoretically wrap back around at the trailing edge, thus resulting in zero lift.
The fact that you have to invoke integrals and the Kutta condition to make your explanation is exactly what is wrong with it.
Is it correct? Yes. Is it intuitive to someone who doesn’t have a background in calculus, physics and fluid dynamics? No.
People here are arguing about a subpoint on a subpoint that would maybe get you a deduction on a first-year physics exam, and acting as if this completely invalidates the response.
How is the Kutta condition ("the fluid gets deflected downwards because the back of the wing is sharp and pointing downwards") less intuitive to someone without a physics background than wrongly invoking the Bernoulli principle?
I would say a wing with two sides of different lengths is more difficult to understand than one shape with two sides of opposite curvatures but the same length.
To me, it's weird to call it "PhD-level". That, to me, means being able to take in existing information on a certain very niche area and to "push the boundary". I might be wrong, but to date I've never seen any LLM invent "new science", which is what makes a PhD really a PhD. It also seems very confusing to me that many sources mention "stone age" and "PhD-level" in the same article. Which one is it?
People seem to overcomplicate what LLMs are capable of, but at their core they are just really good word parsers.
Most of the phd’s I know are studying things that I guarantee GPT-5 doesn’t know about… because they’re researching novel stuff.
Also, LLMs don’t have much consistency with how well they’re able to apply the knowledge that they supposedly have. Hence the “lots of almost correct code” stereotype that’s been going around.
I was using the fancy new Claude model yesterday to debug some fast-check tests (quickcheck-inspired typescript lib). Claude could absolutely not wrap its head around the shrinking behavior, which rendered it useless for debugging
It's an extremely famous example of a widespread misconception. I don't know anything about aeronautical engineering but I'm quite familiar with the "equal transit time fallacy."
Yeah, the explanation is just shallow enough to seem correct and deceive someone who doesn't really grasp the subject well.
No clue how they let it pass; even leaving aside the subpar diagram it created, it really didn't seem like something miles better than what previous models can already do.
It’s very common to see AI evangelists taking its output at face value, particularly when it’s about something that they are not an expert in. I thought we’d start seeing less of this as people get burned by it, but it seems that we’re actually just seeing more of it as LLMs get better at sounding correct. Their ability to sound correct continues to increase faster than their ability to be correct.
During the demo they quickly shuffled off of, the air flow lines completely broke. It was just a few dots moving left to right, changing the angle of the surface showed no visual difference in airflow.
They couldn't have found a more apt demonstration of what an LLM is and does if they tried.
An LLM doesn't know more than what's in the training data.
In Michael Crichton's The Great Train Robbery (published in 1975, about events that happened in 1855) the perpetrator, having been caught, explains to a baffled court that he was able to walk on top of a running train "because of the Bernoulli effect", that he misspells and completely misunderstands. I don't remember if this argument helps him get away with the crime? Maybe it does, I'm not sure.
> At this point, the prosecutor asked for further elucidation, which Pierce gave in garbled form. The summary of this portion of the trial, as reported in the Times, was garbled still further. The general idea was that Pierce--- by now almost revered in the press as a master criminal--- possessed some knowledge of a scientific principle that had aided him.
> An LLM doesn't know more than what's in the training data.
Post-training for an LLM isn't "data" anymore, it's also verifier programs, so it can in fact be more correct than the data. As long as search finds LLM weights that produce more verifiably correct answers.
I know that some specific parts of what's in my training data is false, even though it was in there often. I am not just the average-by-volume of everything I've read.
It's a good question, but there are things I figured out by myself, that weren't in my training data, some, even, where my training data said the exact opposite.
Yeah me too, so it's found in many authoritative places.
And I might be wrong, but my understanding is that it's not wrong per se, it's just wildly incomplete. Which is kind of the same as wrong. But I believe the airfoil design does indeed have the effect described, which does contribute to lift somewhat, right? Or am I just a victim of the misconception?
This honestly mirrors many of my interactions with credentialed professionals too. I am not claiming LLMs shouldn't be held to a higher standard, but we are already living in a society built on varying degrees of blind trust.
The majority of us are prone to believe whatever comes our way, and it takes painstaking science to debunk much of that. In spite of the debunking, many of us continue to believe whatever we wish, and now LLMs will average all of that and present it in a nice-sounding capsule.
> Isn't that explanation of why wings work completely wrong?
This is an LLM. "Wrong" is not a concept that applies, as it requires understanding. The explanation is quite /probable/, as evidenced by the fact that they thought to use it as an example…
I think the original commenter meant that the LLM can't be called wrong because the concept requires understanding. However, I think it would be fine to call the LLM's response incorrect.
It’s a common misconception; I doubt they know themselves, and GPT-5 doesn’t tell them otherwise because it’s the most common explanation in the training data.
Do you think a human response is much better? It would be foolish to blindly trust what comes out of the mouths of biological LLMs too -- regardless of credentials.
I’m incredibly confident that any professor of aerospace engineering would give a better response. Is it common for people with PhDs to fall for basic misconceptions in their field?
This seems like a reasonable standard to hold GPT-5 to given the way it’s being marketed. Nobody would care if OpenAI compared it to an enthusiastic high school student with a few hours to poke around Google and come up with an answer.
> I’m incredibly confident that any professor of aerospace engineering would give a better response.
Do you think there could be a depth vs. breadth difference? Perhaps that PhD aerospace engineer would know more in this one particular area but less across an array of areas of aerospace engineering.
I cannot give an answer for your question. I was mainly trying to point out that we humans are highly fallible too. I would imagine no one with a PhD in any modern field knows everything about their field nor are they immune to mistakes.
Was this misconception truly basic? I admittedly somewhat skimmed those parts of the debate because I am not knowledgeable enough to know who is right/wrong. It was clear that, if indeed it was a basic concept, there is quite some contention still.
> This seems like a reasonable standard to hold GPT-5 to given the way it’s being marketed.
All science books and papers (pre-LLMs) were written by people. They got us to the moon and brought us the plane and the computer and many other things.
Many other things like war, animal cruelty, child abuse, wealth disparity, etc.. Hell, we are speed-running the destruction of the environment of the one and only planet we have. Humans are quite clever, though I fear we might be even more arrogant.
Regardless, my claim was not to argue that LLMs are more capable than people. My point was that I think there is a bit of a selection bias going on. Perhaps conjecture on my part, but I am inclined to believe that people are more keen to notice and make a big fuss over inaccuracies in LLMs, but are less likely to do so when humans are inaccurate.
Think about the everyday world we live in: how many human programmed bugs make it past reviews, tests, QA, and into production? How many doctors give the wrong diagnosis or make a mistake that harms or kills someone? How many lawyers give poor legal advice to clients?
Fallible humans expecting infallible results from their fallible creations is quite the expectation.
> Fallible humans expecting infallible results from their fallible creations is quite the expectation.
We built tools to accomplish things we cannot do well or at all. So we do expect quite a lot from them, even though we know they're not perfect. We have writings and books to help our memory and knowledge transfer. We have cars and planes to transport us faster than legs ever could... Any apparatus that doesn't help us do something better is aptly called a toy. A toy car can be faster than any human, but it's still a toy.
It's a particular type of mistake that is really interesting and telling. It is a misconception, a common socially disseminated simplification. In students, these don't come from a lack of knowledge but rather from places where knowledge is structured incorrectly, often because the phenomena are difficult to observe or mislead when observed. Another example is heat and temperature. Heat is not temperature, but it is easy to observe them always being the same in your day-to-day life, and so you bring that belief into a college thermodynamics course where you are learning for the first time that heat and temperature are different. It is a commonsense observation of the world that is only incorrect in technical circles.
These are places where common lay discussions use language in ways that are wrong, or make simplifications that are reasonable but technically incorrect. They are especially common when something is so 'obvious' that experts don't explain it, so the simplified account becomes the most frequent version of the concept being explained.
These, in my testing, show up a lot in LLMs: technical things are wrong when the language of the most common explanations simplifies or obfuscates the precise truth. Often it pretty much matches the level of knowledge of a college freshman/sophomore or slightly below, which is roughly the level at which more technical topics are discussed on the internet.
>In fact, theory predicts – and experiments confirm – that the air traverses the top surface of a body experiencing lift in a shorter time than it traverses the bottom surface; the explanation based on equal transit time is false.
So the real effect is even stronger than equal transit time would predict: the air over the top arrives early.
I've seen the GPT5 explanation in GCSE level textbooks but I thought it was supposed to be PhD level;)
It's not fully wrong, but it's a typical example of how simplified scientific explanations have spread everywhere without personal verification by each person involved in the game of Chinese whispers.
As a complete aside I’ve always hated that explanation where air moves up and over a bump, the lines get closer together and then the explanation is the pressure lowers at that point. Also the idea that the lines of air look the same before and after and yet somehow the wing should have moved up.
You're right - this is the "equal transit time" fallacy; lift is primarily generated by the wing deflecting air downward (Newton's Third Law) and the pressure distribution resulting from airflow curvature around the wing.
It's wrong, and it's a theory that you can still find on the internet and among experienced amateur pilots too! I went to a little aviation school and they taught exactly that.
Oh my God, they were right, ChatGPT 5 really is like talking to a bunch of PhDs. You let it write an answer and THEN check the comments on Hacker News.
Truly innovative.
Your link literally says pressure differential is the reason, and that curvature matters:
> “What actually causes lift is introducing a shape into the airflow, which curves the streamlines and introduces pressure changes – lower pressure on the upper surface and higher pressure on the lower surface,” clarified Babinsky, from the Department of Engineering. “This is why a flat surface like a sail is able to cause lift – here the distance on each side is the same but it is slightly curved when it is rigged and so it acts as an aerofoil. In other words, it’s the curvature that creates lift, not the distance.”
So I'd characterize this answer as "correct, but incomplete" or "correct, but simplified". It's a case where a PhD in fluid dynamics might state the explanation one way to an expert audience, but another way to a room full of children.
Pressure differential is absolutely one of the main components of lift (although I believe conservation of momentum is another: the Coanda effect changes the direction of the airflows, and there's 2nd law stuff happening on the bottom edge too), but the idea that the pressure differential is caused by the fact that "air over the top has to travel farther in the same amount of time" because the airfoil is curved is completely incorrect, as the video in my link shows.
It's "completely incorrect" only if you're being pedantic. It's "partially correct" if you're talking casually to a group of regular people. It's "good enough" if you're talking to a classroom of children. Audience matters.
The hilarious thing about this subthread is that it's already getting filled with hyper-technical but wrong alternative explanations by people eager to show that they know more than the robot.
"air over the top has to travel farther in the same amount of time" is just wrong, it doesn't have to, and in fact it doesn't.
It's called the "equal transit-time fallacy" if you want to look it up, or follow the link I provided in my comment, or perhaps the NASA link someone else offered.
I'm not saying that particular point is wrong. I'm saying that for most people, it doesn't matter, and the reason the "fallacy" persists is because it's a good enough explanation for the layman that is easy to conceptualize.
Pretty much any scientific question is fractal like this: there's a superficial explanation, then one below that, and so on. None are "completely incorrect", but the more detailed ones are better.
The real question is: if you prompt the bot for the better, deeper explanation, what does it do?
So I worry that you think that the equal transit time thing is true, but is just one effect among others. This is not the case. There are a number of different effects, including Bernoulli and Coanda and Newton's third law, that all contribute to lift, but none of the things that actually happen have anything to do with equal transit time.
The equal transit time is not a partially correct explanation, it's something that doesn't happen. It's not a superficial explanation, it's a wrong explanation. It's not even a good lie-to-children, as it doesn't help predict or understand any part of the system at any level. It instead teaches magical thinking.
As to whether it matters? If I am told that I can ask my question to a system and it will respond like a team of PhDs, that it is useful to help someone with their homework and physical understanding, but it gives me instead information that is incorrect and misleading, I would say the system is not working as it is intended to.
Even if I accept that "audience matters" as you say, the suggested audience is helping someone with their physics homework. This would not be a suitable explanation for someone doing physics homework.
> So I worry that you think that the equal transit time thing is true,
Wow. Thanks for your worry, but it's not a problem. I do understand the difference, and yet it doesn't have anything to do with the argument I'm making, which is about presentation.
> It's not even a good lie-to-children, as it doesn't help predict or understand any part of the system at any level.
...which is irrelevant in the context. I get the meta-point that you're (sort of) making that you can't shut your brain off and just hope the bot spits out 100% pedantic explanations of scientific phenomena. That's true, but also...fine?
These things are spitting out probable text. If (as many have observed) this is a common enough explanation to be in textbooks, then I'm not particularly surprised if an LLM emits it as well. The real question is: what happens when you prompt it to go deeper?
You're missing that this isn't an issue of granularity or specificity; "equal time" is just wrong.
If this is "right enough" for you, I'm curious if you tell your bots to "go deeper" on every question you ask. And at what level you expect it to start telling you actual truths and not some oft-repeated lie.
This is an LLM advertised as functioning at a "doctorate" level in everything. I think it's reasonable to expect more than the high school classroom "good enough" explanation.
No, it's never good enough, because it's flat-out wrong. This statement:
> Air over the top has to travel farther in the same amount of time
is not true. The air on top does not travel farther in the same amount of time. The air slows down and travels a shorter distance in the same amount of time.
It's only "good enough for a classroom of children" in the same way that storks delivering babies is—i.e., if you're content to simply lie rather than bothering to tell the truth.
I'm not actually sure how horrifying this is. It sounds like it's just a better executive planner to achieve your goals. As long as they are still your goals, surely you'd want the best executive planner available. I would say it's the goals that are important, not the limited way in which I work out how to achieve them.
It would certainly be horrifying if I were slowly tricked into giving up my goals and values, but that doesn't seem to be what is happening in this story.
Perhaps if I were to put the earring on it would tell me it would be better for me to keep wearing it.
You surrender your self in exchange for your goals. With the right goals, that could be a worthy sacrifice. But of course it is a sacrifice.
Imagine doing a crossword while a voice whispers the correct letter to enter for each cell. You'd definitely finish it a lot faster and without making mistakes. Crossword answers are public knowledge, and people still work them out instead of looking them up. They don't just want to solve them; they want to solve them themselves. That's what is lost here.
This is associating the self with the thing that decides how best to achieve goals (the earring / the part of your brain that works out how to achieve a goal), while I'm saying that I think I would associate the self much more with the thing that decides what the goals are.
> they don't just want to solve them; they want to solve them theirselves. That's what is lost here.
I think in this story, the earring would not solve the crossword for you, if for some reason your goal was to solve the crossword yourself.
> I'm not actually sure how horrifying this is. It sounds like it's just a better executive planner to achieve your goals. As long as they are still your goals, surely you'd want the best executive planner available.
The goals you form depend on your values, and your values are formed by learning from trial and error by what you like and what you regret. If the earring removes all chance of regret, then you also remove all chance of learning and all possibility of forming values or meaningful goals. You effectively erase yourself, hence why the brains were atrophied.
Doing computation at the endpoints, when it can happen there, is massively more scalable. Even better, it's done by compute you usually aren't paying for if you're the company providing the service.
I saw an interview with the guy who made Photopea where he talked about how tiny his costs were because all compute was done in the user's browser. Running a SaaS in the cloud is expensive.
It's an underrated aspect of what we used to call "software".
And that's leaving aside questions of latency and data privacy.
Net can be solved with reasoning rather than mindless iteration. You start by locking in end points surrounded by other end points except for one free space. If you have a straight line that can connect two end points, then you lock it in the other orientation. If a line is locked next to a T pipe, the back of the T goes against the line. If a corner piece is next to a locked pipe, you know that the side opposite the incoming pipe is empty, so it could be the back of a T or the side of a line piece, etc.
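Every rule of that kind is an instance of one operation: enumerate a tile's four rotations and discard those that contradict the locked neighbours. A minimal sketch (the bitmask encoding and the function names are mine, not from the actual game):

```python
# Tiles as 4-bit connection masks: N=1, E=2, S=4, W=8.
N, E, S, W = 1, 2, 4, 8

def rot(mask):
    """Rotate a tile one quarter-turn clockwise (N->E, E->S, S->W, W->N)."""
    return ((mask << 1) | (mask >> 3)) & 0xF

def consistent_orientations(tile, constraints):
    """Rotations of `tile` compatible with what's locked around it.

    `constraints` maps a side bit to True (a locked neighbour connects
    on that side) or False (it doesn't); unknown sides are omitted.
    """
    opts, m = set(), tile
    for _ in range(4):
        if all(bool(m & side) == want for side, want in constraints.items()):
            opts.add(m)
        m = rot(m)
    return opts

# "The back of the T goes against the line": a T-piece whose south
# neighbour offers no connection has exactly one legal orientation left.
print(consistent_orientations(N | E | W, {S: False}))  # only N|E|W survives
```

When the set comes back with a single member you lock the tile, and repeating that sweep until nothing changes is the propagation loop this solving style describes.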
I only play Net (largest size or bigger, wrapping) using the locks; I disconnect the surrounding pipes from the center so nothing is lit up, and then start locking squares based on their surroundings. Some of them I can't even solve. I can see the answer, but my head can't contain the logic necessary to lock them down.
Yeah, that's what I meant. On the other hand, something like Towers has you trying different configurations because there's not always enough information to motivate the next step.
I haven't tried Towers, but I had thought that every game in his collection was such that guessing was never required. The logic/rules might not always be obvious, but supposedly they are there.
Ancillary Justice is told from the point of view of a character of a culture that doesn't draw the distinction in language. It is occasionally remarked on, but generally from the point of view of the main character being always a bit paranoid that they'll cause offense by not referring to characters of other cultures and languages correctly.