Please share this with someone who doesn't know the story yet. Ingenuity alone can't save our species. We also need the will to do good. We are living through a moment of deep cynicism about our ability to solve existential problems. Let this be a reminder of what we are capable of.
Cultivating optimism is the first step. Optimism is irrational; you can just choose to have it (thinking about good things that have happened helps, of course). Optimism is the precondition for doing good.
So what if there's low collective will at the moment? Do your part to grow the collective will to do good. Go volunteer for a good cause (food bank, community organizations, etc.), donate to good causes, or just be friendly to the people you see.
I mostly agree with what you said, but disagree on one point:
> Optimism is the precondition for doing good.
It is still possible to do good when things are bleak and there is no possible way out - just because doing good is the right thing[1]. Optimism helps a lot for morale, but is not a precondition.
1. e.g. the 2 people who were pictured comforting each other while trapped at the top of a burning wind turbine.
> the 2 people who were pictured comforting each other while trapped at the top of a burning wind turbine
Optimism doesn't necessarily mean hope. It can mean belief in an afterlife. An end to suffering. Or gratitude for having someone else there in a terrible moment.
I think OP is correct: you can't have good without optimism. Your point, which is also correct, is that you can do good without hope.
The philosophical definition just opens up bigger cans of worms that can't be adequately addressed in an HN thread, and have been debated for thousands of years: what is "good"? Perhaps we need a moral framework to answer that, but then, what are morals? "You can't have good without optimism" is a declaration that has to be contextualized, and is far from universal.
I suspect answers couched in terms of individualism will always sound inadequate to questions that are inherently collectivist, such as why people do things "for the greater good" detrimental to their own well-being.
I had a lot of optimism as a teenager in the 80s. And maybe even more during Obama's presidency. Then 2016, 2020, 2024-2026 hit, and I'm at like -89% for optimism.
That is an argument of the pessimists and enemies of the good.
Pessimism is clearly irrational: Look at the world we live in; look what humanity has achieved since the Enlightenment, and in the last century - freedom, peace, and prosperity have swept the world. Diseases are wiped out, we visit the moon and (robotically) other planets, the Internet, etc. etc. etc.
To be pessimistic about our ability to build a better world is bizarre.
Pessimism and optimism are philosophical perspectives (dispositions) and do not necessarily have anything to do with doing good or doing bad. Why do you think optimism only precipitates good things? Surely you can imagine a situation (or many) where thinking more positively about a situation than the data warrants leads to bad outcomes?
None of your examples above tie directly to an optimistic disposition. How could you possibly know the disposition of the thousands of humans involved in those endeavors? You are letting your personal disposition color your view of the world (as we all do) and mistaking this for some sort of absolute truth.
> So what if there's low collective will at the moment? Do your part to grow the collective will to do good. Go volunteer for a good cause (food bank, community organizations, etc.), donate to good causes, or just be friendly to the people you see.
The problem is that this way of thinking is just like the "CO2 footprint": it individualises responsibility, shifting it from where it belongs (the government) onto individual people. And let's be real, outside of the very last action item many people don't have the time and/or the money.
At some point we (as in: virtually all Western nations) have to acknowledge that our governments are utter dogshit and demand better. Optimism requires trust that what you work for won't get senselessly destroyed in the next election cycle.
Okay, but we all still live in democracies, and people are fairly obviously getting what they vote for a lot of the time.
Externalising that to "the government" is to pretend you had no say, or to collectively try and pretend everyone else is with you, which they observably are not.
Edit: and before anyone responds to me with a quip about money and corporations: money in politics buys advertising and campaigning. It doesn't buy votes directly, and when it does, that's corruption; what's done about that is still largely on you, the voter, setting your priorities at the ballot box.
> I am just increasingly pessimistic about our collective desire to do so.
It's not just a lack of desire (apathy). People who want to solve big, collective problems are increasingly up against groups who actively want to not solve the problems and/or to make them worse. COVID, for example, was so much worse than it had to be purely because people actively fought the efforts meant to contain it. Efforts to reverse or mitigate climate change are routinely and vigorously opposed.
For news about things that are going right, I suggest https://fixthenews.com/. You can get a free weekly email about progress in energy and the environment, national economies, health and medicine, crime, etc. (or pay for a longer weekly email).
When the only thing CEOs talk about for every new technology is how many people they are going to put out of work because of it, the collective desire for new technology and progress is understandably lessened.
That you have the mental capacity/structures/language to form the thought should indicate the trajectory you're caught up in. It's disappointing that everything isn't resolved during the blip that is you, but even a moderately long view provides evidence for optimism.
It will require the hard sacrifice of limiting individual wealth to less than a billion dollars per person. The trajectory of the present indicates we won't be doing that soon.
It is interesting. I wonder whether it's possible to get that rich and be kind; there are probably examples. I'm the kind-but-poor sort myself: even what money I have, I've given too much of it away. In which case I'm a dumbass for doing so, but yeah.
> Maybe I need to separate the art from the artist?
Yes. We die, but the consequences of our actions resonate indefinitely. Ideas make good idols; people do not. Better Родина-мать зовёт! (a statue in Volgograd, formerly Stalingrad, approximately "The Motherland [i.e. Russia] Calls") and Liberty, both of which are definitely statues about ideas, than the Lincoln Memorial, for example, or even arguably the "Statue of Unity", which is named for unity but in practice is explicitly a statue of a specific man, Sardar Patel.
His relationship with Epstein, and the alleged secret dosing of his wife with antibiotics to clear an STD he had given Melinda after contracting it from escorts.
I hadn't seen Bill's denial of the STD claim when I made my comment, and what went on there is murky according to the below: Bill denies it and Melinda expresses sadness. What actually happened?
"Oh he cheated on his wife so he's gonna cheat on the country"
If anything, the Halloween files are more of a preoccupation as they pertain to the foundation and its ability to keep its mission intact, or to the fact that, of course, it's very autocratic when one guy has all the money and everybody else is an employee.
In the US one can retire comfortably on $3 million without relying on Social Security. From the downvotes, it's crazy to me that people think a cap of roughly 300 ordinary people's retirements ($1 billion / $3 million ≈ 333) is unreasonable.
I really don't think people understand how little difference there is between having $1 billion and $10 billion or even $100 billion. It makes no difference whatsoever to have that much money; they can't enjoy it.
It's always been that way. People have wanted to do things and others have said "You want to do that? Before you do this?" and so on. The US moon landing was contemporaneous with Whitey On The Moon. There are people who constantly care about things and work on incremental improvements to them that slowly collectively yield an outcome. That's just the mechanism that works.
As an example, consider the Guinea Worm Eradication Program. In theory, sheer bloody-mindedness and mass effort could have yielded the bulk of the initial suppression. But the application of modern technology (and I include incentive system design in this category) brings the cost down enough for successful eradication.
Suppression of the disease is possible with old techniques: case maps, word-of-mouth reporting, logbooks. Now detection-to-containment is far faster because of digital technology. You can't just dump temephos on everything; you need to target the application.
The transmission of data specifically is a problem that most people discount the difficulty of. As an example that more people will be able to relate to, there was a delay in the October 2025 jobs report and it was finally released without an unemployment rate. Many people didn't get why it was hard.
One viral tweet (mirrored by others) went:
> Can't we just...
> (rubs temples)
> Can't we just divide the number of unemployed workers by the work force population? Isn't that the unemployment rate?
But you don't know what those two numbers are. You need machinery to get them, and that machinery has a lot of middle management. It cannot function without it.
Society today is a complex thing. To get insight into it you need a lot of infrastructure. The fact that we all have electric power, that roads across the country are reliable, that bridges are all up, that planes fly and trains run, is a marvel. It's a marvel enabled by all the bits that people work on, all the boring bits: yes, even procurement software. And yes, corporate law and bureaucracy. All of these things make this possible.
I think a very common thing in online forums is to look at a flowering tree and say "Oh, look at the flowers. They are so beautiful. Instead of such ugly bark and wood why don't we make more flowers?". Building the society that has the muscle to do this is part of making things like this happen.
Have you noticed you don't have Guinea Worms where you live?
It's trivial not to have this problem; the fact that a relatively large fraction of the world's population needed intervention to fix it is an indictment of our collective will.
You may have read, or at least heard about, John Green's book "Everything is Tuberculosis". Treating TB is, by comparison to Guinea Worm, really hard. When medics tell John that, all else being equal, nobody should die of TB because we could just fix it, they mean with a hospital full of doctors to diagnose and prescribe treatment, pharmaceutical companies to make the drugs, stuff that looks like technology to you.
To eradicate Guinea Worm Disease you need basic clean water. I'm not talking "Wait, does this tap water meet current national standards for UV treatment?" clean water, I'm talking like, "don't drink directly out of the village pond" clean water. That's really what it takes for this to just go away on its own. The interventions are because crazily in 2026 large numbers of humans do not have ready access to clean drinking water.
This is a Western-centric and specifically Americentric viewpoint. There are plenty in the East for example who are not cynical about their ability to solve existential problems and are instead plowing ahead on solving them, such as massive investment in non-petroleum-based energy sources like solar, wind, and nuclear.
"South Korea is second from bottom on our list in terms of the proportion of people saying their country “is heading in the right direction”, with only 15% stating so. A similar sentiment is also felt about the economy. Pessimism is usually the standard for South Korea; however, their economic indicator score has been particularly low in recent times, with just 8% believing the economy is “good”."
I'm not talking about how people feel about their life or country but about the concrete actions their governments are taking to improve their quality of life. For example, they all have high speed rail, something that is essentially impossible to build in the US, whether it be due to budget, regulations or sheer political will.
Florida's Brightline contradicts that, no matter how slow California's HSR project is going. Trust in greed if nothing else. The next one to go up will be LA to Las Vegas.
Having worked with both kinds, I have generally preferred three-dimensional human beings to cut-outs from a compliance training manual. Being fundamentally kind and collaborative is prerequisite, of course. But so is having a modicum of spite, misanthropy, pettiness, irony, and dark humor. An appreciation for the tragic sense of life. How do you get through the day if all you get from your coworkers are patriotic slogans?
> you might not understand what the problem is for working and middle class people quickly finding themselves surrounded by a sea of people with dramatically different cultures, values, and religions
Of course. That is why Trump received the highest voter support in counties with the lowest levels of immigration.
After governors in southern states sent migrants north, support for immigration evaporated, just in time for election season, causing swings of like +20 for Trump in staunchly blue states like NJ and IL. If that study were accurate, those stunts should have resulted in an increase in support for immigration.
I am interested in this topic, but this textbook is too daunting for me. What I'd love is a crash course on Bayesian methods for the working systems performance engineer. If you, dear reader, happen to be familiar with both domains: what would you include in such a course, and can you recommend any existing resources for self-study?
My go-to for teaching statistics is Statistical Rethinking. It's basically a course in how to actually think about modeling: what you're really doing is evaluating a hypothesis, a model may be consistent with a number of hypotheses, and figuring out which hypotheses any given model implies is the hard/fun part. This book teaches you that. The only drawback is that it's not free. (Although there are excellent lectures by the author available for free on YouTube; these are worth watching even if you don't get the book.)
I also recommend Gelman’s (one of the authors of the linked book) Regression and Other Stories as a more approachable text for this content.
Think Bayes and Bayesian Methods for Hackers are introductory books for a beginner coming from a programming background.
If you want something more from the ML world that heavily emphasizes the benefits of probabilistic (Bayesian) methods, I highly recommend Kevin Murphy’s Probabilistic Machine Learning. I have only read the first edition before he split it into two volumes and expanded it, but I’ve only heard good things about the new volumes too.
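To give a flavor of what the programmer-oriented treatments teach, here is a minimal sketch of Bayesian updating on a quantity a performance engineer might care about. The deployment names and counts are made up for illustration, and it only needs the Python standard library:

    import random

    # Hypothetical data: errors observed out of total requests for two deployments.
    obs = {
        "canary": {"errors": 9, "requests": 1000},
        "stable": {"errors": 4, "requests": 1200},
    }

    def posterior_samples(errors, requests, n=100_000, a0=1.0, b0=1.0):
        # With a Beta(a0, b0) prior and binomial data, the posterior over the
        # true error rate is Beta(a0 + errors, b0 + requests - errors);
        # conjugacy makes the update a one-liner.
        a = a0 + errors
        b = b0 + (requests - errors)
        return [random.betavariate(a, b) for _ in range(n)]

    canary = posterior_samples(**obs["canary"])
    stable = posterior_samples(**obs["stable"])

    # Posterior probability that the canary's true error rate is actually worse.
    p_worse = sum(c > s for c, s in zip(canary, stable)) / len(canary)
    print(f"P(canary error rate > stable error rate) ~= {p_worse:.2f}")

The habit these books drill in is the last two lines: instead of a p-value you get a direct probability statement about the question you actually care about, and the Beta/binomial pair can be swapped for latency or throughput models as needed.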
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.
That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.
> AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution
I would push back on this a little bit. While it has not helped us understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it became hard to deny the similarities to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.
The big lesson from AI development in the last 10 years for me has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics. Theories often made the mistake of giving human observers some kind of special importance, which was later discovered to be the reason those theories did not generalize.
> The big lesson from AI development in the last 10 years for me has been "I guess humans really aren't so special after all"
Instead, I would take the opposite view.
How wonderful is it that, with naturally evolved processes and neural structures, we have been able to create what we have? Van Gogh's paintings came out of the human brain. The Queens of the Skies, hundreds of tons of metal and composites flying across continents in the form of a Boeing 747 or an A380, were designed by the human brain. We went to space, have studied nature (and have conservation programs for organisms we have found to need help), took pictures of the Pillars of Creation that are so incredibly far away... all with such a "puny" structure a few cm in diameter? I think that's freaking amazing.
What initially drew me to David Hume was a quote from his discussions of miracles in "An Enquiry Concerning Human Understanding" (name of chapter is "Of Miracles").
That said, I began with "A Treatise of Human Nature" around the age of 17, translated to my native language (his works are not an easy read in English, IMO), due to my interest in both philosophy and psychology.
If you haven't read them yet, I would certainly recommend them. I would recommend the latter I mentioned even if you are not interested in psychology (but may be interested in epistemology, philosophy of mind, and/or ethics), as he gets into detail about his "impressions" vs "ideas".
Additionally, he is famously known for his "problem of induction" which you may already know.
You know how many old sci-fi settings pictured aliens as bipedal furry animals or lizards? Even to go from that to realistically-intelligent swarms of insects is already difficult.
(Of course, there’s plenty of sci-fi where conscious entities manifest themselves as abstract balls of pure energy or the like; except for some reason those balls still think in the same way we do, get assigned the same motivations, sometimes even speak our language, etc., which makes it, in a way, even less realistic than the walking and talking human-cat hybrid you’d see in Elder Scrolls.)
Whenever we ponder questions of intelligence and consciousness, the same pitfall awaits.
Since we don’t have an objective definition of consciousness or intelligence (and in all likelihood we can’t have one, because any formal attempt at such wouldn’t get very far due to being attempted by the same thing that’s being defined), the only one that makes sense is, in crude language, “something like what we are”. There’s a vague feeling that it has to do with free will, self-awareness, etc.; however, all of it is also influenced by the nature of us being all parts of some big figurative anthill—assuming your sense of self only arises as you model yourself against the other (starting with your parents/caretakers and on), a standalone human cannot be self-aware in the way we are if it evolved in an emptiness without others—i.e., it would not possess human intelligence; supported by our natural-scientific observations rejecting the possibility of a being of this shape and form ever evolving in the first place.
In other words, the more different some kind of intelligence is from ours, the less it would look like intelligence to us—which makes the search for alien intelligence in space somewhat tragically futile (if it exists, we wouldn’t recognize it unless it just happens to be like us), but opens up exciting opportunities for finding alien but not-too-alien intelligence right on this planet (almost Douglas Adams style, minus dolphins speaking English).
There’s an extra trick when it comes to LLMs. In case of alien life, the possibility of a radically different kind of consciousness producing output that closely mimics our own is almost impossible (if our prior assumption is correct, then for all intents and purposes truly alien, non-meatbag-scale kind of intelligence might not be able to recognize ours in the first place, just like we wouldn’t recognize alien intelligence). However, the LLMs are designed to mimic the most social aspect of our behavior, our communication aimed at fellow humans; so when an LLM produces sufficiently human-like output—even if it has a very different kind of consciousness[0] or no consciousness at all (more likely, though as we concluded above we can’t distinguish between the two cases anyway)—our minds are primed to see it as a manifestation of [which would be human-like] intelligence, even if there’s nothing that would suggest such judging by the way it’s created (which is radically different from the way we’ve been creating intelligent life so far, wink-wink), by the substrate it runs on, if not by the way it actually works (which per our conclusion above we might never be able to conclusively determine about our own minds, without resorting to unfalsifiable philosophical assumptions for at least some aspects of it).
So yes, I’d say humans are special, if nothing else then because by the only usable (if somewhat circular) definition of what we are there’s absolutely nothing like us around, and in all likelihood can never be. (That’s not to say that something not like us isn’t special in its own way—I mean, think of the dolphins!—but given we, due to not being it, would not be able to properly understand it, it just never hits the same.)
[0] Which if true would be completely asocial (given it neither exists in groups nor depends on others for survival) and therefore drastically different from ours.
Well, most sci-fi still fits the bill. Vinge is a bit interesting in that he plays around with the idea: with the Tines, where an "individual" (in the human sense) is a pack of five of them[0]; with civilizations that "transcend", after which no one has any idea what they are about anymore; and with how a bunch of civilizations evolved from humans, which explains how they all just happen to operate on the equivalent human meatbag scale.
[0] Genuinely not unlike how a congregation of gelled-together humans is an entity that can achieve much more than an individual human.
"Brain_s_". I find we (me included) generally overlook/underestimate the distributed nature of human intelligence, included in the AI field. That's why when I first heard of mixture of experts I was thrilled about the idea and the potential. (One could also see similarities in random forest).
I believe a path to AGI(tm) would be to reproduce the evolution of human intelligence artificially. Start with small models training bigger and bigger models and let the bigger successfull models (insert RL, genetic algos, etc.) "reproduce" and teach newer models from scratch. Having different model architecture cohabit could maybe even lead to the kind of specializations we see in parts of the brain
I think it is important to realize that we need to understand language on our own terms. The logic of LLMs is not unlike alien technology to us. That being said, the minimalist program of Chomsky led nowhere because, just like programming, it found edge case after edge case, reducing the theory further and further until there was no program anymore that resembled a real theory. But it is wrong to assume that the big progress in linguistics was in vain, for the same reason it is wrong to say Prolog, theorem provers, type theory, and category theory are in vain now that we have LLMs that can produce everything in C++. We can use the technology of linguistics to ground our knowledge, and in some dark corner of the LLM it might already have integrated this. I think the original divide between the sciences and the humanities might be deeper and more fundamental than we think. We need linguistics as a discipline of the humanities, and maybe huge swaths of Computer Science are just that.
I agree with you. I think the fundamental problem is we don't have a good unified theory of fuzzy reasoning. We have a lot of different formal approaches but they all have flaws.
Now LLMs made a big breakthrough in that they showed we can do decent fuzzy reasoning in practice, but at the cost of nobody understanding the underlying process formally.
If we had a good unified (formal) theory of fuzzy reasoning, we could build models that reason better (or at least more predictably). But we won't get a better theory by scaling the existing models, I think Chomsky is right about that.
We lack the goal, not the means. If I am asking an LLM a question, what answer do I want? A playfully creative one? A strictly logical one? A pleasingly sycophantic one? A harshly critical one? An out-of-the-box devil's advocate one? A beautiful one? A practical one? We have no clue how to express these modes in logical reasoning.
By way of analogy, the result of the theorem prover is usually actionable (i.e. we can replace one kind of expression with its proven equivalent for some end like optimizing code-size or code-run-time), but mathematicians _still_ endeavor to translate the unwieldy and verbose machine-generated proofs into concise human-readable proofs, because those readable proofs are useful to our understanding of mathematics even long after the "productive action" has been taken.
In a way, this collaboration between the machine and the human is better than what came before, because now productive actions can be taken sooner, and mathematicians do not have to doubt whether they are searching for a proof that exists.
>That being said, the minimalist program of Chomsky led nowhere because, just like programming, it found edge case after edge case, reducing the theory further and further until there was no program anymore that resembled a real theory
As someone who has worked in linguistics, I don't really see what you're talking about. Minimalism is not full of exceptions (please elaborate on a specific example if you have one). Minimalism was created to make the old theory, Government and Binding, simpler.
Yes, and the project can be criticised for reducing until there's no value anymore. Well-known instances of this process:
- Predicate Fronting in Free Relatives: In sentences like "What John saw was a surprise," labeling the fronted predicate is not without problems; Merge doesn't yield a clear head.
- Optional Verb Movement in Persian: Yes-no questions where verbs can optionally move (e.g., "Did you go?" vs. "You went?") mess up feature-checking's binary mode.
- Non-Matching Free Relatives with Pied-Piping: Structures like "In whichever city you live, you'll find culture" mess up standard labeling and need extra stipulations.
- Some Subjects in Finnish: Nominative vs. non-nominative subjects (e.g., "Minua kylmä" [me-ACC cold]) complicate Minimalist case assignment.
But we don't have LLMs that can "produce everything in C++".
We have LLMs that can get some boilerplate right if you use them in a greenfield project, and that will repeatedly mess up your code once it grows enough for you to actually need assistance grokking it.
> Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions.
Isn't Physics trying to describe the natural world? I'm guessing you are taking two positions here that are causing me confusion with your statement: 1) that our minds can be explained strictly through physical processes, and 2) our minds, including our intelligence, are outside of the domain of Physics.
If you take 1) to be true, then it follows that Physics, at least theoretically, should be able to explain intelligence. It may be intractably hard, like it might be intractably hard to have physics describe and predict the motions of more than two planetary bodies.
I guess I'm saying that Physical laws ARE natural laws. I think you might be thinking that natural laws refer solely to all that messy, living stuff.
I think their emphasis is on simple and beautiful; not that human intelligence is outside the laws of physics, but that there will never be a “Maxwell’s equations” modelling the workings of human intelligence, it will just be a big pile of hacks and complex interactions of many distinct parts; nothing like the couple of recursive LISP macros people of the 1960s might have hoped to find.
> Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions...Perhaps it's all just emergent properties of some messy evolved substrate.
Yeah, it is very likely that there are no laws that will do this; it's the substrate. The fruit fly brain (let alone the human one) has been mapped, and we've figured out that it's not just the synapse count but the 'weights' that matter too [0]. Mind you, those weights adjust in real time when a living animal is out there.
You'll see in the literature that there are people with some 'lucky' form of hydranencephaly where their brain is as thin as paper. But they vote, get married, have kids, and for some strange reason seem to work in mailrooms (not a joke). So we know it's something about the connectome that's the 'magic' of a human.
My pet theory: we need memristors [2] to better represent things. But that takes redesigning the computer from the metal on up, so it is unlikely to occur any time soon in this current AI craze.
> The big lesson from AI development in the last 10 years for me has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics.
Yeah, biologists get there too, just the other way around, with animals and humans. Like, dogs make vitamin C internally, and humans have that gene too; it's just dormant, ready for evolution (or genetic engineering) to reactivate. That said, these neuroscience issues with us and the other great apes are somewhat large and strange. I'm not big into that literature, but from what little I know, the exact mechanisms and processes that get you from tool-using orangutans to tool-using humans seem a bit strange and harder for us to grasp. Again, not in that field though.
In the end though, humans are special. We're the only ones on the planet that ever really asked a question. There's a lot to us and we're actually pretty strange in the end. There's many centuries of work to do with biology, we're just at the wading stage of that ocean.
>You'll see in the literature that there are people with some 'lucky' form of hydranencephaly where their brain is as thin as paper. But they vote, get married, have kids, and for some strange reason seem to work in mailrooms (not a joke). So we know it's something about the connectome that's the 'magic' of a human.
These cases seem totally fascinating. Have you any links to examples or more information? (I'm also curious about the odd detail of them tending to work in mailrooms.)
> it has made me question whether such a thing even exists
I was reading a reddit post the other day where the guy lost his crypto holdings because he input his recovery phrase somewhere. We question the intelligence of LLMs because they might open a website, read something nefarious, and then do it. But here we have real humans doing the exact same thing...
> I guess humans really aren't so special after all
No they are not. But we are still far from getting there with the current LLMs and I suspect mimicking the human brain won't be the best path forward.
I think a system too perfect will not show any creativity. Maybe wild new ideas require taking risks which means a system that can invent new things will end up making bad choices.
> one, that the facility of LLMs is fantastic and useful
I didn't see where he was disagreeing with this.
I'm assuming this was the part you were saying he doesn't hold, because it is pretty clear he holds the second thought.
| is it likely that programs will be devised that surpass human capabilities? We have to be careful about the word “capabilities,” for reasons to which I’ll return. But if we take the term to refer to human performance, then the answer is: definitely yes.
I have a difficult time reading this as saying that LLMs aren't fantastic and useful.
| We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it’s a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding.
This seems to be the core of his conversation. That he's talking about the side of science, not engineering.
It baffles me how dismissive academics overall seem of recent breakthroughs in sub-symbolic approaches as models from which we can learn about 'intelligence'.
It is as if a biochemist looks at a human brain and concludes there is no 'intelligence' there at all, just a whole lot of electro-chemical reactions.
It fully ignores the potential for emergence.
Don't misunderstand me, I'm not saying 'AGI has arrived', but I'd say even current LLMs most certainly hold interesting lessons for the science of human language development and evolution. What can the success of transfer learning in these models contribute to the debates on universal language faculties? How do invariants correlate across LLM systems and humans?
There are two kinds of emergence: one scientific, the other a strange, vacuous notion invoked in the absence of any theory or explanation.
The first case is the emergence we talk about when, for example, gas or liquid states, or combustibility, emerge from certain chemical or physical properties of particles. It's not just that they're emergent; we can explain how they're emergent and how their properties are already present in the lower level of abstraction. Emergence properly understood is always reducible to lower states, not a magical word for when you don't know how something works.
In these AI debates, however, that's exactly how "emergence" is used: people just assert it, as if it followed necessarily from their assumptions. They don't offer a scientific explanation. (The same is true of various other topics, like consciousness, or what have you.) This is pointless; it's a sort of god of the gaps disguised as an argument. When Chomsky talks about science proper, he correctly points out that these kinds of arguments have no place in it, because the point of science is to build coherent theories.
>not a magical word for when you don't know how something works.
I'd disagree; emergence is typically what we don't understand. When we understand it, it's rarely considered an emergent concept, just something that is.
>They don't offer a scientific explanation.
Correct, because we don't have the tooling necessary to explain it yet. Emergence, as you stated, comes from simpler concepts underneath; for example, burn hydrogen and oxygen, and water emerges from that.
Ecosystems are an emergent property of living systems, one that we can explain rather well these days, after we realized there were gaps in our knowledge. It's taken millions and millions of hours of research to piece all these bits together.
Now we are at the same place with large neural nets. What you say is pointless is not pointless at all: it points at the exact things we need to work on if we want to understand them. But at the same time, understanding isn't necessary; we have made advances in scientific fields that we don't fully understand.
I am not aware of any scientific kind of emergence. There's philosophical emergence, and its counterpoint - ontological reductionism.
Most people have an intuitive sense that philosophical emergence is true, and that bubbles up in their writing, taken as an axiom that we're all supposed to go along with.
On closer inspection, it is not clear to me that this isn't simply a confusion or illusion caused by the tendency of the human mind to apply abstractions and socially constructed categories on top of complicated phenomena, and those abstractions are confused for actual effects that are different from the underlying base-level phenomena being described.
Nobody claims mystical gaps. There is no deus ex machina claim in emergence. However, e.g., phenomena that are stable at a higher-level model might be fully dynamic in a lower-level model.
> the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution
People's illusions, and their willingness to debase their own authority and control to take shortcuts that optimise towards lowest effort / highest yield (not dissimilar to something you would get with... autoregressive models!), were an astonishing insight to me.
Well said. It's wild when you think of how many "AI" products are out there that essentially entrust an LLM to make the decisions the user would otherwise make. Recruitment, trading, content creation, investment advice, medical diagnosis, legal review, dating matches, financial planning and even hiring decisions.
At some point you have to wonder: is an LLM making your hiring decision really better than rolling a die? At least the die doesn't give you the illusion of rationality; it doesn't generate a neat-sounding paragraph "explaining" why candidate A is the obvious choice. The LLM produces content that looks like reasoning but has no actual causal connection to the decision: it's a mimicry of explanation without the substance of causation.
You can argue that humans do the same thing. But post-hoc reasoning is often a feedback loop for the eventual answer. That's not the case for LLMs.
> it doesn't generate a neat-sounding paragraph "explaining" why candidate A is the obvious choice.
Here I will argue that humans do the same thing. For any business of any size, recruitment has been pretty awful in recent history. The end user, that is, the manager the employee will be hired under, is typically a later step after a lot of other filters, some automated, some not.
At the end of the day the only way is to measure the results. Do LLMs produce better hiring results than some outside group?
Also, LLMs seem very good at medical pre-diagnosis. If you accurately portray your symptoms to them they come back with a decent list of possible candidates. In barbaric nations like the US where medical care can easily lead to bankruptcy people are going to use it as a filter to determine if they should go in for a visit.
Chomsky's central criticism of LLMs is that they can learn impossible languages just as easily as they learn possible languages. He refers to this repeatedly in the linked interview. Therefore, they cannot teach us about our own intelligence.
However, a paper published last year (Mission: Impossible Language Models, Kallini et al.) showed that LLMs do NOT learn impossible languages as easily as they learn possible languages. This undermines everything that Chomsky says about LLMs in the linked interview.
I'm not that convinced by this paper. The "impossible languages" are all English with some sort of transformation applied, such as shuffling the word order. It seems like learning such languages would require first learning English and then learning the transformation. It's not surprising that systems would be worse at learning such languages than just learning English on its own. But I don't think these sorts of languages are what Chomsky is talking about. When Chomsky says "impossible languages," he means languages that have a coherent and learnable structure but which aren't compatible with what he thinks are innate grammatical facilities of the human mind. So for instance, x86 assembly language is reasonably structured and can express anything that C++ can, but unlike C++, it doesn't have a recursive tree-based syntax. Chomsky believes that any natural language you find will be structured more like C++ than like assembly language, because he thinks humans have an innate mental facility for using tree-based languages. I actually think a better test of whether LLMs learn languages like humans would be to see if they learn assembly as well as C++. That would be incomplete of course, but it would be getting at what Chomsky's talking about.
Also, GPT-2 actually seems to do quite well on some of the tested languages, including word-hop, partial reverse, and local-shuffle. It doesn't do quite as well as plain English, but GPT-2 was designed to learn English, so it's not surprising that it would do a little better. For instance, the tokenization seems biased towards English. They show "bookshelf" becoming the tokens "book", "sh", and "lf", which in many of the languages get spread throughout a sentence. I don't think a system designed to learn shuffled-English would tokenize this way!
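(If you want to poke at the tokenization point yourself, here is a quick sketch; it assumes the tiktoken package, which ships GPT-2's BPE. The sentence is made up, and the exact pieces are whatever the English-trained merges produce, which is the bias being pointed at.)

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("gpt2")  # GPT-2's byte-pair encoding

    for text in ["bookshelf", "the books sat on the bookshelf"]:
        token_ids = enc.encode(text)
        pieces = [enc.decode([t]) for t in token_ids]
        print(text, "->", pieces)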
The authors of that paper misunderstand what "impossible languages" refers to. It doesn't refer to any language a human can't learn. It refers to computationally simple plausible alternative languages that humans can't learn, in particular linear-order (non-hierarchical structure) languages.
What exactly do you mean by "analogous to our own" and "in a deep way", without making an appeal to magic or to not-yet-discovered fields of science? I understand what you're saying, but when you scrutinize these things you end up in a place that's less scientific than one might think. That kind of seems to be one of Chomsky's salient points: we really, really need to get a handle on when we're doing science in the contemporary Kuhnian sense and when we're doing philosophy.
The AI works on English, C++, Smalltalk, Klingon, nonsense, and gibberish. Like Turing's paper, this illustrates the difference between "machines being able to think" and "machines being able to demonstrate some well-understood mathematical process like pattern matching."
> not, at least so far, substantially deepened our understanding of our own intelligence
Science progresses in such a way that when you see it happen in front of you it doesn't seem substantial at all, because we typically don't understand the implications of new discoveries.
So far, in the last few years, we have discovered the importance of the role of language in intelligence. We have also discovered quantitative ways to describe how close one concept is to another. More recently, from the new reasoning AI models, we have discovered something counterintuitive that also seems true of human reasoning: incorrect/incomplete reasoning can often reach the correct conclusion.
In my opinion it will or already has redefined our conceptual models of intelligence - just like physical models of atoms or gravitational mechanics evolved and newer models replace the older. The older models aren't invalidated (all models are wrong, after all), but their limits are better understood.
People are waiting for this Prometheus-level moment with AI where it resembles us exactly but exceeds our capabilities, but I don't think that's necessary. It parallels humanity explaining Nature in our own image as God and claiming it was the other way around.
OK, I will be the useful idiot. I don't fully understand your anecdote. Could you explain what exactly it is that you perceived and that the other engineer failed to see?
Periodic and regular cycles (a sine wave) suggest that there is over-correction happening in the system, preventing it from reaching a stable point. More frequent measurements or tempered corrections may be called for.
Concrete example: the system sees its queue utilization is high, so it throttles incoming requests, but for too long; the queue looks super healthy on the next check, so the throttle is removed, but again it waits too long before checking, and now the utilization is too high again.
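A toy simulation of that shape (all numbers made up, purely to show the pattern): a bang-bang throttle with a long check interval see-saws, while a gentler proportional correction checked more often settles near a target.

    # Toy queue model: utilization rises with admitted inflow, drains at a fixed rate.
    # correction(util) -> admission fraction in [0, 1].
    def simulate(check_every, correction, steps=200):
        util, admit, history = 0.5, 1.0, []
        for t in range(steps):
            if t % check_every == 0:
                admit = correction(util)  # only observe and correct periodically
            util = max(0.0, min(1.0, util + 0.10 * admit - 0.06))
            history.append(util)
        return history

    # Bang-bang throttle, checked rarely: utilization see-saws instead of settling.
    bang_bang = simulate(check_every=20,
                         correction=lambda u: 0.0 if u > 0.7 else 1.0)

    # Proportional correction, checked often: settles near its target (~0.73 here).
    proportional = simulate(check_every=2,
                            correction=lambda u: max(0.0, min(1.0, 1.0 - 3 * (u - 0.6))))

    print("bang-bang tail:   ", [round(x, 2) for x in bang_bang[-8:]])
    print("proportional tail:", [round(x, 2) for x in proportional[-8:]])

Smaller check intervals and softer corrections both damp the oscillation; which lever to pull depends on how expensive the measurement is.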