Hacker News | like_any_other's comments

> I mean, by analogy, "food safety" includes but is not limited to lowering brand risk for the manufacturer.

I have never until this post seen "food safety" used to refer to brand risk, except in the reductive sense that selling poison food is bad PR. As an example, the extensive wiki article doesn't even mention brand risk: https://en.wikipedia.org/wiki/Food_safety


Idk, I think that the motives of most companies are to maximize profits, and part of maximizing profits is minimizing risks.

Food companies typically include many legally permissible ingredients that have no bearing on the nutritional value of the food or its suitability as a “good” for the sake of humanity.

A great example is artificial sweeteners in non-diet beverages. Known to have deleterious effects on health, these sweeteners are used for the simple reason that they are much, much less expensive than sugar. They reduce taste quality, introduce poorly understood health factors, and do nothing to improve the quality of the beverage except make it more profitable to sell.

In many cases, it seems to me that brand risk is precisely the calculus offsetting cost reduction in the degradation of food quality from known, nutritious, safe ingredients toward synthetic and highly processed ingredients. Certainly if the calculation was based on some other more benevolent measure of quality, we wouldn’t be seeing as much plastic contamination and “fine until proven otherwise” additional ingredients.


> A great example is artificial sweeteners in non-diet beverages.

Do you have an example? Every drink I've seen with artificial sweeteners has them because its customers (myself included) want the drink to have fewer calories. Sugary drinks are a much more clearly understood health risk than aspartame or sucralose.


Google "aspartame rumsveld" I haven't fact checked the horror story but makes a good one for the campfire.


I don’t know what is happening in the rest of the world, but here in the Dominican Republic (where, ironically, a major export is sugar) almost all soft drinks are laced with sucralose. This includes the not-labeled-as-reduced-calorie offerings from Coca-Cola, PepsiCo, and Nestlé.

The Coca-Cola labeling specifically appears intentionally deceptive. It is labeled “Coca Cola Sabor Original” with a tiny note near the fluid ounces that says “menos azúcar” (“less sugar”). On the back, it repeats the large “original flavor” label, with a subtext (larger than the “less sugar” note) claiming that Coca-Cola less-sugar contains 30 percent less sugar than the (big label again) “original flavor”. The upshot is that to understand that what you are buying is not, in fact, “original flavor” Coca-Cola, you have to be willing to look through the fine print and do some mental gymnastics, since the bottle is clearly labeled “Original Flavor”.

It tastes almost the same as straight-up Diet Coke. All of the other local companies have followed suit with no change at all in labeling, which is nominally less dishonest than intentionally deceptive labeling.

Since I have a poor reaction to sucralose, including gut function and headache, I find this incredibly annoying. OTOH it has reduced my intake of soft drinks to nearly zero, so I guess it is indeed healthier XD?


That may sadly be so, but it does not change the plain meaning of the term "food safety".

Agreed.

Its application perhaps pushes the boundaries.

For example if a regulatory body establishes “food safety” limits, they tend to be permissive up to the point of known harm, not a guide to wholesome or healthy food, and that is perhaps a reasonable definition of “food safety” guidelines.

Their goals are not so much to ensure that food is safe, for which we could easily just stick to natural, unprocessed foods, but rather to ensure that most known serious harms are avoided.

Surely it is a grey area at best, since many additives may be somewhat deleterious in general but offer benefits in reducing harmful contamination and aiding shelf life, which may actually produce more positive outcomes than the negatives they introduce.

The internal application of said guidelines by a food manufacturer, however, may very well be incentivized primarily by the avoidance of brand risk, rather than the actual safety or beneficial nature of their products.

So I suppose it depends on whether we are talking about the concept in a vacuum or the concept in application. I’d say in application, brand risk is a serious contender for primary motive. However I’m sure that varies by company and individual managers.

But yeah, the term is unambiguous. Words have meanings, and we should respect them if we are to preserve the commons of accurate and concise communication.

Nuance and connotation are not definitions.


> except in the reductive sense that selling poison food is bad PR

Yes, and?

Saying "AI may literally kill all of us" is bad PR, irregardless of if the product is or isn't safe. AI encouraging psychotic breaks is bad PR in the reductive sense, because it gets in the news for this. AI being used by hackers or scammers, likewise.

But also consider PR battles about which ingredients are safe. Which additives, which sweeteners, GMOs, vat-grown actual meat, vat-grown mycoprotein meat substitute, sugar free, fat free, high protein, soy, nuts, organic, etc., many of which are fought over whether the contents are as safe as they're marketed to be.

Or at least, I thought saying "it will kill us all if we get this wrong" was bad PR, until I saw this quote from a senator interviewing Altman, which just goes to show that even being extraordinarily blunt somehow still goes over the heads of important people:

--

Sen. Richard Blumenthal (D-CT):

I alluded in my opening remarks to the jobs issue, the economic effects on employment. I think you have said in fact, and I'm gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern,

- https://www.techpolicy.press/transcript-senate-judiciary-sub...

--

So, while I still roll my eyes at the idea this was just a PR stunt… if people expected reactions like Blumenthal's, that's compatible with it just being a PR stunt.


Nothing to worry about - life in the year 1990 was good, and that was with an inflation-adjusted GDP per capita just 59% of the current (2023) value [1]. So we would need 82 such 0.5% drops until things got as "bad" as in the year 1990.

Because GDP is a meaningful quality of life indicator.

[1] https://data.worldbank.org/indicator/NY.GDP.PCAP.KD?location...
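The 82 count treats each drop as 0.5% of today's GDP (41 / 0.5); if the 0.5% drops instead compound multiplicatively, it takes closer to 105 of them. A quick sanity check (illustrative arithmetic only):

```python
import math

current = 1.0
target = 0.59 * current  # 1990 per-capita GDP: 59% of the 2023 value [1]

# Additive reading: each drop removes 0.5% of *today's* GDP.
additive_drops = (current - target) / 0.005
print(round(additive_drops))  # 82

# Compounding reading: each drop removes 0.5% of the then-current level.
compound_drops = math.log(target / current) / math.log(1 - 0.005)
print(round(compound_drops, 1))  # 105.3
```

Either way the point stands: it takes a great many such drops to get back to 1990 levels.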


Yes, but this data does not show that the "upper-middle class" (ie the professional class), which I imagine everyone here is a part of, has had significantly more gains than every other class aside from the very wealthy. For people with college degrees, and especially advanced degrees, that wealth has been realized quite clearly (and I don't know a whole lot of people in that position who are remotely interested in leaving the US, aside from those who are having trouble in academia and don't realize how easy it is to move to the private sector). For those with jobs in sectors that are less technologically advanced (who tend to have less education), thus less productive overall, their compensation is matched by the equivalent thereof in the global market, which is much more fiercely competitive than it was even 30-40 years ago.

Unpaywalled: https://archive.is/dxBo0

Apparently, there was a government report that warned that “instances of China’s espionage, interference in our democracy and the undermining of our economic security have increased in recent years”.

However, most of the report is classified, so we can only speculate what those acts of interference and undermining were.


Huawei et al. spying on competitors and shipping backdoors with support from the PRC, news at 5

Yeah, if you want to pump oil, you better also build your own railways to distribute it, because you won't like what Standard Oil will charge you for their trains.

>Yeah, if you want to pump oil, you better also build your own railways to distribute it

You're being facetious, but OP is right. For software platforms, this has been a constant. It happened with Twitter, Facebook, Google (Search/Ads, Maps, Chat), Reddit, LinkedIn - basically every major software platform started off with relatively open APIs that were then closed off as it gained critical mass and focused on monetization.


I'm not being facetious, I'm pointing out a real problem - the market fraction accessible to a new business, that isn't reliant on the good will of some giant incumbent, is shrinking. This time it's Discord, another time it's Google ads/search blacklist, or Microsoft flagging your website or program as malicious, or Facebook shadowbanning you (or charging to show your posts even to people who explicitly followed you [1]), or Walmart extorting you for shelf space access, VISA and PayPal rejecting you..

If your move is to simply retreat, and give up all this ground, what market is left for you? People who get their news and ads by paper mail, shop only at tiny independent stores, paying in cash? How many businesses can survive with ~5% (a generous estimate of the described market's relative size) of their current traffic?

[1] https://www.bentbusinessmarketing.com/why-your-fans-arent-se...


And it's bigger than software. This is just vertical integration; both your suppliers and your customers will ask if they can replace you. As they should. If your only value is as a middleman that your upstream supplier can easily replace... well, that's not a lot of value.

You're hardly safe on operating system platforms either. Look at the long history of Apple sherlocking independent vendors.

That’s actually solid advice. At a certain point it’s cheaper to build your own datacenter than to rent servers…

> once you voluntarily give your data to a third party-- e.g. when you sent it to OpenAI-- it's not yours anymore and you have no reasonable expectation of privacy about it.

The 3rd-party doctrine is worse than that - the data you gave is not only no longer yours, it is not theirs either, but the government's. They're forced to act as a government informant, without any warrant requirements. They can say "we will do our very best to keep your data confidential", and contractually bind themselves to do so, but hilariously, in the Supreme Court's wise and knowledgeable legal view, this does not create an "expectation of privacy", despite whatever vaults and encryption and careful employee vetting and armed guards stand between your data and unauthorized parties.


I don't think it is accurate to say that the data becomes the government's or they have to act as an informant (I think that implies a bit more of an active requirement than responding to a subpoena), but I agree with the gist.

This clearly seems counter to the spirit of the 4th amendment.

> What people say they want and what people choose to buy are very different things.

As the mac & cheese box featuring Super Mario in the article hints, a big chunk of these people are children. Is it any surprise they don't make the most rational of choices?

On the other hand, this is like asking an alcoholic if he wishes to quit drinking. He'll say yes, but then go into a bar on his way home from work... People claim to want to be healthy, yet their discipline isn't perfect and their will is not iron - what hypocrites!

On the third hand - people do vote and lobby for what they say they want (in this case banning artificial dyes). Why should we give preference to their decisions in the market, vs. their decisions in the voting booth? Or in other words - why do purchasing decisions reveal preference, but voting decisions do not?


Because it's based on physics, which is based on mathematics. Alternately, even if we one day learn that physics is not reducible to mathematics, both humans and computers are still based on the same physics.

And the soul?

So far, we have found no need for this hypothesis.

(Aside from "explaining" why AI couldn't ever possibly be "really intelligent" for those who find this notion existentially offensive.)


"emergent superintelligent AI" is as much superstition as believing in imaterial souls. One company literally used the term "people spirits" to refer to how LLMs behave in their official communications.

It's a cult. Like many cults, it tries to latch on science to give itself legitimacy. In this case, mathematics. It has happened before many times.

You're trying to say that, because it's computers and stuff, it's science and therefore based on reason. Well, it's not. It's just a bunch of non sequiturs.


I didn't say anything about "emergent superintelligent AI".

I'm confused.

We are on a comment section about a post with AGI in the title.

The term is scientifically vague, but it is established in popular culture that it is related to superintelligence and emergent behavior. If you don't agree, you owe the reader a better definition.

Given this context, if you're not talking about that, what are you talking about then?


The thread though is more broadly about AI in general. My remark was wrt the fact that talk of "souls" in context of AI usually boils down to drawing a red line between what AI could be even in principle, as humans (i.e. "no soul" -> "it's not actually intelligent" etc; some folk use "no qualia" to the same effect, and it's the same argument in disguise). The problem with it is that there's nothing about either AI or human intelligence so far that requires a concept such as soul, so religion aside, the only reason to reach for it is if you're trying to draw that line and running out of arguments as to why it should be there.

I just asked some user "what makes you think the human brain is based on mathematics?"

All this nonsense about souls was filled up by people trying to predict what my reasoning was, instead of _actually answering the question_ (which, apparently, can only be answered here _in opposition_ to something, not with plain honest words).

I left the line to be drawn by whoever answered it, and the answers show an abundance of misunderstanding about science.


That would be nice. But as far as I know, this paper makes no supernatural claims.

You're mistaking the thing for the tool we use to describe the thing.

Physics gives us a way to answer questions about nature, but it is not nature itself. It is also, so far (and probably forever), incomplete.

Math doesn't need to agree with nature, we can take it as far as we want, as long as it doesn't break its own rules. Physics uses it, but is not based on it.


So does the human brain transcend math, or are humans not generally intelligent?

Humans use soul juice to connect to the understandome. Machines can't connect to the understandome because of Gödel's incompleteness; they can only make relationships between tokens, not map them to reality like we can via magic.

Stochastic parrots all the way down

https://ai.vixra.org/pdf/2506.0065v1.pdf


Hi and thanks for engaging :-)

Well, it in fact depends on what intelligence is to your understanding:

- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through or simply got lucky, having found relativity theory and other innovations just at the convenient moment in time ... So then, AI will soon also stumble over all kinds of innovations. Neither will be able to deliberately think beyond what is thinkable at the respective present.

- But if intelligence is not only a level of pure rational cognition, but maybe an ability to somehow overcome these frame limits, then humans obviously exert some sort of abilities that are beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.

- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.

The main point is: neither algorithms nor rationality can point beyond themselves.

In other words: You cannot think out of the box - thinking IS the box.

(maybe have a quick look at my first proof, last chapter before the conclusion; you will find a historical timeline on that IQ thing)


Let me steal another user's alternate phrasing: since humans and computers are both bound by the same physical laws, why does your proof not apply to humans?

Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving. (And also: I am as bound by thermodynamics as my mother-in-law is, still I get disarranged by her mere presence, while I always have to put laxatives in her wine to counter that.)

2. Human rationality is just as limited as algorithms are. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because it doesn't exist.

3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.

In a nutshell: there obviously is no law that forbids us to innovate - we do this quite often. There is only a logical boundary, which says that there is no way to derive from a system something that is not part of it - no way for thinking to point beyond what is thinkable.

Imagine little Albert asking his physics teacher in 1880: "Sir, for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the probable answer... rather something like "Have you been drinking? Stop doing that mental crap - go away, you little moron!"


> Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea, then if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?


If by algorithmic you just mean anything that a Turing machine can do, then your theorem is asserting that the Church-Turing thesis isn't true.

Why not use that as the title of your paper? That's a more fundamental claim.


The lack of mention of the Church-Turing thesis in both papers suggests he hasn't even considered that angle.

But it is the fundamental objection he would need to overcome.

There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.


> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allows more than algorithmic cognition".

Your claim here also goes against the physical interpretation of the Church-Turing thesis.

Without rigorously addressing this, there is no point taking your papers seriously.


No problem, here is your proof - although a bit long:

1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where

Σ is a finite symbol set and R is a finite set of inference rules.

Let Ω′ = (Σ′, R′) be a candidate successor frame.

Define a frame jump as follows. Frame Jump Condition: Ω′ extends Ω if Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅.

Let P be a deterministic Turing machine (TM) operating entirely within Ω.

Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.

(Where Σ = the set of all finite symbol strings in the frame; derivable outputs are formed from Σ under the inference rules R.)

Proof Sketch: P’s tape alphabet is fixed to Σ and symbols derived from Σ. By induction, no computation step can introduce a symbol not already in Σ. ∎

2. APPLICATION: Newton → Special Relativity

Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian Frame)

Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR Frame)

Let φ = “The speed of light is invariant in all inertial frames.”

Let Tᴿ be the theory of special relativity.

Let Pᴺ be a TM constrained to Σᴺ.

By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.

But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ

→ Therefore Pᴺ ⊬ φ → Tᴿ ⊈ L(Pᴺ)

Thus:

Special Relativity cannot be derived from Newtonian physics within its original formal frame.

3. EMPIRICAL CONFLICT

Let:

Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)

Axiom N₂: Ether model for light speed

Data D: Michelson–Morley ⇒ c = const

In Ωᴺ, combining N₁ and N₂ with D leads to contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ. But by Lemma 1 this is impossible within Pᴺ. -> The frame must be exited to resolve the data.

4. FRAME JUMP OBSERVATION

Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.

5. FINALLY

A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅

B: Einstein was human

C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).

Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.

QED.

BUT: Can Humans COMPUTE those functions? (As you asked)

-> Answer: No - because frame-jumping is not a computation.

It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Gödelian paradox (truth unprovable in frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.

In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.


Whoa there boss, extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in human realm, but then model it with these impoverished machines that have no metatheoretic access.

This is really sloppy work, I'd encourage you to look deeper into how (eg) HOL models "theories" (roughly corresponding to your idea of "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.

Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.

Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.

Of course I accept that humans are subject to the Goedelian curse, and we are often incoherent, and we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.


The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly — generatively.

You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame-jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it.

This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But that phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.” If computation is always required to have an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump — which is my point exactly...

> It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅

This is trivially false. For any TM with such an alphabet, you can run a program that simulates a TM with an alphabet that includes Σ′.
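A toy sketch of this standard encoding/simulation argument (all names invented here, not from either paper): a machine whose output alphabet is just {0, 1} can still emit strings that encode symbols, such as γ, that lie outside that alphabet.

```python
# Sketch: a machine restricted to alphabet {"0", "1"} can still "emit"
# symbols outside that alphabet by emitting their encodings.

SIGMA = {"0", "1"}                       # the small alphabet the machine is fixed to
SIGMA_PRIME = ["t", "x", "c", "γ", "η"]  # larger alphabet incl. "new" symbols

# Fixed-width binary code for each symbol of the larger alphabet.
WIDTH = 3
encode = {s: format(i, f"0{WIDTH}b") for i, s in enumerate(SIGMA_PRIME)}
decode = {v: k for k, v in encode.items()}

def emit(symbols):
    """Output of the {0,1}-machine: a string over SIGMA only."""
    out = "".join(encode[s] for s in symbols)
    assert set(out) <= SIGMA  # the machine never leaves its own alphabet
    return out

def interpret(tape):
    """Read the output back as symbols of the larger alphabet."""
    return [decode[tape[i:i + WIDTH]] for i in range(0, len(tape), WIDTH)]

tape = emit(["c", "γ"])  # the machine "talks about" c and γ...
print(tape)              # ...using only 0s and 1s
print(interpret(tape))
```

The point being: Σ bounds the machine's raw output characters, not what those outputs can encode, which is why alphabet size places no real limit on expressive reach.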


> Let a semantic frame be defined as Ω = (Σ, R)

But if we let an AGI operate on Ω2 = (English, Science), that semantic frame would have encompassed both Newton and Einstein.

Your argument boils down into one specific and small semantic frame not being general enough to do all of AGI, not that _any_ semantic frame is incapable of AGI.

Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.


Yes, of course — if you define Ω² as “English + All of Science,” then congratulations, you have defined an unbounded oracle. But you’re just shifting the burden.

No system starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein because you gave it Einstein. But now ask it to generate the successor of Ω² (call it Ω³) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ². Same limitation, new domain. This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exceed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.


None of this is relevant to what I wrote. If anything, it suggests that you don't understand the argument.

If anything, your argument is begging the question - a logical fallacy - because it rests on humans exceeding the Turing computable in order to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you cannot use human abilities as evidence that something isn't Turing computable.

And so your reasoning is trivially circular.

EDIT:

To go into more specific errors, this is false:

> Let P be a deterministic Turing machine (TM) operating entirely within Ω.

>

> Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.

P can do so by simulating a TM P' whose alphabet includes σ. This is fundamental to the theory of computability, and holds for any two sets of symbols: You can always handle the larger alphabet by simulating one machine on the other.

When your "proof" contains elementary errors like this, it's impossible to take this seriously.


You’re flipping the logic.

I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.

Then I look at real-world examples (Einstein is just one) where new symbols, concepts, and transformation rules appear that were not derivable within the predecessor frame. You can claim, philosophically (!), that “well, humans must be computable, so Einstein’s leap must be too.” Fine. But now you’re asserting that the uncomputable must be computable because humans did it. That’s your circularity, not mine. I don’t claim humans are “super-Turing.” I claim that frame-jumping is not computation. You can still be physical, messy, and bounded... and generate outside your rational model. That’s all the proof needs.


No, I'm not flipping the logic.

> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.

Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algoritmic systems that somehow don't apply to humans must inherently be incorrect.

And so you're sidestepping the issue.

> But now you’re asserting that the uncomputable must be computable because humans did it.

No, you're here demonstrating you failed to understand the argument.

I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable said "proof" would still give the same result. As such it does not prove anything.

And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.

> I don’t claim humans are “super-Turing.”

Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.

That you don't seem to understand this tells me you don't understand the subject.

(See also my edit above; your proof also contains elementary failures to understand Turing machines)


You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.

I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.

Here’s the actual chain:

1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.

2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.

3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference), then it will not be capable of frame jumps. And it is incapable of that not because it lacks compute; the system is structurally bounded by what it is.

So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert.

I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed). Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate. And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.

You can still believe humans are Turing machines, fine by me. But if this belief is to be more than some kind of religious statement, then it is you who would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅. It is you who would need to show how uncomputable concepts emerge from computable substrates without violating containment (and that means: without violating its own logic, as in formal systems, logic and containment end up as the same thing: your symbol set defines your expressive space; step outside that, and you're no longer reasoning, you're redefining the space, the universe you're reasoning in).

Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.

Also: if you believe the only valid proof of AGI impossibility must rest on metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.

That’s trading epistemic rigor for intellectual insulation.

As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.


There's nothing rigorous about this. It's pure crackpottery.

As long as you claim to disprove AGI, it inherently follows that you need to prove that humans exceed the Turing computable to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental lack of understanding of the problem.

> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

This is only true if humans exceed the Turing computable, as otherwise humans are proof that this is something that an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, you are making the claim that humans can.

> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).

This is a direct statement that you claim that humans are observed to exceed the Turing computable.

> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅

This is fundamental to Turing equivalence. If there exists any Turing machine that can generate Σ′, then any Turing machine can generate Σ′.

Anything that is possible with any Turing machine, in fact, is possible with a machine with as few as 2 symbols (the smallest (2,3) Turing machine is usually 2 states and 3 symbols, but per Shannon you can always trade states for symbols, and so a (3,2) Turing machine is also possible). This is because you can always simulate an environment where a larger alphabet is encoded with multiple symbols.
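The encoding trick described above can be sketched concretely. This is a hedged illustration, not a full TM simulator: the names (`SIGMA_PRIME`, `encode`, `decode`) are made up here, and the point is only that a binary-only tape can carry content from a larger alphabet via fixed-width codewords.

```python
# Hedged sketch of the simulation argument: a machine whose tape alphabet
# is only {0, 1} can still carry content from a larger alphabet Σ′ by
# fixed-width encoding. All names here are illustrative.

SIGMA_PRIME = ["a", "b", "c", "d"]  # a "novel" 4-symbol alphabet

def encode(symbol: str) -> str:
    """Represent a Σ′ symbol as a 2-bit codeword over {0, 1}."""
    return format(SIGMA_PRIME.index(symbol), "02b")

def decode(bits: str) -> list[str]:
    """Read a binary-only tape back as a Σ′ string."""
    return [SIGMA_PRIME[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

tape = "".join(encode(s) for s in "badcab")  # binary-only tape contents
assert set(tape) <= {"0", "1"}               # the machine never leaves {0, 1}
assert decode(tape) == list("badcab")        # yet it carries Σ′ content
```

The same move generalizes: any finite Σ′ fits in codewords of ceil(log2(|Σ′|)) binary symbols, which is why alphabet size imposes no expressive limit.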

> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.

This is exactly the part that fails.

Any TM can simulate any other, and, by extension, any TM can be extended to any alphabet through simulation.

If you don't understand this, then you don't understand the very basics of Turing Machines.


“Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?"”

Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”


Staying at high speed is symmetric! You'd both appear to age slower from the other's POV. It's only if one brother turns around and comes back, therefore accelerating, that you get an asymmetry.

Indeed. One of my other thoughts here on the Relativity example was "That sets the bar high given most humans can't figure out special relativity even with all the explainers for Einstein's work".

But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.


Given rcxdude’s reply it appears I am one of those humans who can’t figure out special relativity (let alone general)

Wrt ‘AGI/ASI’, while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.


The standard model is computable, so no. Physical law does not allow for non-computable behavior.

Humans are fallible in a way computers are not. One could argue any creative process is an exercise in fallibility.

More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer do such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.

[0]https://www.youtube.com/watch?v=LSHZ_b05W7o


Hang on, hasn't everyone spent the past few years complaining about LLMs and diffusion models being very fallible?

And we can get LLMs to do better by just prompting them to "think step by step" or replacing the first ten attempts to output a "stop" symbolic token with the token for "Wait… "?


I think humans have some kind of algorithm for deciding what's true and consolidating information. What that is I don't know.

I guess so too... but whatever it is: it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there, that cannot be transcended by tech, compute, training, data etc.

Why can't it be algorithmic? If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does do the same process to do things like consolidating information, processing the "world model" and so on.

Some processes are undoubtedly learned from experience but considering people seem to think many of the same things and are similar in many ways it remains to be seen whether the most important parts are learned rather than innate from birth.


Explain what you mean by "algorithm" and "algorithmic". Be very precise. Your entire argument hinges on this vague word, so it is necessary you explain first what it means. From reading your replies here it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.

Why can't it be algorithmic?

Why do you think it mustn't be algorithmic?

Why do you think humans are capable of doing anything that isn't algorithmic?

This statement, and your lack of mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it.


This paper is about the limits in current systems.

Ai currently has issues with seeing what's missing. Seeing the negative space.

When dealing with complex codebases you are newly exposed to, you tackle an issue from multiple angles. You look at things from data structures, code execution paths; basically humans clearly have some pressure to go, fuck, I think I lost the plot, and then approach it from another paradigm or try to narrow scope, or, based on the increased information, the ability to isolate the core place edits need to be made to achieve something.

Basically the ability to say, "this has stopped making sense" and stop or change approach.

Also, we clearly do path exploration and semantic compression in our sleep.

We also have the ability to transliterate data between semantic and visual structures, time series, light algorithms (but not exponential algorithms, we have a known blindspot there).

Humans are better at seeing what's missing, better at not closuring, better at reducing scope using many different approaches and because we operate in linear time and there are a lot of very different agents we collectively nibble away at complex problems over time.

I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.

We also have different brain structures, I assume they don't all function on a single algorithmic substrate, visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts of our brain that handle illogic better. We can introspect on our own semantic saturation, we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, we can dive on that part and then zoom back out.

There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing and even then the message type used seems flexible enough that you can shove word data into a visual processor part and see what falls out, and this happens without us thinking about it explicitly.


Yep definitely agree with this.

First of all, math isn’t real any more than language is. It’s an entirely human construct, so it’s possible you cannot reach AGI using mathematical means, as math might not be able to fully express it. It’s similar to how language cannot fully describe what a color is, only vague approximations and measurements. If you wanted to create the color green, you cannot do it by describing various properties; you must create the actual green somehow.

As a somewhat colorblind person, I can tell you that the "actual green" is pretty much a lie :)

It's a deeply philosophical question what constitutes a subjective experience of "green" or whatever... but intelligence is a bit more tractable IMHO.


I don't think it would be unfair to accept the brain state of green as an accurate representation of green for all intents and purposes.

Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.


Fair enough. But then, AGI wouldn't really be based on math, but on physics. Why would an artificially-constructed physical system have (fundamentally) different capabilities than a natural one?

My take is that it transcends any science that we'll understand and harness in the lifetime of anyone living today. It for all intents and purposes transcends science from our point of view, but not necessarily in principle.

I think the latter fact is quite self-demonstrably true.

Colloquially, anything that matches humans in general intelligence and is built by us is by definition an AGI and generally intelligent.

Humans are the bar for general intelligence.


I would really like to see your definition of general intelligence and argument for why humans don't fit it.

How so?

> are humans not generally intelligent?

Have you not met the average person on the street? (/s)


Noted /s, but truly this is why I think even current models are already more disruptive than naysayers are willing to accept that any future model ever could be.

I'm noting the high frequency of think pieces from said naysayers. It's every day now: they're all furiously writing about flaws and limitations and extrapolating these to unjustifiable conclusions, predicting massive investment failures (inevitable, and irrelevant,) arguing AGI is impossible with no falsifiable evidence, etc.

Humans do a lot of things that computers don't, such as be born, age (verb), die, get hungry, fall in love, reproduce, and more. Computers can only metaphorically do these things, human learning is correlated with all of them, and we don't confidently know how. Have some humility.

TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.

You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).


The point is that if it's mathematically possible for humans, then it naively would be possible for computers.

All of that just sounds hard, not mathematically impossible.

As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most Mind Theory researchers refute.


We don’t even know how LLMs work. But we do know the underlying mechanisms are governed by math because we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

So because of this we know reality is governed by maths. We just can’t fully model the high level consequence of emergent patterns due to the sheer complexity of trillions of interacting atoms.

So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.

What does humility have to do with anything?


> we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

> So because of this we know reality is governed by maths.

That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.

> What does humility have to do with anything?

Not the GP but I think humility is kinda relevant here.


>That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology and culture and science around this. It is the fundamental assumption humanity has made about reality. We cannot consistently demonstrate disproof against this assumption.

>Not the GP but I think humility is kinda relevant here.

How so? If I assume all of reality is governed by math, but you don't. How does that make me not humble but you humble? Seems personal.


I guess it's kinda hubris on my part to question your ability to know things with such high certainty about things that philosophers have been struggling to prove for millenia...

What you said is only true for the bits of humanity you have decided to focus upon -- capitalist, technology-driven modern societies. If you looked beyond that, there are cultures that build society upon other assumptions. You might think those other modes are "wrong", but that's your personal view. For me, I personally don't think any of these are "true" in the absolute sense, as much as I don't think yours is "true". They're just ways humans with our mortal brains try to grapple with a reality that we don't understand.

As a sidenote, probability does not mean the thing you think it means. There's no reasonable frequentist interpretation for fundamental truth of reality, so you're just saying your Bayesian subjective probability says that math is "the most likely possibility". Which is fine, except everyone has their own different priors...


I never made a claim for absolute truth. I said it’s the most likely truth given the fact that you get up every morning and drive a car or turn on your computer and assume everything will work. Because we all assume it, we assume all of logic behind it to be true as well.

Whatever probability is, whatever philosophers say about any of this, it doesn’t matter. You act like all of it is true, including the usage of the web technology that allows you to post your idea here. You are acting as if all the logic, science and technology that was involved in the creation of that web technology is real, and thus I am simply saying that because the entire world claims this assumption by action, my claim is in line with the entire world.

You can make a philosophical argument but your actions aren’t in line with that. You may say no one can prove math or probability to be real but you certainly don’t live your life that way. You don’t think that science, logic and technology will suddenly fall apart and not work when you turn on your computer. In fact you live your life as if those things are fundamentally true. Yet you talk as if they might not be.


> the entire world claims this assumption by action then my claim is in line with the entire world.

That's not what you claimed and that's not what I replied to.

You said you have a theory, and because of that you know something.

The explanation or the theory does not have to be right for something to work. The fact that I'm using modern technology does not mean that whatever theory of reality in vogue is fundamentally right. It just needs to work under certain conditions.

> You may say no one can prove math or probability to be real but you certainly don’t live your life that way. You don’t think that science, logic and technology will suddenly fall apart and not work when you turn on your computer.

That's a really strong claim to make, especially with "you". You don't know how I live. It's like seeing somebody appear in Church and denigrating them for not believing in Jesus.

No, I believe the world could fall apart at any time. Most people call it death. The fact that 99.9% people believe in death and continue their lives without panicking is probably something you want to think about as well. Heck, even a sufficiently strong solar flare could bring down this entire modern technology stack. Am I wrong to continue to use the web and debate about metaphysics given this knowledge? I don't think so, and neither do I think that my presence says anything about my belief in mathematics or whatever else governing reality.


This is exactly what I said:

This is the most likely possibility and we have based all of our technology and culture and science around this.

And that’s the summary of my claim and what I meant by this:

the entire world claims this assumption by action then my claim is in line with the entire world.

I assumed it was obvious because when does the world make a claim? The world doesn’t make any singular claim. But they do take a singular action of acting on the assumption the theories are true.

> That's a really strong claim to make, especially with "you". You don't know how I live. It's like seeing somebody appear in Church and denigrating them for not believing in Jesus.

Yeah and you know what’s crazy? I’d bet a million dollars on it. It’s insane how confident I am about it right? And you know what’s even crazier? You know that I’d win that bet even though you didn’t volunteer any information about your stance. Did I know this information through my psychic powers or what? No. I didn’t. But you also have a good idea how I know.


> We don’t even know how LLMs work

Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.

We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et. al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.
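As a hedged illustration of the "feedforward over static weights" description above (the weights, sizes, and function names here are toy inventions, not any real model's):

```python
# Hedged toy of the "feedforward pass over static weights" picture of
# LLM inference. The numbers are made up for illustration; the point is
# only that nothing changes between calls, so the output is a fixed
# function of the input.

W1 = [[0.2, -0.1], [0.4, 0.3]]  # frozen hidden-layer weights
W2 = [0.5, -0.7]                # frozen output weights

def relu(x: float) -> float:
    return x if x > 0 else 0.0

def forward(x: list[float]) -> float:
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

# Same input, same output, every time: the weights are static.
assert forward([1.0, 2.0]) == forward([1.0, 2.0])
```

Whether this static picture licenses the "case closed" comparison to dynamic biological synapses is, of course, exactly what the rest of the thread disputes.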


> Speak for yourself. LLMs are a feedforward algorithm inferring static weights to create a tokenized response string.

If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:

An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.


Yes. As long as we're confident in our definitions, that makes the questions easy. Is that the same as a feedforward algorithm inferring static weights to create a tokenized response string? Do you necessarily need an electrochemical network with external stimuli and feedback to generate legible text?

No. The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.


> The answer is already solved; AI is not a brain, we can prove this by characteristically defining them both and using heuristic reasoning.

That "can" should be "could", else it presumes too much.

For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.

I mean, humanity doesn't agree with itself what any of the three initials of AGI mean, there's 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent G-factors in human IQ scores, and also whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually; see e.g. most discussions where aphantasia comes up).

The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is because neither did evolution when we popped out.


Geoffrey Hinton, the person largely responsible for the AI revolution, has this to say:

https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...

https://youtu.be/qrvK_KuIeJk?t=284

In the video above, Geoffrey Hinton directly says we don't understand how it works.

So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.

Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM would say, nor why an LLM said something for a given prompt, which shows that we can't fully control an LLM because we don't fully understand it.

Don't try to just argue with me. Argue with the experts. Argue with the people who know more than you, Hinton.


"In that video above George Hinton, directly says we don't understand how it works."

That isn't what Hinton said in the first link. He says essentially:

People don't understand A so they think B.

But actually the truth is C.

This folksy turn of phrase is about a group of "people" who are less knowledgeable about the technology and have misconceptions.

Maybe he said something more on point in the second link, but your haphazard use of urls doesn't make me want to read on.


Take a closer look at BOTH videos. Not just the first one. He literally says the words “don’t” and “understand” in reference to LLMs.

I watch a lot of video interviews with Hinton, and I can assure you that “not understanding” is 100 percent his opinion, both from the perspective of the actual events that occurred and as someone who knows his general stance from watching tons of interviews and videos about him.

So let me be frank with you. There are people smarter than you and more eminent than you who think you are utterly and completely wrong. Hinton is one of those people. Hopefully that can kick-start the way you think into actually holding a more nuanced world view, such that you realize that nobody really understands LLMs.

Half the claims on HN are borderline religious. Made up by people who unconsciously scaffold evidence to support the most convenient view.

If we understood AI completely and utterly, we would be able to set the weights in a neural net to values that give us complete and total control over how the neural net behaves. This is literally our objective as human beings who created the neural net. We want to do this, and we absolutely know that there exists a configuration of weights in reality that can help us achieve this goal that we want so much.

Why haven’t we just reached this goal? Because we literally don’t understand how to reach this goal even though we know it exists. We. Don’t. Understand. It is literally the only conclusion that follows given our limited ability to control LLMs. Any other conclusion is ludicrous and a sign that your logical thought process is not crystal clear.


I'm not going to waste my time clicking a second video link but if Gemini can be believed he said "we don't know exactly how they work" which somehow became

"We don’t even know how LLMs work. "

In your retelling, an exaggeration which rightfully led to pushback.

Aside from that I don't know what conversation you think we are having.


Yeah let’s not talk at all. You’re wasting both your own time and my time by responding to me without clicking on my links and trying to understand what I say.

Just leave. Don’t bother communicating with me again.


Hinton invented the neural network, which is not the same as the transformer architecture used in LLMs. Asking him about LLM architectures is like asking Henry Ford if he can build a car from a bunch of scrap metal; of course he can't. He might understand the engine or the bodywork, but it's not his job to know the whole process. Nor is it Hinton's.

And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.

> you cannot tell me what an LLM would say nor tell me why an LLM said something for a given prompt showing that we can't fully control an LLM because we don't fully understand it.

You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning; no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what AI would say ahead of time if you can solve for the non-deterministic seeded entropy, or remove it entirely.

LLM weights and the tokenizer are both deterministic; the inference software often introduces variability for more varied responses. Just so we're on the same page here.
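The determinism claim above can be sketched in miniature. This is a hedged toy, not a real model: the `logits` function here is an invented deterministic stand-in for a forward pass, and the "vocabulary" is just the tokens 0..4.

```python
import math

# Hedged sketch: strip out sampling (no RNG, no temperature) and
# next-token choice becomes pure argmax, so decoding is deterministic.

def logits(context: tuple[int, ...]) -> list[float]:
    # Deterministic stand-in for a model's next-token scores.
    return [math.sin(0.7 * t + sum(context)) for t in range(5)]

def greedy_decode(prompt: tuple[int, ...], steps: int) -> list[int]:
    out = list(prompt)
    for _ in range(steps):
        scores = logits(tuple(out))
        out.append(scores.index(max(scores)))  # argmax: no sampling, no seed
    return out

# Identical runs produce identical token sequences.
assert greedy_decode((1, 2), 8) == greedy_decode((1, 2), 8)
```

Real inference stacks add sampled randomness (temperature, top-k/top-p) on top of this deterministic core, which is where run-to-run variation comes from.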


> If you remove all of the seeded RNG during inference (meaning; no random seeds, no temps, just weights/tokenizer), you can actually create an equation that deterministically gives you the same string of text every time.

That answers the "what", but not the "why" nor the "how exactly", with the latter being crucial to any claim that we understand how these things actually work.

If we actually did understand that, we wouldn't need to throw terabytes of data on them to train them - we'd just derive that very equation directly. Or, at the very least, we would know how to do so in principle. But we don't.


> But saying that we don't know how AI works is empirically false;

Your statement completely contradicts Hinton's statement. You didn’t even address his point. Basically you’re saying Hinton is wrong and you know better than him. If so, counter his argument; don’t restate your argument in the form of an analogy.

> You'd think this, but it's actually wrong.

No, you’re just trying to twist what I’m saying into something that’s wrong. First, I never said it’s not deterministic. All computers are deterministic, even RNGs. I’m saying we have no theory about it. A plane, for example: you can predict its motion via a theory. The theory allows us to understand and control an airplane and predict its motion. We have nothing like that for an LLM. No theory that helps us predict, no theory that helps us fully control, and no theory that helps us understand it beyond the high-level abstraction of a best-fit curve in multidimensional space. All we have is an algorithm that allows an LLM to self-assemble as a side effect of emergent effects.

Rest assured I understand the transformer as much as you do (which is to say, humanity has limited understanding of it); you don’t need to assume I’m just going off Hinton's statements. He and I know and understand LLMs as much as you do, even though we didn't invent them. Please address what I said and what he said with a counter-argument, not an analogy that just reiterates an identical point.


The fact that it doesn't operate identically or even similarly on the physical layer doesn't mean that similar processes cannot emerge on higher levels of abstraction.

Pretty sure in most other contexts you wouldn't agree a medieval scribe knows how a fax machine works.

Analogies aren’t proof. If an analogy doesn’t apply in a certain context, that is not a reflection of the actual situation; it just means the analogy is bad and irrelevant.

Often people who don’t know how to be logical end up using analogies as proof. And you can simply say that the analogy doesn’t apply and is inaccurate and the whole argument becomes garbage because analogies aren’t logical basis for anything.

Analogies are communication tools that facilitate easier understanding; they are not proofs or evidence of anything.


>We don’t even know how LLMs work.

Care to elaborate? Because that is utter nonsense.


We understand and build the trellis that the LLMs "grow" on. We don't have good insight into how a fully grown LLM actually turns any specific input into any specific output. We can follow it through the network, but it's a totally senseless noisy mess.

"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.

(This is an illustrative example made for easy understanding, not something I specifically went and compared)


We don't know the path for how a given input produces a given output, but that doesn't mean we don't know how LLMs work.

We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.


We have the Navier–Stokes equations which fit on a matchbox, yet for the last 25 years there's been a US$1,000,000 prize on offer to the first person providing a solution for a specific statement of the problem:

  Prove or give a counter-example of the following statement:

  In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
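For reference, the incompressible form of those "matchbox" equations, written in standard notation (this is the textbook form, not a quotation from the official prize statement), is:

```latex
% Momentum balance and the incompressibility constraint.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
```

Two short lines of PDE, yet global existence and smoothness of solutions in 3D remains open.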

And when that prize is claimed, we'll ring the bell on AGI being found. Gentleman's agreement.

I don't see how it will convince anyone: people said as much before chess, then again about Go, and are still currently disagreeing with each other if LLMs do or don't pass the Turing test.

Regardless, this was to demonstrate by analogy that things that seem simple can actually be really hard to fully understand.


I have never once heard someone describe Stockfish as potentially AGI. Honestly I don't remember anyone making the argument with AlphaGo or even IBM Watson, either.

Go back further than Stockfish — I said "people said as much before chess", as in Deep Blue versus Garry Kasparov.

Here's a quote of a translation of a quote, from the loser, about 8 years before he lost:

"""In 1989 Garry Kasparov offered some comments on chess computers in an interview with Thierry Paunin on pages 4-5 of issue 55 of Jeux & Stratégie (our translation from the French):

‘Question: ... Two top grandmasters have gone down to chess computers: Portisch against “Leonardo” and Larsen against “Deep Thought”. It is well known that you have strong views on this subject. Will a computer be world champion, one day ...?

Kasparov: Ridiculous! A machine will always remain a machine, that is to say a tool to help the player work and prepare. Never shall I be beaten by a machine! Never will a program be invented which surpasses human intelligence. And when I say intelligence, I also mean intuition and imagination. Can you see a machine writing a novel or poetry? Better still, can you imagine a machine conducting this interview instead of you? With me replying to its questions?’"""

- https://www.chesshistory.com/winter/extra/computers.html

So while it's easy for me to say today "chess != AGI", before there was an AI that could win at chess, the world's best chess player conflated being good at chess with several (all?) other things smart humans can do.


https://youtu.be/qrvK_KuIeJk?t=284

The above is a video clip of Hinton basically contradicting what you’re saying.

So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way because I've noticed that people responding to me are rude and dismiss me completely, and I don't get good-faith responses or intelligent discussion. I find that if people realize their statements contradict those of the industry and established experts, they tend to respond more charitably.

So please respond to me as if you had just told Hinton to his face that what he said is utter nonsense, because what I said is based on what he said. Thank you.


Taking GLP-1 makes me question how much hunger is really me versus my hormones controlling me.

The generic title belies how invasive this will be: a surveillance network that will incorporate data about citizens’ political opinions, philosophical beliefs, health records and other sensitive personal information.

The mostly-made-in-US version costs them $650 to manufacture, and they sell it for $2000, while the made-in-China version costs them $550, and they sell it for $800 [1]. They inflate a $100 difference in cost to a $1200 difference in price. Chinese anti-on-shoring propaganda wouldn't be this blatant.
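Running the numbers from the article (these are the figures quoted above, not independently verified):

```python
# Figures quoted from the 404media article (not independently verified).
us_cost, us_price = 650, 2000   # mostly-made-in-US version
cn_cost, cn_price = 550, 800    # made-in-China version

cost_diff = us_cost - cn_cost     # extra cost to build in the US
price_diff = us_price - us_cost and us_price - cn_price  # extra charged at retail
price_diff = us_price - cn_price

us_margin = us_price - us_cost    # gross margin per US-made unit
cn_margin = cn_price - cn_cost    # gross margin per China-made unit

print(f"cost diff: ${cost_diff}, price diff: ${price_diff}")
print(f"US margin: ${us_margin}, China margin: ${cn_margin}")
```

By these numbers the US-made unit carries a $1350 gross margin versus $250 for the China-made one — the $100 cost difference doesn't come close to explaining the $1200 price difference.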

[1] https://www.404media.co/how-a-2-000-made-in-the-usa-liberty-... - search for "You can look at our concrete numbers."

