>I would bet all of my life's assets that AGI will not be seen in the lifetime of anyone reading this message right now.
>That includes anyone reading this message long after the lives of those reading it on its post date have ended.
By almost any definition available during the 90s, GPT-5 Thinking/Pro would pretty much qualify. The idea that we are somehow not going to make any progress for the next century seems absurd. Do you have any actual justification for why you believe this? Every lab says it sees a clear path to improving capabilities, and there's been nothing shown by any research I'm aware of to justify doubting that.
The fact is that no matter how "advanced" AI seems to get, it always falls short and never satisfies what we think of as true AI. It's always a case of "it's going to get better", and it's been said like this for decades now. People have been predicting imminent AGI for far longer than the span over which I'm predicting we won't attain it.
LLMs are cool and fun and impressive (and can be dangerous), but they are not any form of AGI -- they satisfy the "artificial", and that's about it.
GPT by any definition of AGI is not AGI. You are ignoring the word "general" in AGI. GPT is extremely niche in what it does.
>GPT by any definition of AGI is not AGI. You are ignoring the word "general" in AGI. GPT is extremely niche in what it does.
Definitions in the 90s basically required passing the Turing Test, which was probably passed by GPT-3.5. Current definitions are too broad, but something like 'better than the average human at most tasks' seems to be basically passed by, say, GPT-5. Definitions like 'better than all humans at all tasks' or 'better than all humans at all economically useful tasks' are closer to superintelligence.
That's pretty much exactly what Alan Turing made the Turing test for. From the Wikipedia entry:
> The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human.
> The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'"
> This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against the major objections to the proposition that "machines can think".
Cherry-picking? You made a completely factually wrong statement. There was no cherry-picking. You said the Turing test was never about AGI. You didn't say it has weaknesses. Even if it were the worst test ever made, it was still about AGI.
Ignoring the entire article including the "Strengths" section and only looking at "Weaknesses" is the only cherry-picking happening.
And if you read the Weaknesses section, you'll see very little of it is relevant to whether the Turing test demonstrates AGI. Only 1 of the 9 subsections is related to this. The other weaknesses listed include that intelligent entities may still fail the Turing test, that if the entity tested remains silent there is no way to evaluate it, and that making AI that imitates humans well may lower wages for humans.
Ok, that's great, but do you have evidence suggesting scaling is actually plateauing, or that GPT6 and Claude 4.5 Opus won't be more capable than current models?
You can make this bet functional if you really believe it, which you of course really don't. If you actually do then I can introduce you to some people happy to take your money in perpetuity.
>If anything, the prices have reflected less than 20% of Capex projections, so the market clearly thinks OpenAI / Stargate / FAANG's capex plans are BS.
I'd say if anything the market is massively underestimating the scale of their capex plans. These things are using as much electricity as small cities. They are well past breaking ground, the buildings are going up as we speak.
Same thing happened with the dot-com boom and bust, except with fiber (the unlit overbuild later called dark fiber) and datacenters.
A lot of people lost a lot of money. Post bankruptcy, it also fueled the later tech booms, as now there was a ton of dark fiber waiting to be used at rock bottom prices, and underutilized datacenters and hardware. Google was a major beneficiary.
I knew we'd have a quantum woo comment within the first 10 comments on this. Unless you can show a coherent quantum state at human body temperature, please stop with this nonsense.
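For scale: Tegmark's decoherence paper (Phys. Rev. E, 2000) estimated that microtubule superpositions would decohere in roughly 10^-13 s, while neurons operate on millisecond timescales. A trivial back-of-the-envelope comparison, using his published order-of-magnitude figures (which the Orch OR camp disputes, to be fair):

```python
# Order-of-magnitude comparison using Tegmark's (2000) published
# decoherence estimate for microtubule superpositions. These are his
# figures, not a first-principles calculation here.
decoherence_s   = 1e-13  # Tegmark's microtubule decoherence estimate (s)
neural_firing_s = 1e-3   # typical neural signaling timescale (s)

gap = neural_firing_s / decoherence_s
print(f"Any quantum coherence would vanish ~{gap:.0e}x faster "
      f"than the brain computes.")  # ~1e+10x
```

Until someone shows that gap is wrong experimentally, warm, wet, and noisy wins.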
I read Roger Penrose's The Emperor's New Mind when I was younger. It suggested quantum processes as a last-ditch effort for a non-deterministic brain. At the time, I thought it was a fascinating prediction of how our minds might work and that reading it made me a smarter person.
I have since come to view it more as an interesting lesson in the pitfalls of hypothesis formation, popular non-fiction, and vanity.
Even so, as a layperson, it's entirely understandable to perk up whenever someone discovers 'tubules' in the brain, even if none of that sufficiently supports any of the collapse requirements of the Penrose/Hameroff quantum microtubule theory.
Being of a scientific mind, I also keep in mind that the phenomena behind scientific discoveries existed the whole time before anyone discovered them.
The research on what we can see and learn in the brain has been remarkable over the last 10 years. fMRI alone is staggering.
Your question seems to be one of sufficient resolution, and the brain keeps getting attention at greater and greater resolution.
There also seems to be more research happening in the area, which is encouraging.
I don't get the feeling you're really interested in it, other than looking at where the research is occurring and building from it what you want to see. Time will tell either way.
There's nothing necessarily interesting about quantum effects in the brain. Hard drives and other parts of a computer use quantum effects too, but it doesn't make them quantum computers.
Penrose is just another person who thinks "quantum" means "magic".
It should be noted that none of these papers addresses computation via qubits in neuronal microtubules, which is central to the Orch OR theory of quantum-woo consciousness.
>Would you still follow through on a mission Ferdinand II of Aragon sent your grand grand grand grand grand grandfather on in 1498? I probably wouldn't. These goals would likely not even make much sense to me anymore, or be completely irrelevant in today's world.
If you are on a ship in the middle of an endless ocean, or interstellar space, with many decades or centuries before reaching somewhere safe, then truly what choice do you have?
You may disagree with this take, but it's not uninformed. Many LLMs use self-supervised pretraining followed by RL-based fine-tuning, but that's essentially it: it's fine-tuning.
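To illustrate what I mean by "it's fine-tuning", here's a toy sketch (the vocabulary, rewards, and learning rate are all made up for illustration; this is nobody's actual pipeline). Policy-gradient RL only nudges a distribution the "pretrained" model already has; it doesn't build capability from scratch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "pretrained" model: logits over a 4-token vocab.
# Pretraining has already ruled out gibberish; RL just reshapes the rest.
vocab = ["helpful", "neutral", "rude", "gibberish"]
logits = np.array([1.0, 1.0, 1.0, -2.0])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-in for a reward model's preference signal.
reward = {"helpful": 1.0, "neutral": 0.2, "rude": -1.0, "gibberish": -1.0}

lr = 0.1
for _ in range(500):
    p = softmax(logits)
    i = rng.choice(len(vocab), p=p)
    # REINFORCE: raise the log-prob of the sampled token in proportion
    # to its reward (grad of log p_i w.r.t. logits is onehot(i) - p).
    grad = -p
    grad[i] += 1.0
    logits = logits + lr * reward[vocab[i]] * grad

print({t: round(float(q), 3) for t, q in zip(vocab, softmax(logits))})
# Mass shifts toward "helpful"; the "language" itself came from pretraining.
```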
I think you're seriously underestimating the importance of the RL steps on LLM performance.
Also, how do you think the most successful RL models have worked? AlphaGo and AlphaZero both use neural networks for their policy and value functions, and those networks are the central mechanism of those models.
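For anyone who hasn't looked at the architecture, a minimal sketch of that two-headed design (layer sizes and input planes here are illustrative, not DeepMind's actual residual tower):

```python
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """AlphaZero-style shared trunk with separate policy and value heads."""
    def __init__(self, board_size=19, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        n = channels * board_size * board_size
        # Policy head: a distribution over moves (+1 for pass), guides search.
        self.policy = nn.Linear(n, board_size * board_size + 1)
        # Value head: a scalar in [-1, 1] estimating who wins from here.
        self.value = nn.Sequential(nn.Linear(n, 1), nn.Tanh())

    def forward(self, x):
        h = self.trunk(x).flatten(1)
        return self.policy(h), self.value(h)

net = PolicyValueNet()
policy_logits, value = net(torch.zeros(1, 3, 19, 19))
print(policy_logits.shape, value.shape)  # torch.Size([1, 362]) torch.Size([1, 1])
```

MCTS queries this network at every node: the policy head proposes moves to explore and the value head replaces random rollouts. Take the networks away and those models don't work at all.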
As someone living in a city with a significant homelessness problem, I think this comment misses the point. The vast majority of homelessness in the US is not people falling on hard times and needing a boost to get back into a home; it's drug addiction and mental illness, a completely different problem that also needs to be addressed. Few other nations that I can see let the blatantly mentally ill roam their streets or let people use drugs openly on the street...
They already have multiple trillion-dollar companies, and how has that worked out for them? Rampant homelessness and crime in their "premier" cities, skyrocketing cost of living and housing prices, etc.