I logged in specifically to downvote this comment, because it attacks the OP's position with unjustified and unsubstantiated confidence in the opposite direction.
> It's easy to spot people who secretly hate LLMs and feel threatened by them these days.
I don't think OP is threatened by or hates LLMs; if anything, OP's position is that LLMs are so far from intelligence that it's laughable to consider them threatening.
> In conclusion, we will reach AGI
The same way we "cured" cancer and Alzheimer's, two arguably much more important goals than a glorified text predictor/energy guzzler. But I like the confidence; it's almost as strong as OP's confidence that nothing substantial will happen.
> It's a race with high stakes, and history shows that these types of races don't stop until there is a winner.
The same could be said of the race to phase out fossil fuels and stop global warming, an existential threat to humanity, and so far I don't see anyone "winning" that one.
> However, these models increase productivity, allowing people to focus more on research
The same way the invention of the computer, the car, the vacuum cleaner, and all the other productivity-increasing inventions of the last few centuries allowed us to idle around, not have a job, and focus on creative things.
> It's easy to spot people who secretly hate LLMs and feel threatened by them these days
It's easy to spot e/acc bros feeling threatened that all the money they sunk into crypto, AI, the metaverse, and web3 is gonna go to waste, so they fan the hype around it in the hope of cashing in big. How does that sound?
I appreciate the pushback and acknowledge that my earlier comment might have conveyed too much certainty—skepticism here is justified and healthy.
However, I'd like to clarify why optimism regarding AGI isn't merely wishful thinking. Historical parallels such as heavier-than-air flight, Go, and protein folding illustrate how sustained incremental progress combined with competition can result in surprising breakthroughs, even where previous efforts had stalled or skepticism seemed warranted. AI isn't just a theoretical endeavor; we've seen consistent and measurable improvements year after year, as evidenced by Stanford's AI Index reports and emergent capabilities observed at larger scales.
It's true that smart people alone don't guarantee success. But the continuous feedback loop in AI research—where incremental progress feeds directly into further research—makes it fundamentally different from fields characterized by static or singular breakthroughs. While AGI remains ambitious and timelines uncertain, the unprecedented investment, diversity of research approaches, and absence of known theoretical barriers suggest the odds of achieving significant progress (even short of full AGI) remain strong.
To clarify, my confidence isn't about exact timelines or certainty of immediate success. Instead, it's based on historical lessons, current research dynamics, and the demonstrated trajectory of AI advancements. Skepticism is valuable and necessary, but history teaches us to stay open to possibilities that seem improbable until they become reality.
P.S. I apologize if my comment struck a nerve and compelled you to log in and downvote. I am always open to debate, and I admit again that I opened too strongly.