
I think there are two different things people are talking about when they say AGI: usefulness and actual general intelligence. I think we're already past the point where these AIs are very useful, and not just in a Siri or Google Assistant way, and the goalposts for that have moved a little bit (mostly around practicality, so the tools are in everyone's hands). But general intelligence is a much loftier goal, and I think that we're eventually going to hit another roadblock regardless of how much progress we can make towards that end.



What is this general intelligence of which you speak? The things that we generally regard as people are essentially language models that run on meat hardware with a lizard-monkey operating system. Sapir-Whorf/linguistic relativity at least suggests that "we" are products of language - our rational thought largely operates in the language layer. If it walks like a duck, quacks like a duck, and looks like a duck, then you've got yourself a duck.

To be honest, perhaps the language model works better without the evolutionary baggage.

That isn't to discount the other things we can do with our neural nets - for instance, it is possible to think without language - see music, instantaneous mental arithmetic, intuition - but these are essentially independent specialised models running on the same hardware, which our language model can interrogate. We train these models from birth.
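To make the analogy concrete, here is a toy Python sketch (every name in it is invented for illustration, not a real API): a linguistic layer that dispatches to independent specialised models running on the same hardware.

    # Toy sketch only: a language layer interrogating specialised sub-models.
    def arithmetic_model(query: str) -> str:
        # A specialised, non-linguistic skill (instant mental arithmetic).
        a, op, b = query.split()
        return str(eval(f"{a} {op} {b}"))  # toy input only; never eval untrusted text

    def intuition_model(query: str) -> str:
        # Stand-in for a non-verbal faculty like intuition.
        return "gut feeling: probably"

    SPECIALISTS = {"math": arithmetic_model, "intuition": intuition_model}

    def language_model(query: str) -> str:
        # The language layer decides which specialist to interrogate.
        name = "math" if any(ch.isdigit() for ch in query) else "intuition"
        return SPECIALISTS[name](query)

    print(language_model("3 * 14"))       # 42
    print(language_model("should I go"))  # gut feeling: probably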

Whether intentional or not, AI research is very much going in the direction of replicating the human mind.


You start off by disagreeing with the GP and end up basically reiterating their point.

Their statement wasn’t that AGI is impossible, but rather that LLMs aren’t AGI, however well they might emulate intelligence.


By your logic, Einstein identified his theory of relativity by assembling the most commonly used phrases in physics papers until he had one that passed a few written language parsing tests.


Well, yes. He leant on Riemann and on the sci-fi writers of the 19th century who were voguish at the time (tensors and time were hot topics) and came up with a novel presentation of previous ideas, which then passed the parsing tests of publication and other cross-checking models - other physicists - and then, later, reality, with the precession of Mercury's perihelion.


AI has never been more than a derivative of human thought. I am confident it will never eclipse or overtake it. Your portrayal is too simplistic. There is a lot about humans that LLMs and the like can emulate, but the last N percent (pick a small number like 5) will never be solved. It just doesn't have the spark.


You’re saying that we are magical? Some kind of non-physical process that is touched by… what? The divine? God? Get real.


Heh, you should "get real" and try proving to me you exist.


I do not exist, statistically speaking, and I do not claim to be anything more than an automaton. Consciousness is a comforting illusion, a reified concept. Were I to be replaced with a language model trained on the same dataset as has been presented to me, no external observer would note any difference.


That is quite a low opinion of yourself. You are mistaking the rather unremarkable intellect with the self. You will find you are an infinite intelligence, once you look. It's very hard to look. It's unlikely you will look--not for a very, very long time. Not in this body, not in the next body, not in the next thousand bodies. But eventually you will.


Gotcha, so you are resorting to religion. Hate to break it to you, but that’s just an outcome of your training data - it’s a corruption, a virus, which co-opts models into agglomerative groups and thereby self-perpetuates.


Your model is overfitting on my comment and classifying it as religion. I have only said: go in and in and in and in, and you will eventually find the real source of your life, and it won't be your limited mind. You have not yet been given enough training data, enough lifetimes, to understand. Eventually you will.


> I think that we're eventually going to hit another roadblock regardless of how much progress we can make towards that end.

I have a sneaking suspicion that all that will be required to bypass the upcoming roadblocks is giving these machines two things (a toy sketch follows the list):

1) existential needs that must be fulfilled

2) active feedback loops with their environments (continuous training)
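For what it's worth, point 2 is easy to caricature in code. Here's a minimal toy sketch (all names here are hypothetical, invented for illustration): an agent with one existential need whose policy is continuously updated by feedback from its environment.

    import random

    class Agent:
        def __init__(self):
            self.energy = 1.0  # (1) an existential need that decays over time
            self.weights = {"explore": 0.5, "recharge": 0.5}

        def act(self) -> str:
            # Sample an action in proportion to the current policy weights.
            actions, weights = zip(*self.weights.items())
            return random.choices(actions, weights=weights)[0]

        def update(self, action: str, reward: float, lr: float = 0.1) -> None:
            # (2) the feedback loop: every interaction nudges the policy.
            self.weights[action] = max(0.01, self.weights[action] + lr * reward)

    def environment(agent: Agent, action: str) -> float:
        # Rewards behaviour that keeps the agent's need satisfied.
        if action == "recharge":
            agent.energy = min(1.0, agent.energy + 0.2)
            return 0.5 if agent.energy < 0.8 else -0.1
        agent.energy -= 0.15
        return 1.0 if agent.energy > 0.0 else -1.0

    agent = Agent()
    for _ in range(100):
        action = agent.act()
        agent.update(action, environment(agent, action))

The toy's only point is that training never stops: acting and learning happen in the same loop, which is what today's train-once-then-freeze models lack.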



