
Building an intelligent machine is still possible within this limit. There is an existence proof: the human brain.

But you are right that we could be farther away from that than we think. It may also be that the intelligence limit of a machine is not actually so far ahead of human intelligence. (Personally, though, I think it should be theoretically and practically possible to build much more intelligent machines at some point.)




I agree with you entirely; I'm sure strong AI will be reached. Even if we dropped down to linear growth, I think strong AI isn't that far away. I just think exponential growth can't continue indefinitely, as Kurzweil predicts.


> I think strong AI isn't that far away

But we don't even have intelligent humans yet!

But seriously, I am not sure it can be said to be near or far. Think of it analogously to physical capability. Have we developed 'strong' Artificial Physical Ability? Do we define and measure our physical machines in comparison to human physical abilities? Was the aim of inventing machines just to make artificial humans? No, we develop every kind of physical machine, and in fact we really aim to make all the kinds of things that are not like what humans do.

So why shouldn't informational machines be the same? Is what humans do with information the only thing possible to do, and the only thing we might want to do? No. We won't be making AI to be like humans much, but for a great range of other, non-human-like, applications. Using human intelligence as a single simple measure just will not work or mean anything much.

We think of 'strong AI' as being an ultimate image of the future, but really it diverts us from imagining the much greater range of possibilities the future really holds.


I think his proposition is roughly that any single technology grows along an S-curve, but as each one slows, another picks up the baton and runs with it, and the totality (a sum of S-curves) has historically looked exponential and promises to continue doing so.
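That picture (staggered S-curves whose sum looks exponential) is easy to sketch numerically. A minimal Python toy, assuming for illustration that each successive technology's ceiling doubles the previous one and that they arrive at a constant rate (assumptions of the sketch, not claims about history):

```python
import math

def logistic(t, ceiling, midpoint, rate=1.0):
    """One S-curve: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_progress(t, n_curves=10, spacing=5.0, growth=2.0):
    """Sum of successive S-curves; curve k has ceiling growth**k and
    its inflection point at t = k * spacing."""
    return sum(logistic(t, ceiling=growth**k, midpoint=k * spacing)
               for k in range(n_curves))

# Once a few curves have saturated, the running total roughly doubles
# every `spacing` time units -- i.e. the envelope looks exponential,
# even though every individual curve flattens out.
for t in range(0, 45, 5):
    print(t, round(stacked_progress(t), 2))
```

Note that the exponential envelope here comes entirely from the assumption that each new technology's ceiling is a fixed multiple of its predecessor's; with constant ceilings the sum would grow only linearly.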


Ahh, now that is an interesting angle, but that would suggest that the rate of introduction of new technologies is linear and that every new technology must have a period of exponential growth initially.

(Or, I suppose, that the rate of introduction of technologies with initially exponential growth is linear. You could ignore those without an exponential growth rate, provided that a constant number had one.)


I think basically at any one time there are a lot of competing new-born technologies, and it's only in retrospect that we can see which of them will go exponential through a feedback cycle of improvement and growing adoption.

It's not so much that technology always goes exponential. It's that in retrospect, we notice the ones that did.


I don't think he predicts exponential growth will continue indefinitely. Where did you get that from?


This line: "Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly."

He doesn't qualify his law with "exponentially up to a certain point" or "is currently happening". He claims it is a law: that all technological progress happens, and will always happen, exponentially.

Perhaps I am misreading him, but I interpret his predictions as claims that exponential growth will continue indefinitely.

Anyway, that aside: you can't use an exponential curve to make predictions about the future if you accept that the curve will end at some point. If you accept that the curve will change, and you can't know the point of change, then you can't use it to make predictions.


> Building an intelligent machine is still possible within this limit. There is an existence proof: the human brain.

Assuming the human brain to be a mere machine.


Dualist priors have not yet been shown to be useful.


Neither have non-dualist priors. The nature of consciousness is as mysterious as it ever was. We have cracked a wide range of the easy problems, but the hard problem remains.


The "hard problem" of consciousness only exists if you start out with a dualist prior. Otherwise it is mysterious in the same way as the non-symmetry of matter and antimatter is mysterious -- it is not explained.

Lightning may look perfectly suitable for scientific investigation now, but it was as much a "hard problem" in other times.


The hard problem does not depend on our scientific understanding of the material world. Comparing the current situation in philosophy to the situation in physics 400 years ago is a false analogy, which doesn't take the fundamental difference between science and philosophy into consideration. Philosophy is about how we humans conceive the world, while physics attempts to describe a world separate from our perception. As the failure of the object-subject duality has shown, that is impossible. There is no 'real', 'external', 'absolute', 'underlying' world to describe, because talking about it doesn't make any sense. We aren't brains in an 'absolute reality'. If you keep thinking about it in that way, you fundamentally misunderstand the key philosophical issues surrounding the hard problem.


are you including stuff like Taoism in that?


"Nothing is 'mere'" - Richard Feynman

http://lesswrong.com/lw/or/joy_in_the_merely_real/


Funnily enough I was going to say 'mere' but checked myself and suppressed my inner pedant.


eh? You did say 'mere'...


I mean I was going to say "'mere'", as in emphasising that the use of 'mere' wasn't belittling, since the scope of machines working on established physical principles is clearly pretty huge.



