
I'm pretty sure the "knowledge of how to write the software" is the most complex part of the problem by far.

However, one outside chance of a limit may be that the brain is doing something that is fundamentally different to the kinds of operations carried out by a normal computer - which is essentially the argument in The Emperor's New Mind. When I read that book at the height of my own AI enthusiasm I thought it was pretty silly. However, after reading Anathem (of all things) it made me wonder if perhaps Penrose may have had a point.




I think the limitations we face vary depending on which problem you are trying to solve.

So: (1) are you trying to create software that "emulates" human intelligence (i.e. AI)? In that case, yes, software is the major limitation.

(2) Or are you trying to create an artificial (and independently functioning) model of the human brain? In that case you have two limits: hardware speed, but also a huge lack of knowledge about the "secrets" of our brains :)

I suspect #1 will be solved first.


Assuming there's no trivial way of mapping our neural networks to hardware chips running binary code, writing the emulator might prove beyond the human mind.

Can our thought processes be abstracted into blocks of a few hundred thousand lines of high-level language that we might actually be capable of writing?


Perhaps you don't need to. Perhaps you only need to emulate the substrate (neural network, blabla) and then copy an instance of a running brain to it. That may be a lot simpler than understanding the actual processes.


Freeze, dice, slice, scan with an electron microscope, interpolate into a 3D model, analyze into a map of connections, construct the equivalent with software neurons, simulate the sense inputs, throw the on switch.

http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3...
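Out of curiosity, here's roughly what that pipeline looks like sketched as code. It's purely a toy (the function names are made up, random weights stand in for a scanned connectome, and a crude rate model stands in for real neuron dynamics), but it shows how small a part the final "throw the on switch" step is:

    import numpy as np

    def scan_to_connectome(n_neurons, density=0.1, seed=0):
        # Stand-in for the freeze/dice/slice/scan/interpolate steps: in a
        # real pipeline this matrix would be reconstructed from electron-
        # microscope data; here it is just sparse random weights.
        rng = np.random.default_rng(seed)
        weights = rng.normal(0.0, 1.0, size=(n_neurons, n_neurons))
        mask = rng.random((n_neurons, n_neurons)) < density
        return weights * mask

    def run_emulation(weights, sense_input, steps=100):
        # "Software neurons": a crude rate model driven by the mapped
        # connections plus the simulated sense inputs.
        state = np.zeros(weights.shape[0])
        for _ in range(steps):
            state = np.tanh(weights @ state + sense_input)
        return state

    connectome = scan_to_connectome(n_neurons=1000)
    senses = np.random.default_rng(1).normal(0.0, 1.0, size=1000)
    print(run_emulation(connectome, senses)[:5])

Almost all of the hard work is hidden behind that first function, which also seems to be where the paper above puts the bottleneck.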


That's a very interesting paper, and it appears to confirm that computational power is the least of the problems - even in 2005 it was possible to run a simulation of 10^11 random neurons, whereas the scanning technology we have available at present isn't yet adequate.


Thanks - that looks like a really interesting report.


I tend to think #2 is the approach that is most likely to lead to a "real" general intelligence - reverse engineer what we know works, replicate the essential "secrets" (whatever they are) and scale up and out.

I just don't see much progress on #1 - and people have been trying this approach for 50 years.


Maybe one way to overcome this would be to model the body at the molecular/atomic level and "run" someone's DNA. It's not impossible to imagine that a supercomputer in 30 years, starting with a model of an embryonic cell, could emulate the growth of the human body. It wouldn't even have to happen in real time - from the point of view of the individual, time would feel normal.

This, of course, has massive ethical and practical implications. It wouldn't be fair to do this without simulating external stimuli (e.g. photons hitting the back of the eye) or human-to-human interaction. You wouldn't be able to ask the individual beforehand, so it probably would be considered completely unethical... that doesn't mean someone won't do it eventually, though.


Penrose is wasting his time and ignoring Occam's razor. There is no reason to suppose neurons need any quantum special sauce to do computation.



