
'"In tests it even used the word bullshit in an answer to a researcher's query."'

I'm not sure which is worse: the singularity with a bullshit detector or without.



If the singularity had a bullshit detector, it would kick Ray Kurzweil out.


Could someone explain the Ray Kurzweil hate to me? Predicting the future is hard.


Kurzweil is certainly a very smart guy, and he's done a lot of important things. I think people are uncomfortable with how specific he is when making predictions. He also makes predictions in many fields in which he isn't an expert, but a well-informed observer.

I'm not informed enough to comment on his actual predictions, but I've heard him defend himself. His response to criticism that I've heard is, "my critics are uninformed about fact X," but he doesn't make an attempt to justify X. That approach strikes me as disrespectful and intellectually dishonest, as it's a tactic used by many hucksters and snake oil salesmen. I have a limited understanding of his positions, but the way he presents himself makes me uneasy.


I agree about the elaborate predictions and sales pitch feeling. I've reconsidered my Kurzweil hate since the Google hire, because I trust that they have people who can evaluate his skills. Before that, I had to evaluate them on my own, and I drew similar conclusions to yours.

I also think it was elitist of me. A lot of it was just because he's published books that have been marketed as pop-sci trade paperbacks, but I never actually read any of them.


Everyone knows that predicting the future is hard, but Singularity aficionados try to elevate that fact to a profound prediction in and of itself.

The analogy I always use is a long straight flat desert highway. If you stand in the middle of it and look down its length, it appears to converge to a singularity in the distance. If you can see mileposts, you might even try to estimate how far away that singularity is.

But if you drive toward the singularity, you'll never run into it. It recedes before you. It is just a trick of perspective.

Basically all the talk of how crazy life will be after The Singularity is like trying to explain how crazy life would be on the other side of a rainbow. Fun, but not actually useful.


This response has nothing to do with anything. The singularity as Kurzweil describes it is marked by tangible events, like the development of a true AI, or at least a machine learning algorithm capable of discovering and proving its own concepts. I have no idea how this relates to your very abstract singularity (a trick of perspective?).


Kurzweil attempts to privilege certain technological milestones as substantively "different" from other technological milestones (e.g. "true" AI), and thus claims their consequences for human culture are uniquely unpredictable.

My point is that this is just an assertion, not a prediction. Every future development to some extent obscures our ability to predict the future of human culture. Look far enough into the future along any line of inquiry (legal, artistic, religious, energy, biology, etc.) and there is a singularity beyond which we cannot predict. It's just a function of trying to predict the future in general, not some special property of AI.


We may not be able to predict future developments in musical composition, but we can predict that songwriters will probably not convert the entire mass of the solar system into musical instruments over a two-week period.

The same cannot be said of self-improving AIs. Kurzweil's Singularity is not about the difficulty of long-term predictions; it is that the progress function may become so steep that even short-term predictions become impossible.
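
A toy sketch of that claim (my constants, not Kurzweil's): if capability feeds back into its own growth rate as dC/dt = k*C^2 rather than plain exponential growth, the curve diverges at a finite time, which is the "singularity" in the mathematical sense.

    # Toy model, made-up constants: dC/dt = k*C^2 blows up at the
    # *finite* time t = 1/(k*C0), unlike exponential growth.
    k, C, t, dt = 0.001, 1.0, 0.0, 0.01
    while C < 1e9:
        C += k * C * C * dt  # crude Euler step
        t += dt
    print("blow-up after t ~= %.0f time units" % t)  # ~1000, as the formula predicts

Past the blow-up point the model itself stops meaning anything, which is the "even short-term predictions become impossible" part.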


Self-aware music might do that, and beyond the musical singularity, a concept like "self-aware music" has to be treated as plausible.

Which I think is the point.


Why can't the same be said of self-improving AIs? This seems to me like the sort of awesome-sounding but unsupported assertion that leads people to roll their eyes at the Singularity crowd.


For me it's the smug "See, I told you cell phones would be important in the future" gloating over obvious predictions, and the "well I said we would all be using driverless cars but since there is one, that still validates my entire prediction" hedging.

Also his whole "I'm going to eat magic vitamins that will keep me alive forever" thing.


Not wanting to die is a mainstream mindset.


Personally, I think he's way too optimistic about how future tech will be used. And he rarely has good reasons. Stuff like foglets (https://en.wikipedia.org/wiki/Utility_fog). He didn't invent the idea of the foglet; he just believes, for some reason, that people won't abuse them. Also, in his book he posits that the Singularity will be nice to people who choose not to join it. His only reason is that he's pretty sure the Singularity will like humans.


Predicting the future is hard, yes. Kurzweil has made some good progress on various forms of AI and other computer applications, but he also has some "fringe-y" beliefs. Belief in mind uploading, his massive regimen of supplements, and his wish to bring back his father by scanning his writings are, I think, what most people object to.


There were lots of comments on this thread when he joined Google:

http://news.ycombinator.com/item?id=4923914


...which is why Kurzweil's specificity and overconfidence raise eyebrows.


Aside: I take hatred of Ray to be evidence of how insane humans are.


Then I suppose someone should tell Google their bullshit detector is broken.


In 2029, when we are still not able to 'upload our brains' into a computer, I will come back here specifically to call bullshit.


  Depends how you define upload. Why couldn't a computer just simulate a person's outputs based on recordings of their life? Does virtualization of life really need to have consistent consciousness? Maybe rip would be a better word than upload.


You can't extract all the information contained in a brain non-destructively. You'd need to reproduce the neural graph, 10^11 neurons with an average of 7000 synapses (connections) each, along with the type of each neuron, the dendritic trees, the strengths of the synapses, the map of active genes in each cell, and most probably other cellular parameters, like the state of the cytoskeleton. Probably even more.
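
Back-of-envelope, using just the first two numbers above (the per-synapse byte count is an arbitrary placeholder, and this ignores everything else on that list):

    # Rough lower bound on the raw connectome alone; 4 bytes per
    # synapse is a made-up placeholder for weight + addressing.
    neurons = 10**11
    synapses = neurons * 7000                 # ~7e14 connections
    petabytes = synapses * 4 / 1e15
    print("%.0e synapses, ~%.1f PB" % (synapses, petabytes))  # ~2.8 PB

And that's before dendritic geometry, gene expression, or any of the rest.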

It would require something along the lines of the Allen Brain Atlas project [0], but much more advanced, since you'd have to extract all the information out of one brain. The Allen project has several atlases, built out of different brains. They've yet to map the circuitry of a mouse brain (that project was started in 2011).

Even if it were possible to extract the relevant info, since the extraction would destroy the brain, the best you could hope for is a conscious clone that preserves the identity of the original. What happens to one's identity is also troubling, since you could, theoretically, instantiate several copies of someone. I'm not sure the identity would be carried over, actually.

By identity, I mean the fact that you're the same person in the morning that you were when you fell asleep. Your consciousness dissolves and re-emerges, and you're still yourself. We take it for granted, but it is extremely puzzling to me.

Another problem is that the relationship between time and consciousness has yet to be understood. The fact that the brain processes information in parallel is probably important, meaning that a fast serial simulation on Turing machines would not necessarily cut it, even with "massive" supercomputers. The brain's level of parallelism will not be achieved in silico for a long time, at least not with the current approach.

[0] http://en.wikipedia.org/wiki/Allen_Brain_Atlas

--

Side note: you should remove the four spaces at the beginning of your post.


Hmm, on your last point: assuming the serial computation is done by calculating the brain state for each time slice, would this end up being functionally equivalent to (if slower than) the parallel brain process? Since from the "brain's" perspective, everything is getting updated in parallel?
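
To make the question concrete, here's roughly the time-slice update I have in mind (the update rule f and the graph layout are placeholders, not a claim about real neurons):

    # Serial simulation of a "parallel" update: every read comes from
    # the old state dict, every write goes to a new one, so the order
    # in which we visit neurons can't leak into the result.
    def step(state, neighbors, f):
        return {n: f(state[n], [state[m] for m in neighbors[n]])
                for n in state}

As long as each slice reads only from the previous slice (double buffering), the serial visiting order shouldn't matter.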


Continuity of consciousness is a fascinating problem to contemplate.

To Be: http://www.youtube.com/watch?v=pdxucpPq6Lc

See also the Grandfather's Axe paradox, aka Ship of Theseus: http://en.wikipedia.org/wiki/Ship_of_Theseus

A longer piece, Mechanisms of Mind Transfer: http://www.mind.ilstu.edu/curriculum/extraordinary_future/Ph...


"To be" assumes that 1) it is possible to extract the information non-destructively, and 2) that it is possible to extract the information at all. The closer the measurement, the more you measure the interaction between the measuring instrument and the observed phenomenon, rather than the phenomenon itself.

Regarding the Ship of Theseus, our identity is most likely tied to an ever-evolving process that depends on the architecture of our brains, rather than to a fixed set of molecular components. Besides the neuronal DNA, most if not all cell components are subject to turnover.


That wouldn't work, because the recordings would only include things a person did at various times during their life. It would also be hard to make sure it reacts or changes in response to new stimuli the same way that person would.


~2045, I think you mean, for brain uploading.


I'd imagine the no-cloning theorem (http://en.wikipedia.org/wiki/No-cloning_theorem) has something to say about a brain upload. Maybe an approximate copy is good enough?


A copy is never good enough. I think the best approach would be to enhance the brain by attaching devices to it directly. This maintains continuity and doesn't raise as many philosophical questions. It does raise other interesting questions, though, like how long the brain can live in a suspended solution, and where all these consciousnesses would live...


Slowly replace organics with mechanics until you only have a machine left. If you keep it gradual, you never know when you stop being a cyborg and start being a full-fledged android.


You might not be able to make a perfect quantum copy, but with iterative refinement you can get arbitrarily close. I think 99.9999999% or so would be good enough.


You might be interested in Greg Egan's [0] story "The Jewel" [1]. [edit] To clarify, it's actually two stories: "Learning to Be Me" and "Closer".

Greg Egan is one of my favourite modern sci-fi writers. He explores interesting and hard ideas about what it means to be human by placing humanity in thought-provoking situations.

In 'The Jewel' we have developed an implant that learns how to 'be' the host. At some point your brain is removed and the jewel takes over your functioning. The thing is, what happens when something goes wrong... boom boom booooom!!?!?!

But really, read him if you are into sci-fi.

[0] http://en.wikipedia.org/wiki/Greg_Egan

[1] http://en.wikipedia.org/wiki/Axiomatic_(story_collection)


Well, uploading a scan shouldn't be too difficult. Compiling it, on the other hand...


I wasn't aware they had one...


You need a Google+ account to access it.


What was the query? I'd really like to know if it used bullshit correctly.

Sure, it could/should/must not be used to advertise the tech, but if I had a pocket Watson, I'd have no problem with it calling it like it parses it.


I think this is incredibly hilarious.



