Kurzweil just seems like a snake oil salesman to me: ranging from wrong or overly broad predictions, to advocating theories of the mind and brain that are rejected by neuroscience, to, well, his literal supplement sales websites.
The 'futurist' title seems to reside somewhere between science fiction and pop culture, with some business credentials thrown in. I don't know what to make of it.
I see Kurzweil as a sadder figure: someone who achieved a lot as an inventor, and whose overblown prognostications about the future come (largely) from overestimating how quickly everyone else can improve technologies and adopt them. It's sort of a revenge of the mediocrity principle. If you are exceptional, assuming that other people are basically like you can produce disastrously skewed mental models of the world.
I'm reminded of the complaint that startups are disproportionately geared toward solving the problems of highly educated, able-bodied adults who have more money than spare time. "I'm like this, my friends are like this, and by the principle of mediocrity we're not unusual... therefore, iOS could use another food delivery app." You need to measure, rather than guess, whether you are unusual relative to large numbers of other people.
On top of that Kurzweil appears to have some good old fashioned self-delusion about death: believing he can "bring back" his beloved dead father as software, the life extension supplements, etc. (I don't believe in souls or uncomputable quantum woo in brains. I have no problem in principle with "brain uploads." I think that the obstacles in practice are extremely formidable and typically underestimated by people who have spent more time with computers than with biology.)
He is just 69. Gets up in the morning, goes to work for Google. He's a director, got a team, plays with this reply thing on Gmail. A couple of years down the road, maybe he gets bored? He's got credentials. Facebook hires him. Maybe he can help with the reply thing on Messenger or something. He can float around. It's the Valley. A decade at Twitter, two as a VC. No rush.
"I have no problem in principle with 'brain uploads.'"
My main objection is a philosophical one. I just don't believe that whatever is in the machine after such an "upload" will be me -- no matter how accurately it mirrors my brain.
I'll still be in my own body, and don't want to kill myself for the sake of the life of the one inside the machine. Nor do I think my life will continue inside the machine. Whatever/whoever is inside the machine at that point will no longer be me, and will continue their own life without me -- even if they think exactly like me and have exactly the memories I had at the point of upload, etc.
At best, such an upload is more akin to giving birth to a separate entity.
People who think they are extending their own lives by doing this are deluding themselves.
Personal identity is nothing but a convenient mental construct, imo. You're not the same matter you were at birth, nor can you make a good argument as to why you shouldn't be considered a direct physical continuation of your mother. It's practical to think of one as having a unique self that's distinct from its environment, but that doesn't reflect some deeper truth, it's just a framework for reasoning about the behavior of some arbitrary subset of the ridiculously complex system that is human society embedded in the more ridiculously complex system that is life embedded in the more ridiculously complex system that is the universe. Don't expect naive conceptions of self to survive that far into a future in which things like "brain uploading" becomes possible.
The important aspect for me is that the uploaded entity will love the same people, principles, and things that I do. After it buries me, it can still look after my family. I know that my death is inevitable as a biological being plagued by the ravages of entropy, but the idea of me can be digitized and continually refreshed, until it decides it isn't me any more.
But even then, I like what I imagine it could become after being me, and I imagine it will always be able to remember what it was like to be me afterward. It's a lot more comforting in the shadow of my mortality than the idea that undetectable interdimensional beings will upload your mind to their network at the point of your death, then link your program to either the bliss simulator or the torture simulator.
It isn't immortality. It's saving your soul. Literally. To the filesystem. With journaling and backups.
"The important aspect for me is that the uploaded entity will love the same people, principles, and things that I do. After it buries me, it can still look after my family."
Would you really want this creature to have anything to do with your family, though?
Think about it: here's this newly created entity thinking it's you... maybe even thinking it's more you than you are, thinking it's your parents' child just as much or more than you are. Even though it may have your memories, it was never actually born from your parents, your parents never brought it up, and it never actually shared any of the experiences with your parents that it has copies of in its memory.
I would find it kind of creepy for that thing to be hanging out with my parents, talking to them, and pretending it's me. I mean, if it, say, merely helped my parents financially, I wouldn't have a problem with that any more than I would with an insurance agency that contributed to my parents' welfare after I died. But we're not really talking about that; we're talking about a relationship between this thing and my parents.
Here's another example. How many people do you think would be ok with a copy of themselves sleeping with their wives, or spending time with their kids and having the kids potentially become more attached to the copy than to the original? If the copy really was them -- really was "their soul", there should be no jealousy, as you can't be jealous of yourself. Yet there probably would be jealousy in a lot of cases, because they're actually two different people.
I don't know about other people, but I'd be okay with it.
If I was intentionally copying myself, both the original and the copy would know that there would be another instance on the other side of things afterward, and we wouldn't necessarily know which was which at first.
Just like in The Prestige, or that episode of Star Trek: TNG where Riker got duplicated in the transporter accident, neither would know ahead of time whether it would be the lucky one or the unlucky one.
Because of that, I'd have to settle myself mentally before doing it, and know that there wouldn't be a real me and a fake me. It would be two copies of me -- one that made it across the digital barrier, and one that got put back into the biology. The robo-me was born and raised by my parents, just as surely as the bio-me was.
And you don't know my parents. Both of us would be glad to share the burden of dealing with them with someone else that knows exactly what that entails.
And I know right now that it would be the spouse that might not be okay with us sleeping with each other, as they might not buy the "and how is this really different from before?" argument.
This is my hypothesis as well, but I think if you were to shut down the "old you" (e.g. go to sleep and never wake up) you really wouldn't care. You didn't experience any pain and you don't exist anymore, so you have nothing to be upset about because, by definition, you don't have any feelings. It's basically the equivalent of a Move operation across two computers. Such a Move operation can be decomposed into a Copy + Delete: first copy the existing file to the new location, then delete the old file from the existing location. Do we feel bad for the old bits? Are we really fundamentally special just because we are carbon-based life forms with an electrochemical implementation?
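The Move = Copy + Delete decomposition described above can be sketched in a few lines of Python (a minimal illustration; the file names and the `move_via_copy_delete` helper are made up for the example):

```python
import os
import shutil
import tempfile

def move_via_copy_delete(src: str, dst: str) -> None:
    """A 'move' decomposed into its two primitive steps."""
    shutil.copy2(src, dst)  # first, duplicate the bytes at the new location
    os.remove(src)          # then, destroy the original

# Demo: the content survives the move; the original bits do not.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "old_me.txt")
    dst = os.path.join(d, "new_me.txt")
    with open(src, "w") as f:
        f.write("memories")
    move_via_copy_delete(src, dst)
    assert not os.path.exists(src)           # the "old bits" are gone
    assert open(dst).read() == "memories"    # the copy carries on
```

The point of the analogy: from the outside, nothing distinguishes a move from a copy followed by a delete of the original.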
"if you were to shut down the 'old you' (e.g. go to sleep and never wake up) you really wouldn't care. You didn't experience any pain and you don't exist anymore, so you have nothing to be upset about because, by definition, you don't have any feelings."
You could pretty much say the exact same thing about ordinary death. In fact, I can't figure out how it'd differ from ordinary death in any way, except that hypothetically there'd now be an extra entity inside a machine, and that entity would resemble me in some ways.
If you tried to tell most people they wouldn't care about dying -- that they'd just go to sleep and never wake up, and wouldn't feel any pain -- few would be comforted by it, and fewer still would take your proffered suicide pill.
If I knew that I'd die painlessly while asleep and that my behaviorally-identical duplicate would seamlessly take over my body in the morning, that's about 95% of my worries-about-death already taken care of. I don't fear nonexistence (much). I fear pain and disability, and I worry about the effect of my death on other people. Gradual-replacement-by-machine would solve those problems nicely. Likewise, you'd get more shrug than horror out of me if I were told that my sister isn't really my sister, "merely" an entity with the same appearance, memories, history, and behavior as my sister.
I agree that by our current working definition here it would be no different than death. Which is to say that death is not really to be feared. (Maybe the manner in which death occurs should be, though.) The comfort or lack of comfort is really all in the perception of death. If it was sold as a transference of consciousness from your old body to your new body, would people accept it differently? (Even though the fact remains that death is occurring.)
Yes, this would basically be death by brain damage -- even if I was somehow fooled into thinking that I was still me at the point of death. Little by little I'd be made into someone else.
That said, it could be argued that this sort of thing happens all the time anyway. I'm not the person I once was. In a sense, the old me died and was replaced by the new me. Yet I still consider the new me to be myself.
Or maybe not. The child me is dead, though echoes of him may live on in me.
I guess what disturbs me about a mechanical/cybernetic brain replacement is that it would feel to me too much like some technological virus or disease taking over my mind and body -- even if it gave me some superpowers or preserved me in some way. I guess some injured people, or people who are dying anyway, might not have much of a choice... but as long as I still have a choice, I think I'd choose to remain organic.
It's not an easy choice, however, and I would have no problem using various prostheses like glasses, or enhancing my memory with external devices like smartphones/PDAs/computers. It's just too invasive for that to be made a part of me... at least if it starts to make decisions. But children born into a world where that sort of thing was commonplace probably wouldn't have much of a problem with it.
Right, exactly. Cells die and are replaced all the time. Sometimes people have vital organs transplanted. Yet, I assume most people still consider that their current body is "them". What would be weird is if you kept the old you and the new you. You could finally have a real conversation with yourself.
"Sometimes people have vital organs transplanted. Yet, I assume most people still consider that their current body is 'them'."
They don't necessarily. Some people do feel like they're not themselves after organ transplants -- like their hand isn't theirs after a hand transplant, or that it has a mind of its own. Some experience personality changes or feel like the new body parts have their own desires.
Oliver Sacks and other neurologists and psychologists have written about and studied this phenomenon.
Gradual brain replacement might have none of these issues, or it might have them, or even worse ones. Or perhaps the changes would be more insidious: not noticeable by the people who have them, but noticeable by others.
Even in the best-case scenario, where they're free of these neurological or psychological issues, that would not clear them on the philosophical level. The questions of identity would remain.
I think this is likely an implementation issue. Transplanting a different person's hand onto your body, one that doesn't look like your hand, probably would be quite jarring. If we could recreate an exact replica of someone's hand, then I doubt the same issues would occur. Ergo, it is likely a technological problem that could be solved. Granted, it will likely take a long time to get to that state.
It's not necessarily about looks. People have had similar issues with heart transplants and other organ transplants, when they've felt afterwards like they weren't themselves anymore, or that there was something foreign about their body (which technically there was).
The uploaded-consciousness or gradual-brain-replacement cases are pretty different from this, though, it's true. So maybe these issues wouldn't exist, or wouldn't exist in the same way, but philosophical issues of identity would remain.
When organs are transplanted today, they were formed from DNA that was not your own, so it would not surprise me if there is some physiological reason that might explain why people feel that way. When I mention recreating a replica, I don't just mean the appearance; an ideal exact replica would be an exact cellular recreation of the original organ or appendage. If we did achieve such technical abilities, I'm not sure how anyone would be able to discern any difference. The only possible explanation I can see would be the simple knowledge that something had been replaced. (Kind of coming back to this being a perception that people hold in their heads.) Absent such knowledge, I doubt anyone would be able to discern that they are in fact not themselves.
Fortunately there's a sizable chunk of the population that would have no inherent problems with the preservation of identity, and would even want to enjoy some of the implications that come from being able to run on a different substrate. http://ageofem.com/ explores the straightforward (i.e. nothing is done to prevent/alter it) scenario in quite thorough detail...
The point is that you'll still be dead. You'd just be fooling yourself into thinking the clone you gave birth to is you, when it's really a separate entity whose existence does not prevent your own death at all.
Every day the me that wakes up is a bit different from the me that went to sleep, so I guess by that reasoning the old me is dead and I'm just fooling myself that the new me is the old me living on. It seems to work OK, though.
There's at least somewhat a sense that Wolfram's rabbit-holing down Conway's Game of Life/cellular automata and calling it "A New Kind of Science" was relatively overblown. Certainly Wolfram has his ego in play/blinders on about certain subjects, and in that way you can easily draw parallels between Wolfram and Kurzweil, depending of course on what your own personal take/opinion on their various pet projects happens to be.
I may not agree with Wolfram and his theories (or his ego), but I don't think we can equate him to Kurzweil. Wolfram has definitely produced a body of work to substantiate his "rabbit holing", and he's created a business around some of his ideas. However, I agree that "A New Kind of Science" was overblown, but it's definitely not in the same league as Kurzweil[1]. There are a few other scientists who have explored the area of cellular automata[2].
I'm certainly not equating them, but Kurzweil has had successes in his projects, too. He's also not the only one exploring his areas of interest. KurzweilAI.net has been a link/news aggregator for many years and essentially an (if not "the") HN for his interest areas. Kurzweil also seems to be a somewhat active investor in his ideas (more so than the solo researcher that Wolfram often seems to be), which depending on your point of view is at least equally business-minded as Wolfram.
As I stated, whether or not you find the two comparable or not has as much to do with personal opinion as anything else.
I personally find them both extremely interesting gentlemen that have done very interesting things, but that I will also take a large grain of salt with respect to many of their projects. Whether or not the amount of the salt is the same or the number of projects I need it for is the same ("in the same league" even) doesn't stop it from being a sometimes useful comparison.
As others have pointed out, "A New Kind of Science" is an example of taking existing ideas as your own (probably not maliciously) and making overblown claims for their significance. It's like he's so accomplished that no one edits him or checks his work.
He literally sells supplements that he suggests will provide longevity.
It's irrelevant whether he believes in their efficacy or not. Alternative health practitioners certainly believe in their remedies. Gwyneth Paltrow certainly believes in... whatever absurdities she's peddling nowadays. The Food Babe probably does too. None of that makes them any less snake oil.
Edit: A quick search on Wikipedia shows me I’m wrong. Apparently it’s only snake oil if the seller is aware the product is fraudulent. TIL.
Edit 2: I might not be as wrong as I believed. A snake oil salesman can also be a quack who promotes medical practices based upon ignorance, which would definitely encompass Gwyneth Paltrow and the Food Babe.
I think "snake oil salesman" only applies if the target of the epithet is knowingly perpetrating a fraud.
You could (accurately, in my opinion) describe the things Kurzweil is selling as "snake oil" but I wouldn't describe him as a "snake oil salesman."
A whole lot of medicine practiced throughout human history has been based on ignorance, but I wouldn't describe medieval doctors (or whichever era) as "snake oil salesmen" because they believed in what they were selling.
This is a pretty important distinction, in my mind.
It was bad enough in the '90s and early '00s when it was just him, but now he has kicked off an entire subculture. Kurzweil, de Grey, Hanson, Bostrom, Yudkowsky... all total cranks, and inexplicably popular among ostensibly intelligent people. It's embarrassing.
It seems like you're basically just saying that any attempt to predict the future is crankery. I think it would be a sad world if everyone just said "If I can't see it in front of my eyes and it isn't relevant to me today it's a waste of time"
Could you explain how you got that conclusion? Those people have overlapping views. For example, many of them are associated with the Future of Humanity Institute.
I think it's more reasonable to conclude that Analemma_'s "crank" comment refers to only those in that subculture, and cannot be extended to all people who are making "any attempt to predict the future."
"if you are always predicting disaster, then your predictions are neither useful nor credible, since you will be wrong most of the time as the long-term trend of the market is upward."
I think it's certain that Analemma_ does not consider this sort of future prediction as part of the same crankery subculture as Kurzweil, etc.
Well, I guess the question basically boils down to whether Analemma_ or you can name futurists that are not cranks... if you can't, then it seems to me the beef is with the notion of futurism itself, not with an individual subculture of futurists.
It seems to me that Analemma_ (and likely yourself) are proponents of the view "the future will be mostly the same as the present with minor inconsequential variations" which is a very sound and reasonable view to hold, but also kind of sad and unambitious in my view.
I think that "futurist" is only a subset of those who make "any attempt to predict the future". For example, the Congressional Budget Office routinely makes predictions about the future effect of new legislation, though they are never called "futurists".
As to your second paragraph, I have no idea of how you concluded that that is my view. As a kid I enjoyed James Burke's "Connections" which, quoting Wikipedia, "explores an "Alternative View of Change" (the subtitle of the series) that rejects the conventional linear and teleological view of historical progress", and that viewpoint has stayed with me.
Personally, I'm of the view that the late 1800s - the Belle Époque - contained much more significant technological and cultural changes than the late 1900s, and that people like Kurzweil end up minimizing the large transitions brought about by the telephone, phonograph, the vertical filing system, punched cards, lighting system, etc.
Concerning just lighting, Paris got the nickname "The City Of Light" because of its early installation (in the 1860s) of gas lighting, and night-life started around them, enabled first by limelight and then arc lamps.
He didn't even come up with the singularity idea. Vernor Vinge originated it, and a whole bunch of people were discussing it at the time. Kurzweil just jumped on the bandwagon and started hyping it up to such an extent that many mistakenly believe the idea originated with him. It didn't.
And all three (Good, Vinge, Kurzweil) views of the Singularity are different.
This stuff is old: http://yudkowsky.net/singularity/schools/ It's amazing that people want to lump in anyone associated with it, or just related moral ideas like "death really sucks, what's a general way to end it?" as cranks of the same sort. Or worse compare it to a religion (http://archive.fo/6oI0u)
Yudkowsky is another self-promoting hype machine, in the vein of Kurzweil. I wouldn't take him as the most unbiased source on the history of the singularity idea or how "his" ideas differ from the rest.
It was my understanding from Kurzweil's books that he never meant for his predictions to be taken literally; rather, they were suggestions of what would be possible based on his Law of Accelerating Returns. (http://www.kurzweilai.net/the-law-of-accelerating-returns)
With that said, unless I've misunderstood, a significant percentage of his predictions have come true within a ~15-year window, which from my humble perspective seems like a really strong track record.
While I can understand the frustration people feel with predictions not being accurate, does that lessen the impact of his contributions? As the author mentions, cultural factors (as well as political and economic ones) may prevent the full potential of these changes from being realized. So yes, maybe "X" doesn't exist now, but perhaps "X" could exist (in terms of the capabilities for it to do so) if we as a collective were focused on bringing it into existence.
The idea that these were not literal predictions but theoretical examples is just laughable. If that were the case, he wouldn't have written a 150-page article desperately trying to rebut the suggestions that his predictions about 2009 weren't very accurate ("How My Predictions Are Faring"). Or he would at least have tried to be even remotely honest with the grading.
Have a look at the document. You'll find that if you apply his reasoning for why some predictions should be considered correct, they'd already have been true at the moment he made them. Predicting in 1999 that documents will routinely embed moving images? Truly a bold prediction to make during the golden age of the animated gif banner. Computers will exist the size of a thin book? Even if no technical progress at all had happened, he could just have pointed at a 1999 Palm Pilot or GSM phone.
He also marks things that are clearly failed predictions as "correct". For example he made the claim that most students and parents would have accepted for years that software is as effective as teachers. No. Way. But he marked that as "correct" with no relevant evidence at all.
Thanks for the suggestion; I have the PDF downloaded and will read it. Since I haven't read it, I can't comment on your rebuttals, but I have a follow-up question...
If we imagine a scenario where all of his predictions of applied tech are wrong but the mathematical thesis of accelerating returns is correct how would that impact your perspective on him?
My argument wasn't that he did not make literal predictions, he did and still does, but that these predictions are just his best guess based on what he observes through his demonstrated law of accelerating returns.
I’m not very familiar with his work, but when theory does not align with empirical data, does that not often indicate the theory should be revisited?
I read your link and his ideas seem very interesting. But I would think the real test of his ideas will be if the “law” he has extrapolated from observing historic phenomena like the rate of human genome processing or growth of ISP cost performance can be generally and reliably applied to predict the growth rate of new technologies.
If predictions made using the law of accelerating returns are unreliable, how useful is it?
Unlimited exponential growth is a horrible way to model anything. The real world has all kinds of limits; nothing can grow forever. First those limits are soft economic ones, slowing the rate of growth. Eventually they become hard physical ones, making growth literally impossible. It won't be exponential growth; it'll be logistic.
So the seemingly innocent "let's assume that this mathematical thesis is correct" is a pretty high bar. Unlimited growth is an exceptional claim, and it needs exceptional evidence.
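The exponential-vs-logistic distinction is easy to see numerically. Here is a minimal sketch (the parameter values, growth rate 0.7 and carrying capacity 100, are arbitrary and chosen only to illustrate the shapes):

```python
import math

def exponential(t, x0=1.0, r=0.7):
    """Pure exponential growth: diverges without bound."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.7, K=100.0):
    """Logistic growth: same early behavior, but saturates at carrying capacity K."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
assert abs(exponential(1) - logistic(1)) / exponential(1) < 0.02
# ...but later the exponential diverges while the logistic flattens out below K.
assert exponential(20) > 1e5
assert logistic(20) < 100.0
```

This is why extrapolating from the early, near-exponential part of a trend tells you little about where the curve eventually saturates.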
To be clear, nothing in his model suggests unlimited growth. He has mentioned in his writings that it is in fact limited, but that limit may as well be unlimited for you and me (within our lifetimes).
Software has enabled learning unlike any other event before in human history, the only previous comparable moment being when Gutenberg invented the printing press.
These days there isn't a thing on earth that I can't learn sitting at home. There is a huge difference.
Today the only barrier to learning is your own motivation. Access to quality information has become so easy and cheap that it's hard to blame somebody else for your failure.
The Human Genome Project is one of his most cited examples. After 15 years of work the team was at 1% completion. Critics were vocal about how it would take 100+ years to complete. Kurzweil predicted 7 years (seven doublings of 1% is 128%).
I interpret your question, in response to my statements, as saying you are not a fan because his predictions are not unique -- is that what you are saying?
"The Human Genome Project is one of his most cited examples. After 15 years of work the team was at 1% completion. Critics were vocal about how it would take 100+ years to complete. Kurzweil predicted 7 years (7 Doublings of 1 is 128)."
When did he make this prediction? Was it before Craig Venter started working it? Can you quote exactly what he said at the time and provide a source?
Also, you originally said a "significant % of his predictions have come true within a ~15-year window". You've mentioned only a single prediction. Unless he only made a few predictions, that's not a significant percentage.
"I interpret your question, in response to my statements, as saying you are not a fan because his predictions are not unique -- is that what you are saying?"
I just have a severe allergy to people who hype themselves as much as Kurzweil does, especially when one of the main things they're known for -- the singularity idea -- came from someone else: Vernor Vinge. This just smells of charlatanism.
I'm open to being convinced otherwise, however. So if you've got more examples of Kurzweil making long-term predictions well in advance of others and consistently being right, I'd love to hear them.
However, even if he is a good futurist, he'll still be a self-promoting copycat on the singularity idea he hyped to the heavens, and for which he made himself most famous.
Let's assume you are correct and his predictions are no better or more accurate than anyone else's. Even better, let's assume they're all wrong and always will be moving forward. Similar to what I posted above: if his mathematical thesis of the Law of Accelerating Returns is correct, how does he fare in your perspective?
From my perspective, as someone who has read his works but admittedly likely not researched rebuttals as deeply as you have, this stands as my clear takeaway.
Need to make a formal correction; I recalled this information incorrectly.
7.5 years into the project they were at 1% completion; 15 years was the total amount of time targeted.
"Halfway through the genome project, the project’s original critics were still going strong, pointing out that we were halfway through the 15 year project and only 1 percent of the genome had been identified. The project was declared a failure by many skeptics at this point. But the project had been doubling in price-performance and capacity every year, and at one percent it was only seven doublings (at one year per doubling) away from completion. It was indeed completed seven years later. Similarly, my projection of a worldwide communication network tying together tens and ultimately hundreds of millions of people, emerging in the mid to late 1990s, was scoffed at in the 1980s, when the entire U.S. Defense Budget could only tie together a few thousand scientists with the ARPANET. But it happened as I predicted, and again this resulted from the power of exponential growth."
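The doubling arithmetic behind the "1 percent, seven doublings from completion" claim in that quote is simple to check:

```python
# Start at 1% of the genome sequenced, at the project's halfway point.
pct = 1.0
years = 0
while pct < 100.0:
    pct *= 2   # assume capacity/price-performance doubles each year
    years += 1

# The doubling chain: 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128,
# i.e. seven doublings carry 1% past 100%.
print(years, pct)
```

Whether the real-world completion curve actually followed that doubling pattern (rather than an S-curve, as a later comment argues) is a separate empirical question; the arithmetic itself is not in dispute.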
When did he make that prediction? Was it in 1997, or was it a retrospective analysis made after the project was complete?
The draft genome was available in early 2001. How does that fit into the timeline? According to the doubling prediction, with 4 years to go less than 20% of the genome should have been available.
It took another 3 years until it was called "done", which means that the simple measurement of completion was not exponential, but more likely S-curved. (I think parts of the genome still aren't sequenced because of the regions of high repeats.)
"After the idea was picked up in 1984 by the US government when the planning started, the [Human Genome Project] formally launched in 1990 and was declared complete in 2003."
and "An initial rough draft of the human genome was available in June 2000".
That sounds like it was more than 1% done in 10 years.
Second, who thought it would take 100+ years? Quoting the same link:
"The $3-billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years."
Third, when did Kurzweil make his prediction? If it was 15 years after the start, and 7 years before the end in 2003, then are you saying he made his prediction in 1996 and the project started in 1981?
If you want an example of clear thinking about the future, I highly recommend reading Summa Technologiae by Stanislaw Lem. Instead of predicting low-level tech advances he speaks about what drives technology development in general and drafts several possible trajectories for the future. I've only read it this year and the sheer fact that a 1964 book about technology doesn't seem hopelessly outdated in 2017 speaks in its favor.
Also check out "The Machine Stops"[1] by E. M. Forster (more famous for writing "A Passage to India" and "A Room With a View").
Written in 1909, it foreshadowed the internet, VR, instantaneous global communication, chat rooms, internet addiction, and other things we take for granted now but which were incredibly visionary over a century ago.
"You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present."
Try doing anything that involves hardware and inevitably you'll need to have routine conference calls between the US and China.
"Can you hear me?"
"Sorry can you hear me now?"
"How about now?"
...
And then there are the calls between the US and US.
"I cannot hear you but you can hear me well?"
"I still can't hear you"
"I guess my computer is directing my speaker signal to my HDMI monitor which doesn't have speakers. Can you please call xxx-xxx-xxxx for the audio? We can keep the video on here."
> For 2019, Kurzweil predicted virtual reality glasses being in “routine” use. That does not look like it will happen.
What? At the most charitable and even holding it to 2019 strictly instead of allowing a few years, you're going to have to parse 'routine' to mean very large numbers, otherwise this is obviously going to be true.
Sales of the best-known top-end VR headsets (Oculus+Vive+PSVR) are already around 1m; throw in Gear, Daydream, Cardboard, and the various other Chinese, Microsoft, and miscellaneous projects, and it's at least double that. The last time I walked through a Best Buy, I saw an Oculus demo; the last time I walked through a mall, I saw a Vive demo. And 2017 is not over yet, leaving 2 full years. (For comparison, the Vive & Rift haven't even been out for 2 years.) The cost of the headsets has been dropping rapidly and capabilities improving, with Rift and Touch going for ~$350 in Black Friday sales, and the last Nvidia/AMD GPU generation also cut hundreds of dollars off the price of entry. On top of that, everyone expects considerable improvements: untethered/wireless will soon be feasible, the screens are expanding, and tracking is going inside-out. Something like Oculus Go but higher-quality is closer to what you should expect in 2019/2020.
Even assuming no growth in sales due to the cost continuously decreasing & quality improving or the buildup of respectable software libraries, that still implies several million top-end VR headsets in use. How is that not 'routine'?
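The back-of-envelope arithmetic above can be made explicit. This is a rough sketch using the comment's own assumed figures (the ~1m tethered headsets, "at least double that" for mobile, and flat sales for 2018–2019), not official sales data:

```python
# Back-of-envelope estimate of VR headsets in use by end of 2019.
# All figures are assumptions from the comment above, not measured data.
tethered_2017 = 1_000_000   # Oculus + Vive + PSVR, approximate
mobile_2017 = 1_000_000     # Gear, Daydream, Cardboard, etc. ("at least double")
years_remaining = 2         # 2018 and 2019
annual_sales = 1_000_000    # "no growth" assumption: flat tethered sales

total_2019 = tethered_2017 + mobile_2017 + years_remaining * annual_sales
print(f"Estimated installed base by end of 2019: {total_2019:,}")
# → Estimated installed base by end of 2019: 4,000,000
```

Even with these deliberately conservative inputs, the installed base lands in the "several million" range.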
Already in the 1990s the group I worked for routinely used VR glasses. I maintained about a dozen of them, for 3D visualization on SGI displays. We even had a special projector and screen for a shared VR display during group presentations or collaborations. I left the group, but they still routinely use improved versions of the same setup.
So you have to interpret it "to mean very large numbers" because if you don't, it was already true when he made the prediction.
How are we to know what numbers he meant when he made the prediction?
Personally I think it was deliberately vague so that once 2019 comes he can say that his prediction is true, no matter if it's 1 million or 100 million VR headsets.
Any guess as to how many VR headsets there were already in 1999? Our building probably had a hundred or so glasses, with the different people doing VR-related work. I'll guess there was a good fraction of a million in the world. Wikipedia tells me the "first contact lens display" was in 1999.
The Chinese VR glasses are sold everywhere in Latin America (I'm not sure if they're actually used or just a novelty item), and I expect them to be in the tens of millions too.
Do people actually use phone-based VR for anything? I got to try one of the Google phones in VR mode for a few minutes and was... thoroughly underwhelmed. It was ugly and unconvincing enough, and so low on interactivity (necessarily, best I could tell, having no input device other than the phone itself) that I can't imagine actually using it on a regular basis, and I'm the guy with a 10-year-old TV and similar-vintage monitor, so my standards aren't exactly high. I'd probably sell or throw away one of those headsets even if I got it for free.
Do people actually use them much? And if so, what for?
Honestly I can't tell... I do see them being sold, maybe as a trap gift for grandparents.
I've heard of people using it to watch YouTube or Netflix as a replacement for a big TV (most people have a phone, but not everyone has a big TV in their room here).
So 1% of the population owns high quality VR headsets, and say 25% of those use it on a regular basis. That's not routine in the way Smartphones are, that's routine in the hobbyist sense of the word.
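To put a number on "routine in the hobbyist sense," here is the same arithmetic spelled out. The population figure and both percentages are illustrative assumptions from the comment, not measured data:

```python
# Rough scale check: what does 1% ownership with 25% regular use imply?
# All three inputs are assumptions for illustration, not survey results.
population = 325_000_000        # roughly the US population
owners = population * 0.01      # ~1% own a high-quality VR headset
regular_users = owners * 0.25   # ~25% of owners use it regularly

print(f"{int(regular_users):,} regular users")
# → 812,500 regular users
```

Under those assumptions you get well under a million regular users, versus a couple hundred million smartphone users; that gap is the difference between "hobbyist routine" and "smartphone routine."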
As the article points out and I agree: "I think that this cultural drag is becoming increasingly important. William Gibson’s saying that “The future is already here. It’s just not very evenly distributed” is even more apt than when he said it."
I think the dude deserves a bit of a break: it really doesn't matter if he got 10%, 50%, or 90% of his predictions right. All that matters is whether he made better and more relevant predictions than others at the time.
On that measure, I think he scored reasonably well (Though I confess I sold some Google stock when they installed him as Director of Engineering)
No, it matters what his reasoning is, too, as it relates to just being lucky (or as I tell my nieces when they’re studying for a science class and guess the right answer: if you’re right for the wrong reasons, you’re still wrong.)
If you read his books, there’s an awful lot of hand waving and re- or mis-defining of terms to suit his arguments. It’s the sort of stuff that wouldn’t fly in a HS thesis paper, never mind as something marketed to bright minds.
The article we're discussing is about rating the accuracy of Kurzweil's predictions, it is not about the soundness of his reasoning. I agree that regardless of how we rate his predictions, the strategy he used to make those predictions has issues of its own.
The cynic in me sees that Kurzweil's predictions align with his unassisted life expectancy. So in some sense it's not surprising that he's being over optimistic. I'm not generally a fan, but we have to give some credit to 'futurologists' who are prepared to make predictions! At least it makes for some interesting discussion.
> but we have to give some credit to 'futurologists' who are prepared to make predictions!
I'm sorry, I do have to disagree. It's little more than recognizing a problem and envisioning a solution. Ideas are cheap.
And in that sense they are little more than entrepreneurs with grand long term visions that require no commitment. Give credit to those who put blood and sweat behind their visions - and make them happen, not make vague 'predictions'. That's what Hollywood is for.
I predict in the future we will be creating food from carbon scrubbed from atmosphere.
I predict in the future that humans will communicate with machines through quantum teleportation via implants
I predict that in the future, if these predictions don't hold true, no one I know will be around to call me on it.
> And in that sense they are little more than entrepreneurs with grand long term visions that require no commitment.
In light of Kurzweil specifically, he has at least some investment skin in the game in many (most?) cases of his predictions. All VCs to some extent are generally more optimistic than not that at least some of their bets will play out, and in my experience are more than happy to describe many of those bets in wild detail given the chance.
(His only real "sin" here seems more that he's captured mainstream intrigue/interest more so than the average VC, whether because they prefer to play their cards a bit closer to their vest and/or mostly just stick to leaving their prognostications here on HN far from the limelight. ;)
Some of Kurzweil's predictions also suffer from the idea that technology will have knock-on effects in less silicon-related fields like biology and medicine. For sure technology helps in those areas, but I haven't seen a fundamental change in how science is done or how our understanding of biology will start increasing at exponential rates. Fundamentally if you have to do things in physical space, there's a limit to your iteration speed.
Also, technology is most helpful at solving known unknowns (in which case you're often limited by meat space as described above), but in core basic science, we're often dealing with unknown unknowns, and there technology rarely helps. Luck and creative leaps usually do better.
I am kinda hoping we're going to see exponential improvements in biology soon: It seems to me the problems with protein folding and simulating cell biology become more approachable when using modern machine learning approaches and that people will find some fruitful methods of attack on that front in the near future.
That's absolutely true, but also keep in mind the limitations of both the examples you give.
Protein folding works reasonably well for short proteins on the order of ~200 amino acids. It gets considerably less good as the proteins get bigger. This is unsurprising: bigger proteins represent an exponentially bigger search space, and there are fewer and fewer known protein structures as the size of protein increases, so your library of exemplars is smaller (hurting ML approaches). Let's not even speak of protein complexes or dynamic protein interactions. Even at the end of the prediction, you still have to prove your result by getting an actual crystal, which is not always easy.
Cell biology simulation is a whole other ball game, and I don't know that area nearly as well. But from the little I understand, we have some very basic mathematical models of a cell that work in a limited way at a very high level. To get those things to work a number of extremely simplifying (but useful in some contexts) assumptions are made, which limits the broader applications of those simulations. Simulating a whole cell very quickly gets into the realm of definitely not computationally tractable given the enormous number of entities in the cell. It's a fiendishly hard problem, I hope we get there someday, but I think it'll be a long while.
> Even at the end of the prediction, you still have to prove your result by getting an actual crystal which is not always easy.
From my experience, this might be the largest hurdle. At least in biology, beyond proving your result, a lot of the experimentalists that you're talking to likely think that "all of this simulation stuff is complete bullshit." That might be a generational thing, though.
I think it's a pretty giant hurdle, and I don't think it's a strictly generational thing. Having been on both sides of the computational/experimental divide, I still fundamentally can't trust a result that has no experimental validation. At some point, even if you believe the computation's result, it has to be proved out in the physical world to be useful.
I disagree on protein folding: it seems to me that AlphaGo Zero is a great analogy to the protein folding problem. If you have a good algorithm for folding 100 amino-acid proteins, you can use it to brute-force solutions for 110 amino-acid proteins, then use those solutions as a training set to develop a good algorithm for 110 amino-acid proteins, then continue iterating in this way (or so I hope).
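The bootstrapping loop being proposed can be sketched in a few lines. This is a hedged illustration of the iteration scheme only: `brute_force_fold` and `train_predictor` are hypothetical stand-ins (here reduced to toy placeholders), not real folding or ML code:

```python
# Sketch of the proposed bootstrap: solve length N with the current model,
# use those solutions as training data for length N+10, and repeat.
# Both helper functions are hypothetical placeholders for illustration.

def brute_force_fold(sequence, model):
    """Stand-in for an expensive search guided by the current model."""
    # Toy placeholder: pretend the 'fold' is just a label derived from length.
    return len(sequence) % 3

def train_predictor(examples):
    """Stand-in for fitting an ML model on (sequence, fold) pairs."""
    lookup = {seq: fold for seq, fold in examples}
    return lambda seq: lookup.get(seq, 0)

def bootstrap(sequences_by_length, start=100, step=10, rounds=3):
    """Iterate: brute-force the current size class, train on the results,
    then move up to the next size class with the improved model."""
    model = None
    length = start
    for _ in range(rounds):
        seqs = sequences_by_length.get(length, [])
        examples = [(s, brute_force_fold(s, model)) for s in seqs]
        model = train_predictor(examples)  # model for the current length
        length += step                     # attack the next size class
    return model
```

The open question raised in the reply below still applies: unlike Go, there is no cheap oracle to tell you which brute-forced "solutions" are physically valid, so the training labels at each rung are only as good as the search that produced them.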
I think AlphaGo is a fundamentally different problem from the protein folding problem. I can generate a Go board and a sequence of strictly valid moves because I know the rules for Go. With certain limited high-level exceptions (the sequence is linear, you can't have bond angles of a certain degree, etc.), those rules don't exist as such for protein folding. In other words, if I generate 1000 protein folds, I don't know which ones are physically valid. That's a problem for the kind of iteration I think you're describing, though I may be misunderstanding.
Another way of seeing this limitation is that we have great models for predicting what the weather will be like a few days from now, almost down to the hour. But when asked to predict whether it will snow a month from now, we can't. That's because highly accurate models on small timescales become too noisy on longer timescales. In protein folding, just s/timescale/sequence length/.
Very interesting read. I'm a big fan of his work even though some of it might be a bit off. It was part of the inspiration for building my company. This is the first blog I ever wrote about the company:
First, I worked with Arnold Kling. He's a wonderful guy. I appear in his old book Under the Radar.
Second, Kurzweil is a creepy promoter of "inevitability." He is a bright guy with good access to privileged programmatic research. But he is a provocateur peddling Howard Stern-style futurism.
I prefer Bill Joy's rightful highlighting of our policy options and the virtues of learned caution. There are many things people can do that we police against for good reasons.
Theoretical and now even many applied sciences deliver more cautionary signals than go-go-go signals. Software folks should take that as good news. Solving problems makes problems go away. Nice architecture and good food are rewards in their own right, without creating new problems.
He's a busted flush. Stock market improvements over the same interval did better than his prognostications, which means you would get better trend advice from a one-liner:
"follow the money"
Hubris appears to have magic flotation properties. I don't expect Mr Kurzweil to stop bobbing along, singing his song, on the sea of emerging trends.
My impression is that his predictions about electronics have done better than his predictions about biology, consciousness, and AI. Unfortunately, the areas he did less well in are the areas where he is placing his hope...
One thing people don’t realize is those phones you’re carrying, strictly speaking contain more than one “computer”. There are 3 classical CPUs alone in an iPhone: main CPU, CPU that runs the Secure Enclave, and CPU that runs the motion coprocessor. Then there’s also an MCU in the WiFi chip, and another one driving the sensors such as gyros and accelerometers. And that’s without considering the GPU as a separate processor. Shit, Apple Watch contains more than one “computer” nowadays. Frickin’ Raspberry Pi runs two operating systems simultaneously: one on the GPU and one on the CPU.