If you only read one thing, read the last few paragraphs after the second dashed line. They are relevant even if I were to agree with all the points you've made.

-----------

Obviously I agree with point 1. I'm not sure it's inevitable, but I think it's very likely.

> If it is the case that humans can build human level intelligence (which I have argued they can and will), then it is also pretty self evidently the case that humans can improve on that intelligence.

That's not self-evident at all. We don't even know what intelligence is, so it's possible that a human is already as intelligent as anything can be.

> For example, given we have the technology to build a machine with identical mental characteristics to a human, we would also have the requisite knowledge to, say expand that machine's working memory by 1%. Or to create the machine with swappable sensory organs so it can directly absorb lots of different types of data.

That's not so clear at all. You could say that the internet has already increased human memory by orders of magnitude, yet it hasn't made us "super intelligent". It is not certain that we can actually increase the "in-brain" working memory of an intelligent machine without, say, giving it a mental illness at the same time.

You assume that we can take "intelligence" and make it as malleable as we like, changing every parameter to our wishes, but until we know what intelligence is, that's not at all certain.

> You're not actually any smarter, but with 10 of you, you're more productive than any single human being could ever be.

But not more productive than 10 humans, and at some point you might become less productive. It's not at all unlikely that those 10 copies of you start hating one another and won't be able to work together.

And even if that were true, the self-improvement process would likely be very slow: http://www.sphere-engineering.com/blog/the-singularity-is-no...

Of course, the big question I pose to you at the end is why you think an improved mind is even an important goal in the first place, but we'll get there.

> Now your ideally-coordinated team of smart machines just got smarter across the board.

Like I said: 1/ you don't know that they'll be ideally coordinated, and 2/ you don't know how much harder it is to make them smarter. It may well require exponentially more resources.

> Soon you're millions strong, you have teams working on brain improvement, coordination improvement, science teams of all disciplines, a resource acquisition team playing the stock market, each moment shipping new improvements to your brain and amassing resources and power.

That's a very nice fantasy, but we already have lots of intelligent beings. Are they coordinating like that? To some degree, they are. But why do you think "smarter" things could do it better? Are smarter people better at amassing resources? At getting power?

> There you have a computer controlling hundreds or thousands of independent agents.

Yes, there's a non-intelligent machine doing that. But the intelligent machine playing that game can only play one game at a time. You have this incredibly powerful machine in your head, and it can only concentrate on one thing, or maybe two. It is possible that if you want an intelligent mind that is both coherent with itself and able to concentrate on lots of things at once, then you need a number of processing units (say, neurons) that grows exponentially with the number of things you want it to do at once. And maybe that's not even possible at all. Maybe beyond some point, the intelligence simply goes mad.

> Why not just write the software for the robots you control to be mostly autonomous, and give yourself an API through which to issue commands that the robots can mostly execute on their own?

Why don't you do that? Because it's hard and may take years, and some guys with guns might realize what you're doing before you finish and come arrest you.

> Further, why even do (b), when you have the ability to replicate your brain?

Because my brain replicas might very soon decide they don't want to play together.

> So, with this narrative of how it could work, do you see how dangerous an AI could be? One that isn't stably goal-aligned with basic things like life on earth?

Of course it could be dangerous (I never said I couldn't imagine a scenario where an AI would be so powerful and so dangerous), but I also think I've demonstrated why AI may not necessarily be as powerful as you think. Also, well before any of this could happen, there are more serious threats to human existence, like pandemics and climate change. They won't kill us all, but they might set us back a few centuries, AI-wise. So of all the possible dangers we must consider at this point in time, I would put a hostile and dangerous AI waaay down on the list. It's possible, but it's far more likely that other stuff gets us first.

-------

One of the dangers that is way more likely than the scenario you've described is sub-intelligent "hostile" software (perhaps not intentionally hostile, but indifferently so). It's more likely that some far-dumber-than-human machine would replicate itself, with full coordination over its replicas, and wipe us out.

I really think that this emphasis on intelligence as the danger is a personal fantasy of singularitarians who'd like to think of themselves as powerful and dangerous (or as the only safeguard against that). You really don't need to be that smart in order to amass power.

Another example: What I fear more than a super-intelligent AI is a super-charismatic AI. That AI, without replicating itself, without trying to improve itself, simply charms a lot of people into following it and establishes a reign of terror. Alternatively, since charisma is more effective than intelligence at controlling others, I would find it more likely that a charismatic AI would be able to control its replicas. It doesn't need to improve its own intelligence, because it's already intelligent enough to inflict all the damage it wants.

Or what about a cunning AI? Sure, cunning is correlated with intelligence to a degree, but beyond a certain point you don't find that very intelligent people are especially cunning. Sometimes far from it (Eliezer Yudkowsky is a prime example; his thought process is so predictable that, if our AI deity were like him, some cunning people would quickly find ways to foil its plans; that's a very non-dangerous AI).

We've seen how one charming person can be far more powerful than thousands of really smart people. We've also seen how very dumb insects can coordinate themselves in very large numbers in a very impressive way.

I think intelligent people tend to overestimate the importance of intelligence and fail to notice that other abilities we see around us every day are far more powerful. If anything, you'll note that very high intelligence is often correlated with low charisma, or with a sort of timidity -- nebbishness, if you will. A super-intelligent AI would probably be super-nebbish :)

Now, a super-charismatic AI -- now that's scary. Or a non-intelligent army of insects.

So if I were to ask you one question it would be this: Why do you place such a high emphasis on intelligence in the scenario you've described?




Ok, I think I'm starting to understand why we're disagreeing.

I'm not sure what your background is, but I'm noticing that I probably have significantly more detailed models of both intelligence and cooperation than you do. Please don't take that as an insult at all; it's just part of my work to know these things. I think the inferential gap may be too wide for a reply here :/

I think your position is understandable given your priors; you're not crazy for thinking what you think.

I feel like an asshole for stopping the conversation with "I know more than you, but I won't explain it"; it's a total dickhead move, and I'm really sorry. If I had time, I'd write much more, and I bet it would be great for both of us. Maybe someday we'll have the chance :)


Perhaps you're right, but I very much doubt that this is the case :) My professional background is in math, CS, history, and psychology (my career has been quite varied, and has included research in neural networks, although that was over 15 years ago, as well as fieldwork with criminals).

My main motivation wasn't to challenge your beliefs about the capabilities of an artificial intelligence (I fully realize it is possible that it will take the form of an all-knowing god), but to challenge your beliefs in our ability to predict technological advances and threats to humanity (of which I doubt you have any promising models, because there aren't any), as well as to provoke thought about how power flows in society, where a single charismatic Hitler was far more dangerous than 100 Einsteins, and whose charisma swayed people regardless of their level of intelligence (in fact, people with a humanities background, like writers, were on average far less likely to be swayed than people with a science background). I think that an intelligence 100x that of humans (whatever that means) is still likely to be swayed by a charismatic person. I am not suggesting that AIs would necessarily be subjugated to people or integrated into our society, but that they, too, might feel their own social forces, and there are some interesting dangers lurking there. For example, consider this: what happens not if someone invents an AI, but if 50 labs invent 50 AIs at the same time (which, frankly, is far more likely)? How would those AIs interact with one another? How would their goals evolve as a result of that interaction?

Most discussions of AI concentrate on intelligence and goals/motivations -- which is understandable given the psychology of the people participating in the discussion -- and not enough on the AI's psychology and social psychology (even amongst other AIs). I probably know the answer you'd give to my question of "why do you think it is intelligence that is so dangerous", but that is not the right answer, or at least not the interesting one (even if you define intelligence as a general problem-solving skill, with higher intelligence meaning more problems of any kind solved faster -- though there are very good reasons not to define intelligence in this way) :)

My problem with the singularitarians' "threat model" is not that it is implausible, but that they have not given serious consideration to other kinds of threats, even ones involving AIs. They tend to fall into the trap that many people who are only science-educated fall into: falling in love with their one pet theory rather than truly trying to challenge it. Yudkowsky's discussions of potential challenges to his "AI threat" are rather laughable, not because of how he counters them but because of how narrow his thinking is (which is a result of his obsession with what he calls rationality).

I'm sure Yudkowsky would find my discussion of his followers' psychology ad hominem and extremely irrational, but that is not the case if you're trying not to evaluate a particular threat model, but to construct a meta-model of threat models and of how you weigh one against another. If you construct a Bayesian analysis of threats under that meta-model, you'll see that psychology plays a big role; a rough sketch of what I mean follows.
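To make that concrete, here is a minimal sketch (the notation is mine, purely for illustration, and not taken from anyone's actual model): treat each proposed threat model M_i as a hypothesis, and weight its prediction by how much credence you give the model itself:

P(threat) = \sum_i P(threat | M_i) * P(M_i)

The prior P(M_i) has to account for the biases and blind spots of whoever constructed M_i, which is exactly where psychology enters the calculation: two people can agree on every P(threat | M_i) and still rank the threats completely differently because they weigh the models differently.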

Another thing I find curious about his thinking (and it very much affects how he weighs threats) is the assignment of negative infinity to the event "destruction of the human race". Consider the following list: your own death, the destruction of your life's work, your own physical suffering, the death of your entire family, the physical suffering of your entire family, the destruction of your country (this may not seem relevant to some, but I have a good reason to include it), the destruction of your culture, the destruction of human civilization, the destruction of the human race, and the destruction of the planet. I think you'll find that people either assign negative infinity to something much earlier in the list than "destruction of the human race", or assign it to events other than that one and not to that one, or assign it to none of them -- and in that last case, the jumps in values are rather surprising (or not, depending on how you look at things). For example, I would give the destruction of civilization and the destruction of the human race the same value (not negative infinity, though), and the destruction of the planet a much, much lower (more negative) one. For instance, I find a nuclear holocaust a much worse outcome than an AI destroying all humans but keeping all animals alive, even if some humans survive the first event (but not civilization) and none survive the second. I would also assign a lower value to all humans becoming fascists than to an AI destroying all humans.
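To spell out why that assignment matters so much (this is my own illustration, not anything from the original argument): once a single outcome is assigned utility negative infinity, expected-utility comparisons collapse, because

E[U] = \sum_j p_j * u_j = -infinity whenever some outcome k has p_k > 0 and u_k = -infinity.

Every course of action with even the tiniest probability of that outcome becomes infinitely bad, and equally so, regardless of what else it does. So where you place the negative infinity (if anywhere) ends up completely determining how you rank threats against one another.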

But I'm sure there are others who are more thought-provoking than him who discuss this topic.



