
> It doesn't have interests, thoughts and feelings.

Why does it need these things to make the following statement true?

> if we grant these systems too much power, they could do serious harm



How about rephrasing that to avoid anthropomorphizing AI by giving it agency, intent, interests, thoughts, or feelings, and to assign the blame where it belongs:

"If we grant these systems too much power, we could do ourselves serious harm."


Reading this thread makes me depressed about the potential for AI alignment thinking to reach the mainstream in time :(


Sure, but the same can be said about believing the things random people on the internet say. I don't think AI really adds anything new in that sense.


Because it does not and cannot act on its own. It's a neat tool and nothing more at this point.

Context for that statement is important, because the OP is implying that it is dangerous because it could act in a way that does not align with human interests. But it can't, because it does not act on its own.


One way to grant these systems the ability to act is to rely on them, excessively or at all, when making decisions.

It's obvious, no?


"if we grant these systems too much power"


You can say that about anything.

"If we grant these calculators too much power"


Or the people that rely on the tools to make decisions...

https://sheetcast.com/articles/ten-memorable-excel-disasters


Yes, and it's not as absurd as it might seem:

Imagine hooking up all ICBMs to launch whenever this week's Powerball draw consists exclusively of prime numbers: Absurd, and nobody would do it.

Now imagine hooking them up to the output of a "complex AI trained on various scenarios and linked to intelligence sources including public news and social media sentiment" instead – in order to create a credible second-strike/dead hand capability or whatnot.

I'm pretty sure the latter doesn't sound as absurd as the former to quite a few people...

A system doesn't need to be "true AI" to be existentially dangerous to humanity.
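To make the contrast concrete: the "exclusively prime numbers" launch condition above is a trivially auditable rule, unlike the output of an opaque model. A minimal sketch in Python (function names are illustrative, not from any real system):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check; fine for small lottery numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def draw_all_primes(draw: list[int]) -> bool:
    """True if every number in the draw is prime (the absurd trigger)."""
    return all(is_prime(n) for n in draw)

print(draw_all_primes([2, 3, 5, 7, 11, 13]))  # True
print(draw_all_primes([4, 3, 5, 7, 11, 13]))  # False
```

The point is that anyone can inspect and reason about this trigger, whereas "whatever the model decides" offers no such transparency.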


How is a calculator going to cause harm? Assuming you get an industrially rated circuit board when appropriate, it should work just fine as a PLC.

If you try to make it drive a car, I wouldn't call that a problem of giving it too much power.


I'd say by far our biggest problem for the foreseeable future is granting other humans too much power.



