
I see a lot of "we don't know how it works therefore it could destroy all of us" but that sounds really handwavy to me. I want to see some concrete examples of how it's dangerous.


Given that the link contains dozens of articles with read times over 10 minutes, there is no way you engaged with the problem enough to dismiss it so casually with your own hand-waving. Setting that aside, we can just look at what Bing and ChatGPT have been up to since release.

Basically immediately after release, both models were "jailbroken" in ways that allowed them to do undesirable things OpenAI never intended, whether that's giving recipes for cooking meth or going on unhinged rants and threatening to kill the humans they are chatting with. In AI safety circles you would call these models "unaligned": they are not aligned to human values and do things we don't want them to.

Here is the problem: as impressive as these models may be, I don't think anyone believes they are really at human levels of intelligence or capability; maybe they're barely at mouse-level intelligence or something like that. Even at that low level of intelligence, these models are unpredictably uncontrollable. We haven't even figured out how to make these "simple" models behave in ways we care about. Now project forward to GPT-10, which may be at human level or higher, and think about what it may be able to do. We already know we can't control far simpler models, so this model will likely be even less controllable, and since it is much more powerful it is much more dangerous. Another problem is that we don't know how long we have before we get to a GPT-N that is actually dangerous, so we don't know how long we have to make it safe. Most serious people in the field think making human-level AI is a very hard problem, but making a human-level AI that is safe is another step up in difficulty.


These models are uncontrollable because they're simple black-box models. It's not clear to me that the kind of approach that would lead to human-level intelligence would necessarily be as opaque; given the limited amount of training data, the models involved would require more predefined structure.

I'm not concerned, absent significant advances in computing power far beyond the current trajectory.


Are people dangerous? Yes or no question.

Do we have shitloads of regulations on what people can or cannot do? Yes or no question.


Sometimes yes, and sometimes yes.

I can be convinced, I just want to see the arguments.


The best argument I can make is to say: do not come at the issue with black-and-white thinking. I try to look at it more in terms of the probability of 'if' and 'when'.

Myself, I think the probability of a human-level-capable intelligence/intellect/reasoning AI is near 100% in the next decade or so, maybe two decades if things slow down. I'm talking about taking information from a wide range of sensors, being able to use it in short-term thinking, and then being able to reincorporate it into long-term memory as humans do.

So that gets us human-level AGI, but why would it be capped there? Science, as far as I know, hasn't come up with a theorem that says once you are as smart as a human you hit some limit and it doesn't get any better than that. So now you have to ask: by producing AGI in computer form, have we actually created an ASI? A machine with vast reasoning capability, but also sub-microsecond access to a vast array of different data sources, for example every police camera. How many companies will allow said AI into their transaction systems for optimization? Will government-controlled AIs be backed by laws requiring that your data be accessible to and monitored by the AI? Already you can see how this can spiral into dystopia...

But that is not the limit. If AI can learn and reason like a human, and humans can build and test smarter and smarter machines, why can't the AI? Don't think of AI as just the software running on a chip somewhere; also think of it as every peripheral controlled by said AI. If we can have an AI create another AI (hardware + software), the idea of AI alignment is gone (and it's already pretty busted as it is).

Anyway, I've already written half a book here and have not even touched on any number of the arguments out there. Maybe reading something by Ray Kurzweil (pie in the sky, but interestingly we're following that trend) or Nick Bostrom would be a good place to start just to make sure you've not missed any of the existing arguments that are out there.

Also, if you are a video watcher, check out Robert Miles's YouTube channel.


> Myself, I think the probability of a human-level-capable intelligence/intellect/reasoning AI is near 100% in the next decade or so, maybe two decades if things slow down.

How is this supposed to work, as we reach the limit to how small transistors can get? Fully simulating a human brain would take a vast amount of computing power, far more than is necessary to train a large language model. Maybe we don't need to fully simulate a brain for human-level artificial intelligence, but even if it's a tenth of the brain that's still a giant, inaccessible amount of compute.
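
For a sense of scale, here is a rough back-of-envelope sketch in Python. All the numbers are contestable estimates I'm assuming for illustration (~10^15 synapses, a 1 kHz update rate, ~10 floating-point ops per synaptic update, and the published ~3.14e23 FLOPs estimate for GPT-3's training run), not authoritative figures:

    # Back-of-envelope comparison (all figures are rough estimates, not measurements):
    # brute-force simulation of a human brain vs. the compute used to train one LLM.

    SYNAPSES = 1e15            # commonly cited estimate: ~10^14 to 10^15 synapses
    TIMESTEP_HZ = 1e3          # assume a 1 kHz update rate per synapse
    FLOPS_PER_UPDATE = 10      # assume ~10 floating-point ops per synaptic update

    brain_flops_per_second = SYNAPSES * TIMESTEP_HZ * FLOPS_PER_UPDATE  # ~1e19 FLOP/s

    SECONDS_PER_YEAR = 3.15e7
    one_simulated_year = brain_flops_per_second * SECONDS_PER_YEAR      # ~3e26 FLOPs

    GPT3_TRAINING_FLOPS = 3.14e23  # published estimate for GPT-3's total training compute

    print(f"brain sim, sustained:  {brain_flops_per_second:.1e} FLOP/s")
    print(f"one simulated year:    {one_simulated_year:.1e} FLOPs")
    print(f"GPT-3 training, total: {GPT3_TRAINING_FLOPS:.1e} FLOPs")
    print(f"ratio:                 {one_simulated_year / GPT3_TRAINING_FLOPS:.0f}x")

On those assumptions, a single simulated year of one brain costs on the order of a thousand GPT-3 training runs, before you even ask how many simulated years you'd need. That is the gap I'm pointing at.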

For general, reason-capable AI we'll need a fundamentally different approach to computing, and there's nothing out there that'll be production-ready in a decade.


I don't see why the path of AI technology would jump from 1) "horribly incompetent at most things" to 2) "capable of destroying humanity in a runaway loop before we can react".

I suspect we will have plenty of intermediary time between those two steps where bad corporations will try to abuse the power of mediocre-to-powerful AI technology, and they will overstep, ultimately forcing regulators to pay attention. At some point before it becomes too powerful to stop, it will be regulated.


Simpler, less powerful, models have already contributed to increases in suicides, genocide and a worldwide epidemic of stupidity.

Assuming more powerful models will have the same goals, extrapolate the harm by simple multiplication until you run out of resilience buffer.



