
We’re fine with maintaining “moats” around the companies capable of producing nuclear reactions, aren’t we?

“The technology is transformative therefore we cannot entertain the idea of regulation” seems obviously backwards to me.




If you can show me how I can raze a city to the ground with LLM-generated abuse material, I will agree with you.


We’re also fine limiting who is allowed to practice law. Or who is allowed to drive a car. Or who is allowed to own a firearm. Or who is allowed to send automated text messages. Or who is allowed to market a drug. Or who is allowed to broadcast radio on certain bands. Or who is allowed to fly aircraft. Or who is allowed to dump things in rivers.

People become blind to the huge amount of control society exerts over most technologies, often for good reason and with decent results, and then have some ideological fixation that AI needs to be the one technology that is totally immune to any control or even discussion of control.


All of your examples offer up immediate, obvious harms that have actually hurt people in real life in measurable ways (injury, death), and that we've put mechanisms of control in place to reduce. I think that's good. It means society chooses to control things when a clearly articulated risk is both present and manifests often enough to warrant that control.

Not regulating lawyers leads to direct harm to the people hiring them, and the outcome of their court cases. It also has knock-on effects regarding the integrity of the justice system, which is part of the government. Exerting control makes sense for a bunch of reasons, from actual harm being manifested to the fact that justice is a government responsibility.

Not regulating who can drive cars leads to additional injury and death.

Gun control laws are attempting to address the harm of gun violence, which leads to injury and death.

Regulating spam addresses the harm of one actor externalizing their costs onto all of society, making our messaging systems (like phone calls, texting, and email) ineffective at their main purpose. This harms societies that use those systems for vital communication, since all of these are "push" channels, in the sense that one can get overwhelmed by incoming messages, emails, and calls.

Regulating drug manufacture addresses the case of manufacturers producing "medicine" that harms those who buy it, or extracts money from them despite the "medicine" being entirely ineffective. Both harms are well documented going back decades, if not centuries.

Regulation of spectrum (broadcast and otherwise) is a result of the inherent scarcity of spectrum. Much like the automated messaging example, this system of control maintains the utility of the communication channel.

Regulating who can pilot aircraft follows the same logic as cars, but more so: the costs are higher and the potential damage is greater.

Regulating the dumping of waste into rivers is again about the externalization of cost onto society: it addresses the harm of companies dumping toxic waste into public water supplies, thus poisoning citizens. This is a real risk, and regulation helps address it.

In every single case, the control society exerts addresses a real, actual harm that has been observed in many, many cases.

I have yet to hear anyone articulate a real, actual harm caused by an uncensored AI. I run Mistral on my laptop using koboldcpp or llama.cpp. Even if someone were to host Mistral publicly and allow folks to chat with it, the harm is unclear. People say inappropriate things (at least in some contexts) to Mistral, and Mistral responds in kind. Where's the harm? If I want it to help me write a violent fight scene for a novel or play, or describe something sexual for an erotic story, so what? This sort of stuff is discussed by humans constantly.
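
For concreteness, this is roughly what "running Mistral on a laptop" looks like. A minimal sketch using the llama-cpp-python bindings for llama.cpp; the GGUF filename and prompt are illustrative, not a description of my actual setup:

  from llama_cpp import Llama  # pip install llama-cpp-python

  # Load a locally downloaded GGUF quantization of Mistral 7B.
  # The filename is illustrative -- point this at whatever quant you have.
  llm = Llama(model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)

  # Mistral's instruct format wraps the request in [INST] ... [/INST].
  out = llm("[INST] Write a short, tense fight scene for a novel. [/INST]",
            max_tokens=512)
  print(out["choices"][0]["text"])

No network, no API key, no provider terms of service. The point is that the model already runs entirely under the user's control.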

For me to buy that we need control and regulation, I need to understand the problem being solved, and the cost of the solution needs to be far outweighed by the benefit. So far, I haven't heard such a tradeoff articulated. My hypothesis is that most companies working on training AIs have a lot to lose, so most of the "safety" talk is intended to provide legal cover.



