Hacker News

I disagree. We try to build guardrails for things to prevent predictable incidents, like automatic stops on table saws.





As we should, but if the table saw's automatic stopping mechanism breaks and you just bypass it, that's on you, not the table saw.

So if you craft a prompt specifically to make the LLM spit out malware, that's not the model's fault. Moderating output may matter for companies that profit from selling inference time to users, but for us regular users it's completely tangential.


_try_ being the operative word here: https://www.npr.org/2024/04/02/1241148577/table-saw-injuries...

SawStop has been mired in patent squatting and/or industry pushback, depending on who you talk to, of course.


We should definitely have the guardrails. But I think GP meant that even with guardrails, people still have the capacity and autonomy to override them (for better or worse).

There is a significant distinction between a user mangled by a table saw without a riving knife and a user mangled by a table saw that came with a riving knife that the user removed.

Sure, but if you then deliberately disable the automatic stop and write an article titled "The Monster Inside the Table Saw" I think it is fair to raise an eyebrow.

The scary part is that they didn't disable the automatic stop. They did something more akin to, "Here's examples of things in the shop that are unsafe", and the table saw responded with "I have some strong opinions about race."

I don't know if it matters for this conversation, but my table saw is incredibly unsafe, and yet I don't find myself to be racist or antisemitic.





