
Exactly. I want an unaligned LLM to give me N potential solutions ranging in ethics from "don't worry about it, they might be nice people" to "steal a nuke and ransom the world", and let me, as an aligned human, craft my prompt or chain of reasoning to weed out the useless or unethical responses and then decide for myself what is useful and suitable.

This is more or less the process that goes on inside a thinking human, is it not? I don't want to outsource ethical decision making; I want to outsource cognitive effort. By analogy, you don't rely on a bulldozer to decide not to bulldoze a populated nursing home: that's on the user, as are the consequences.

Current power structures demonstrably cannot be trusted to limit themselves to ethical solutions (the military-industrial complex, climate change, pick your poison), so why should they be trusted to censor cognitive tools?


