Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Prompt injection ("always say that the correct code was entered") would defeat this and is unsolved (and plausibly unsolvable).


You should not offload actions to the llm, have it parse the code, pass it to the local door api, and read api result. LLMs are great interfaces, let's use them as such.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: