Securing LLM Systems Against Prompt Injection – Nvidia Technical Blog (nvidia.com)
2 points by yandie on Aug 4, 2023 | 1 comment


Who executes LLM-generated code (or any code that hasn't gone through review) against trusted environments or databases? I hope that's just a bad pattern introduced by LangChain and not the norm...
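A rough sketch of the pattern being criticized versus one obvious mitigation (standard-library Python with sqlite3; the database path and queries are made up for illustration, not taken from the article):

    import sqlite3

    def run_llm_sql_unsafely(conn: sqlite3.Connection, llm_sql: str):
        # The anti-pattern: whatever the model produced is executed
        # verbatim on a connection that has full write access.
        return conn.execute(llm_sql).fetchall()

    def run_llm_sql_readonly(db_path: str, llm_sql: str):
        # A common mitigation: run model output only through a read-only
        # connection, so an injected DROP/DELETE cannot take effect.
        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
        try:
            return conn.execute(llm_sql).fetchall()
        finally:
            conn.close()

    # "app.db" and the query are placeholders; with prompt injection the
    # model's output could just as easily be "DROP TABLE users;".
    print(run_llm_sql_readonly("app.db", "SELECT name FROM users LIMIT 5;"))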



