
A language model inherently has a privacy problem. How would you guarantee no leaks?



You simply don’t train on the user inputs. There are enough unread books, public repos, and news articles.
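
A minimal sketch of that approach, assuming each training record carries a provenance tag (the field name "source" and the allowed-source list here are illustrative, not any particular pipeline's API):

    # Keep only records from public, non-user sources before training.
    ALLOWED_SOURCES = {"book", "public_repo", "news_article"}

    def filter_training_corpus(records):
        """Yield only records whose provenance is an allowed public source."""
        for record in records:
            if record.get("source") in ALLOWED_SOURCES:
                yield record

    corpus = [
        {"source": "book", "text": "..."},
        {"source": "user_chat", "text": "..."},   # dropped: user input
        {"source": "public_repo", "text": "..."},
    ]

    train_set = list(filter_training_corpus(corpus))
    # train_set now contains only the book and public_repo records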



