Hacker News

> there exists a string of words (not necessarily ones that make sense) that will get the LLM to a better position to answer

Exactly. The opposite is also true: you might supply clarifying information that would help any human answer, yet it actually degrades the LLM's output.



This is frequently the case IME, especially with chat interfaces. One or two bad messages and you derail the quality of the whole conversation.


You can also just throw in words to bias it toward certain outcomes. The same applies to image generators, of course.




