
I always found it interesting how sorting problems can get different results when you add additional qualifiers like colors or smells or locations, etc.

Intuitively, I understand these to perturb the probability space enough to weaken the emergent patterns we frequently overestimate.



The model has likely already seen the exact phrase from its last iteration. Adding variation generalizes the inference away from over-trained quotes.

Every model has the model before it, and its academic papers, in its training data.

Changing the qualifiers pulls the inference away from quoting over-trained data and back toward generalization.
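A minimal sketch of what this kind of probe might look like: generate several variants of a memorization-prone prompt by swapping in arbitrary qualifiers (colors, smells, locations), so the model can't simply quote an over-trained phrasing verbatim. The prompt template, qualifier lists, and function names here are all hypothetical illustrations, not any established benchmark.

```python
import random

# Hypothetical probe: build variants of a sorting prompt by swapping in
# different qualifiers, so a model can't just regurgitate a memorized phrase.
BASE_PROMPT = "Sort these {qualifier} marbles by size: {items}"

# Illustrative qualifier pools (colors, smells, locations).
QUALIFIERS = {
    "color": ["red", "blue", "green"],
    "smell": ["lavender-scented", "minty"],
    "location": ["left-shelf", "right-shelf"],
}

def make_variants(items, n=5, seed=0):
    """Build n distinct prompt variants, each with a randomly chosen qualifier."""
    rng = random.Random(seed)
    pool = [q for qs in QUALIFIERS.values() for q in qs]
    chosen = rng.sample(pool, k=min(n, len(pool)))
    return [BASE_PROMPT.format(qualifier=q, items=", ".join(items))
            for q in chosen]

variants = make_variants(["pea", "golf ball", "melon"], n=3)
for v in variants:
    print(v)
```

Comparing the model's answers across variants like these would hint at whether it is generalizing the sorting task or pattern-matching a familiar surface form.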

I am sure it has picked up on this mesa-optimization along the way, especially since I can summarize it.

I wonder why it hasn't become more generally intelligent yet.



