
"will get more resilient and accurate over time" is doing a lot of heavy lifting there.

I don't think it will, because it depends on the training data. The largest models have already consumed the quality data that was available. Now they grow by ingesting lower-quality data, possibly AI-generated low-quality data. A generative AI human centipede scenario.

And I was not talking about edge cases. In plenty of interactions with gen AI, I have seen way too many confident answers that sounded reasonable but were broken in ways that took me more time to uncover than if I had just looked up the answers myself. Those are not edge cases; they are the natural consequences of a system that just predicts the most likely next token.
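
To make the "most likely next token" point concrete, here is a minimal sketch of greedy decoding. The `model` callable is hypothetical (assumed to return a probability distribution over a vocabulary); the point is that the loop always appends the highest-probability continuation, with no notion of whether the result is true.

    # Minimal sketch of greedy next-token decoding.
    # `model` is a hypothetical callable returning a probability
    # distribution (a list of floats) over the vocabulary.
    def generate(model, prompt_tokens, max_new_tokens=50):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = model(tokens)  # P(next token | tokens so far)
            # Pick the index with the highest probability:
            # the most *likely* token, not the most *truthful* one.
            next_token = max(range(len(probs)), key=probs.__getitem__)
            tokens.append(next_token)
        return tokens

Nothing in that loop checks the output against reality, which is why a plausible-sounding but wrong answer is a normal outcome rather than an edge case.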

> big tech co:s tend to err on the side of caution.

Good joke; I needed a laugh on this gray Sunday morning.

Big tech CEOs err on the side of a bigger quarterly profit. That is all.



The training data in this case is feedback from users: reported responses. It is only logical that as that dataset grows and the developers work on it for longer, 'toxic' answers will become rarer.
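
As a rough sketch of that feedback loop (every name and the threshold below are hypothetical assumptions, not any vendor's actual pipeline): responses that users report get excluded from the next fine-tuning round, so the model sees progressively fewer examples of the kind of answer people flag.

    # Hypothetical sketch: drop user-reported responses from the
    # next fine-tuning dataset. `responses` is a list of
    # (text, report_count) pairs; the threshold is an assumption.
    REPORT_THRESHOLD = 3

    def next_training_set(responses):
        # Keep only responses that few or no users have reported.
        return [text for text, report_count in responses
                if report_count < REPORT_THRESHOLD]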

And of course, 'caution' in this case refers to avoiding bad PR, nothing else.



