
>And be rightfully sacked for maliciously burning millions of dollars on a retrain to purposefully poison the model?

Does it really take millions of dollars of compute to add additional training data to an existing model?
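
For what it's worth, it doesn't have to: the million-dollar figure is for a full retrain from scratch, while bolting new data onto existing weights with a small adapter is a commodity GPU job. A rough sketch using Hugging Face transformers + peft (the model name and two-line "corpus" are placeholders, not anyone's actual setup):

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model

    # "gpt2" is a stand-in; the same recipe applies to any causal LM
    # whose weights you have.
    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Freeze the base weights and train only a small LoRA adapter;
    # this is what keeps the cost at GPU-hours, not a full retrain.
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             task_type="CAUSAL_LM"))

    # Toy "additional training data" -- replace with a real corpus.
    texts = {"text": ["example document one", "example document two"]}

    def tokenize(batch):
        enc = tokenizer(batch["text"], truncation=True,
                        padding="max_length", max_length=64)
        enc["labels"] = enc["input_ids"].copy()  # causal LM: labels = inputs
        return enc

    dataset = Dataset.from_dict(texts).map(tokenize, batched=True)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=dataset,
    ).train()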

Plus, we're talking about employees who are leaving or have already left anyway.

>Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.

Excellent. That means plausible deniability.

Surely all those horror stories about unethical behavior are just hallucinations, no matter how specific they are.

Absolutely no reason for anyone to take them seriously. Which is why the press will not hesitate to run with them, with appropriate disclaimers, of course.

Seriously, you seem to think that in a world where death toll numbers in Gaza are taken verbatim from Hamas without corroboration from other sources, an AI model's output will not pass the test of public scrutiny?

Very optimistic of you.



