Lots, yes. The fine-tuning may attempt to introduce concepts that were intentionally omitted from the training data for safety* reasons.

There may be nothing wrong with that, but it could mean that the perceived weaknesses don't generalize to areas of the model that haven't been lobotomized.

* Using "safety" the way OpenAI have been using the term; not looking to debate the utility of that.
