
LIME and other post-hoc explanation techniques (DeepSHAP, etc.) only give an explanation for a single inference, but aren't helpful for understanding the model as a whole. In other words, you can make a reasonable guess as to why a specific prediction was made, but you have no idea how the model will behave in the general case, even on similar inputs.
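
To make the "explanation for a single inference" point concrete, here is a minimal sketch using the lime package with a placeholder scikit-learn model (the dataset and classifier are illustrative, not from the original comment):

  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.datasets import load_breast_cancer
  from lime.lime_tabular import LimeTabularExplainer

  # Placeholder data and model purely for illustration.
  data = load_breast_cancer()
  X, y = data.data, data.target
  model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

  explainer = LimeTabularExplainer(
      X,
      feature_names=data.feature_names,
      class_names=data.target_names,
      mode="classification",
  )

  # Explain one specific prediction: the output says which features pushed
  # *this* instance toward its predicted class, but nothing about how the
  # model behaves on other (even nearby) inputs.
  exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
  print(exp.as_list())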


The purpose of a post-hoc explanation is to increase a practitioner's confidence in acting on that particular prediction.

There's a disconnect between chasing a real-life “AI” and trying to find something that actually works and that you can place some form of trust in.


Is there a study of "smooth"/"stable" "AI" algorithms - i.e. if you feed them input that is "close", is the output also "close"? (smooth as in smoothly differentiable, stable as in a stable sort)
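
One way to make the "close inputs, close outputs" question concrete is to estimate a local Lipschitz ratio empirically. This is a rough sketch, not a guarantee; `predict_fn` stands in for any model function (e.g. the `model.predict_proba` from the sketch above):

  import numpy as np

  def local_lipschitz_estimate(predict_fn, x, eps=1e-2, n_samples=100, seed=0):
      # Sample small perturbations around x and measure how much the output
      # moves relative to the input change. Large ratios suggest the model
      # is locally unstable at x; this is a heuristic probe, not a proof.
      rng = np.random.default_rng(seed)
      base = predict_fn(x[None, :])[0]
      worst = 0.0
      for _ in range(n_samples):
          delta = rng.normal(scale=eps, size=x.shape)
          out = predict_fn((x + delta)[None, :])[0]
          ratio = np.linalg.norm(out - base) / np.linalg.norm(delta)
          worst = max(worst, ratio)
      return worst

  # Usage (with the hypothetical model and X from the earlier sketch):
  # print(local_lipschitz_estimate(model.predict_proba, X[0]))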



