LIME and other post-hoc explanation techniques (DeepSHAP, etc.) only give an explanation for a single inference, but aren't helpful for understanding the model as a whole. In other words, you can make a reasonable guess as to why a specific prediction was made, but you have no idea how the model will behave in the general case, even on similar inputs.
Is there a study of "smooth"/"stable" "AI" algorithms - i.e. if you feed them input that is "close", then the output is also "close"? (smooth as in smoothly differentiable/stable as in stable sort)
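For what it's worth, the "close input gives close output" property is usually formalized as (local) Lipschitz continuity: bounding how much the output can change per unit of input change. Below is a minimal, illustrative sketch of how you might probe that empirically by sampling small perturbations around a point and measuring the output/input change ratio. The `model` and `empirical_lipschitz_ratio` names are my own for the example, not from any particular library, and a sampled ratio like this is only a lower-bound estimate of the true local constant.

```python
import numpy as np

def empirical_lipschitz_ratio(model, x, n_samples=100, eps=1e-2, rng=None):
    """Estimate a local sensitivity ratio ||f(x+d) - f(x)|| / ||d||
    by sampling small random perturbations d around x.
    This is a lower-bound probe, not a certified Lipschitz constant."""
    rng = np.random.default_rng() if rng is None else rng
    fx = model(x)
    ratios = []
    for _ in range(n_samples):
        d = rng.normal(scale=eps, size=x.shape)
        ratios.append(np.linalg.norm(model(x + d) - fx) / np.linalg.norm(d))
    # worst observed sensitivity near x
    return max(ratios)

# Toy "model": a linear map, whose true Lipschitz constant is its
# largest singular value (2.0 here), so the estimate should approach 2.0.
W = np.array([[2.0, 0.0],
              [0.0, 0.5]])
model = lambda x: W @ x
print(empirical_lipschitz_ratio(model, np.array([1.0, 1.0])))
```

A sampled estimate like this says something about one neighborhood only; certified approaches (e.g. spectral-norm bounds on each layer) are what give a guarantee over the whole input space.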