Very little of ML is "principled" (e.g., taking into account probability distributions, priors, bounds on parameter values, etc.). Most of the time it is a brute-force approach that lets modelers avoid "thinking" about probability distributions, transformations, and so on.
I did a lot of the "principled" modeling you talk about, in Stan, TMB, and JAGS back in the day. But outside of the need for an "explanation" of model behavior, which is a scientific need much more than an engineering one (mind you, not having an explanation here does not mean having no idea what the model does; it concerns the relationship between x and y, both in how we arrive at the parameter estimates and in how we interpret the parameters themselves), I would almost always favor a "brutish" approach for prediction in industry, because of (1) convenience, (2) accuracy, which is almost always better for ML models even with un-principled methods, and (3) the fact that, outside of proper causal inference, predictions are what matter; even when people demand an "interpretation", causality is just a guess anyway when the data and model are not up for that kind of analysis.
Scientific vs. engineering needs is a false dichotomy. Explanation of model behavior matters a great deal in many, many engineering contexts, but my point goes further than that.
You may be thinking too narrowly about what is meant by "interpretation". Or rather, conflating "interpretation of the predictions of an ML system", which is the common understanding in professional circles, with "interpretation of the real system whose aspects we are predicting with ML", which is a more colloquial frame. I don't fault you for this, as I have been ambiguous in my usage, and the two overlap quite substantially, particularly at the outputs of the ML system.
An alleged association between homosexuality and passport photos, for instance, is an interpretation of the ways humans exist and what they are fundamentally (read: physiognomy). Automating this association encodes a specific human-level interpretation about what is true about people into the ML system. But this joint distribution between homosexuality and the way a face looks when you record a picture of it is bogus in ways that are hard to put into words. The principle is lacking completely. And this kind of system can very easily be used for extreme harm in the wrong hands.
Nevertheless, surely someone motivated would (1) consider this approach convenient, (2) have a model that is accurate (with respect to the data) once training completes, and (3) use the raw predictions because they think those "are what matters".
I find, not only for myself but for others as well, that awareness of the technical foundations opens up other ways of thinking about these issues, ones that synthesize the technical and the social impacts of design decisions.