Do you have more information on jaquesm's ML model than what he mentioned in the comment, or was that a sweeping claim about any high-accuracy ML system?
There is no reason why the consequences of false positives and false negatives cannot be incorporated into the model itself. In fact, for certain kinds of systems, such as 'alarms' or 'imbalanced classes', this is pretty standard.
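To make that concrete, here is a minimal sketch of two standard ways to do it, using scikit-learn with entirely made-up costs and data: bake the asymmetry into training via class weights, or leave the model alone and shift the decision threshold to minimize expected cost.

```python
# A minimal sketch of cost-sensitive classification, assuming a binary
# "alarm" problem where a false negative (missed alarm) is far more
# costly than a false positive. Data and cost values are made up.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced toy data: ~5% positives, as in many alarm systems.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Option 1: bake the asymmetry into training via class weights.
# Here a missed positive is treated as 10x worse than a false alarm.
clf = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000)
clf.fit(X_train, y_train)

# Option 2: keep the model as-is and move the decision threshold to
# minimize expected cost: predict positive whenever
# P(pos) * cost_fn > P(neg) * cost_fp.
cost_fp, cost_fn = 1.0, 10.0
proba = clf.predict_proba(X_test)[:, 1]
y_pred = (proba * cost_fn > (1 - proba) * cost_fp).astype(int)
```

Either way, the relative costs of the two error types end up encoded in the system rather than bolted on afterwards.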
He doesn't, and the incredible certainty with which he speaks about a system without any knowledge of the context or the application is an interesting study in how online conversations derail.
Anyway, the misclassifications are much the same as with the original system, in fact on the same parts, only with a far lower incidence. To me it looks as if the ML system simply managed to extract far more features (and automatically) than I would have had time to do by hand. On top of that, it adapts more easily to new, previously unseen content, because I don't need to come up with a bunch of (reliable!) rules to tell those parts apart from the previous ones (though this does require a complete retraining of the net).
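As a toy illustration of that difference (made-up data, not the actual system): a hand-written rule only covers the distinctions its author thought of, while a learned model picks its features up from the examples, and absorbing genuinely new content is a retrain rather than new rules.

```python
# Toy contrast between a hand-written rule and a learned classifier,
# with entirely made-up data. The point: the net's "rules" come from
# the examples, so covering new content is a retrain, not new code.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Original content: two clusters the hand rule happens to separate.
X_old = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_old = np.array([0] * 200 + [1] * 200)

def hand_rule(x):
    # A rule written by inspecting the old data: it works there, but
    # encodes only the distinctions its author thought of.
    return int(x[0] + x[1] > 4.0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_old, y_old)

# New, previously unseen content: class-0 examples in a region the
# hand rule misfiles as class 1.
X_new = rng.normal([5, 1], 1, (200, 2))
y_new = np.zeros(200, dtype=int)

print("hand rule on new content:",
      np.mean([hand_rule(x) == 0 for x in X_new]))  # mostly wrong

# Adapting the net is a complete retraining on old + new data.
net.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))
print("retrained net on new content:", net.score(X_new, y_new))
```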
For some subset of the available problems, ML works very well indeed; for others it may be a marginal improvement, and in many cases ML is simply dragged into a project even though it has no place there. If you're in the first category: consider yourself very lucky and reap the benefits.
> For some subset of the available problems, ML works very well indeed; for others it may be a marginal improvement, and in many cases ML is simply dragged into a project even though it has no place there
How does one recognize which problems are well suited to ML? Are there any rules of thumb for (relative) laymen?