Welcome to the majority of "journalism". All it takes is a look one or two levels down into the "facts" presented to realize they're wildly and poorly extrapolated to fit a narrative by whoever is paying the bills.
Tuning a model to a statistical sample of the past doesn't give as much assurance about its predictive power as people think.
Then, only in the future, do we find that the model failed to predict. By that time we're told, "yeah, but we have a new model", to which the only appropriate answer is "yeah, but you had the same certainty about your old model, and it did not work".
I'm not saying that there's no point in modelling: you learn a lot about the dynamics of the system while working on the model; but that's not what the resultant model conveys.
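To make that concrete, here's a toy sketch of my own (not from the article, data invented): a flexible model tuned to past data can look excellent in-sample and still fall apart the moment you step outside the sample it was tuned on.

    import numpy as np

    rng = np.random.default_rng(0)

    # "The past": 30 noisy observations of some process on t in [0, 3].
    t_past = np.linspace(0, 3, 30)
    y_past = np.sin(t_past) + rng.normal(0, 0.05, t_past.size)

    # Tune a flexible model (degree-7 polynomial) to that past sample.
    coeffs = np.polyfit(t_past, y_past, deg=7)

    # In-sample error is tiny, which is the number that gets reported...
    in_sample_mse = np.mean((np.polyval(coeffs, t_past) - y_past) ** 2)

    # ...but extrapolating into "the future" (t in [3, 6]) goes badly wrong.
    t_future = np.linspace(3, 6, 30)
    y_future = np.sin(t_future)
    out_sample_mse = np.mean((np.polyval(coeffs, t_future) - y_future) ** 2)

    print(f"in-sample MSE:     {in_sample_mse:.4f}")   # small, near the noise level
    print(f"out-of-sample MSE: {out_sample_mse:.4f}")  # orders of magnitude larger

The point isn't that polynomials are bad; it's that the in-sample fit says very little about what happens once the data leaves the regime the model was tuned on.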
This is less insightful than you might think, as it's not limited to modeling. We just have a tendency to treat previous observations as truth for future endeavors, mainly because that usually works well enough.
You can see it in every second discussion about any topic that's currently being researched:
E.g. there is nothing guaranteeing we won't get a highly contagious virus with a 90%+ fatality rate. It could happen...
There is nothing guaranteeing that LLMs' ability to output relevant data will keep getting better, even though it has improved tremendously within the last year alone.
You can see this in action whenever someone makes a prediction. The likelihood of it coming true might be good enough to work with, but something can always go amiss, or a meteor made of some currently unknown material hits the sun and sets off a chain reaction that makes it go supernova...
It usually isn't. John Kay and Mervyn King wrote a good book on this issue (Radical Uncertainty). They sort events into three categories: deterministic predictions (say, where in the solar system will the Earth be in five years?), probabilistic predictions based on past data (how likely is it that the volcano will erupt in the next five years?), and the most important one, which they call 'radically uncertain' events.
Complex human events are almost always in the last category. Using the language of the first or second category to make predictions about dynamic systems that depend entirely on human intervention, and that have no clear relationship to the past, makes no sense. Using quantitative language in that case is actively misleading, because it creates the impression that you have some notion of the total space of possible events at all. "X has an 80% chance of going to war in Y years" is an example. What they really mean is "I believe it is likely that...", but that 80% number is completely made up.
Or as Frank Knight put it: “A measurable uncertainty, or ‘risk’ proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.”
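A rough way to see why the one-off "80%" is empty (my own sketch, all numbers made up): a stated probability only means something when it can be scored against a reference class of repeated, comparable forecasts, e.g. with a Brier score. A weather forecaster has such a class; the geopolitical pundit doesn't.

    def brier_score(forecasts, outcomes):
        """Mean squared gap between stated probabilities and what happened (0/1)."""
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # A weather forecaster can be scored: many comparable "rain tomorrow?" calls.
    rain_forecasts = [0.8, 0.7, 0.9, 0.2, 0.6, 0.1, 0.8, 0.3]  # hypothetical
    rain_outcomes  = [1,   1,   1,   0,   1,   0,   0,   0]    # hypothetical
    print(f"weather Brier score: {brier_score(rain_forecasts, rain_outcomes):.3f}")

    # The pundit's "80% chance of war" has a sample size of one and no agreed
    # event space, so there is nothing to average over; the number can't be
    # checked no matter what happens.
    print(f"pundit 'score': {brier_score([0.8], [0]):.3f}")

That's the difference between measurable risk and unmeasurable uncertainty in Knight's sense: the first can in principle be scored, the second can't.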
But modeling is always a matter of probabilities, with low confidence in getting it entirely right.
What you seem to have an issue with is how it's often reported on or used in politics... But in my experience, the people actually creating the models are always aware that the results never prove anything by themselves... And blaming the act of modeling for the actions of politicians and reporters is misguided, as they're just using whatever is convenient. If it wasn't a specific model, they'd find something else to validate whatever asinine bullshit they're peddling.