If you design something so that it has a 99.9999% chance of working for 5 years, it's going to work for much longer. It would be very hard to design it in a way that it didn't.
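To put rough numbers on that (this is only a back-of-the-envelope sketch assuming a constant failure rate, i.e. exponential lifetimes, which real hardware only approximately follows), a 99.9999% chance of surviving 5 years implies an absurdly low failure rate:

    # Back-of-the-envelope: what a 99.9999% 5-year survival target implies
    # under a constant-failure-rate (exponential) model. Real hardware also has
    # infant-mortality and wear-out effects that this ignores.
    import math

    p_survive = 0.999999   # design target: probability of surviving the mission
    mission_years = 5

    failure_rate = -math.log(p_survive) / mission_years   # failures per year
    mtbf_years = 1 / failure_rate                          # mean time to failure

    print(f"implied failure rate: {failure_rate:.2e} per year")
    print(f"implied mean time to failure: {mtbf_years:,.0f} years")
    print(f"P(still working after 50 years): {math.exp(-failure_rate * 50):.6f}")

Under that (admittedly crude) model the implied mean time to failure is on the order of millions of years, so of course the thing keeps working long past the 5-year mark.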
Overengineering is building in buffers that you didn't actually need, but it may be much later before anyone can prove it.
See also the Roman aqueducts. Today we would have used about half as much stone, and they'd be falling apart in our lifetimes. Instead, lucky chunks of them have lasted 20 times as long as anyone ever could have expected to need them.
Designing things such that they don't require/use steel reinforcement goes a long way towards having a (potentially) indefinite lifespan.
Reinforced concrete and masonry design are underappreciated disciplines of modern engineering, but their Achilles' heel is that reinforcement rusts, rust expands, and expansion ruptures, all at relatively rapid rates.
Things like the aqueducts weren't necessarily overengineered, they were just designed (mostly) without quickly deteriorating elements, like steel.
Which is to say, 2000 years ago, the design of an aqueduct with a 10-year lifespan didn't differ much from that of a hypothetical one with a 100-year or even 1000-year lifespan, at least compared to how things would be done today.
Much of space design seems to be similar, where the minimum requirements aren't that far off from what seems like excessive engineering. But that doesn't necessarily mean anything was "overengineered".
And even if you design everything so it has a 75% chance of working for 5 years, some of the things won't last 5 years, but you'll still only hear about and remember the ones that work for much longer.
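A toy simulation makes that survivorship effect concrete (again assuming memoryless exponential lifetimes, which is a simplification; the 75% figure and the fleet size are just illustrative):

    # Toy Monte-Carlo: a fleet designed for a 75% chance of lasting 5 years.
    # Assumes memoryless (exponential) lifetimes, which is a simplification.
    import math
    import random

    mission_years = 5
    p_survive = 0.75
    rate = -math.log(p_survive) / mission_years   # implied failure rate per year

    random.seed(0)
    lifetimes = [random.expovariate(rate) for _ in range(10_000)]
    survivors = [t for t in lifetimes if t >= mission_years]

    print(f"fraction surviving the mission:   {len(survivors) / len(lifetimes):.2%}")
    print(f"mean lifetime of survivors:       {sum(survivors) / len(survivors):.1f} years")
    print(f"mean lifetime of the whole fleet: {sum(lifetimes) / len(lifetimes):.1f} years")

Because the exponential is memoryless, the probes that do make it to 5 years go on to last roughly 22 years on average in this toy model, and those are the ones everyone remembers.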
But a large part of the cost is not just construction but testing and verification. Not only that it does what it needs to do, but that it survives launch without destroying itself, survives being in a vacuum, etc.
Most of that testing is specific to how each individual item was manufactured, so there's little cost saving if any to be had there.
Then there's the price of the launch, and the time on the radio dishes to follow them.
That's actually part of the thinking behind the "faster, better, cheaper" (FBC) policy of NASA in the late 1990s / early 2000s:
The intent of FBC was to decrease the amount of time and cost for each mission and to increase the number of missions and overall scientific results obtained on each mission
That was something of a mixed bag: numerous missions did succeed and returned phenomenal science, but there were also some spectacular and humiliating failures:
In 1999, after the failure of four missions that used the FBC approach for project management, you commissioned several independent reviews to examine FBC and mission failures, search for root causes, and recommend changes.
(Both quotes are from the transmittal letter for NASA's 2001 report on the policy, where they appear as consecutive sentences.)
It turns out that space is an unbelievably unforgiving environment, and attempting to perform repairs, maintenance, tune-ups, and/or mitigations at distances of hundreds of millions or billions of kilometers, often at the end of hours-long round-trip speed-of-light lags, is challenging at best.
At the same time, FBC mitigated risks, and some of the problem may well have been a failure to manage expectations: with FBC, some missions would succeed, whilst others would not. But even in that context, gambling losses on $150 million bets remain painful. (It's worth considering that there have since been numerous failures by other nations attempting various space missions; this isn't a failing of the US alone.)
It's also worth considering that earlier missions, notably Apollo & Skylab, suffered numerous critical incidents, one fatally catastrophic (and that on the ground), any one of which could have resulted in total mission loss: lightning strikes during launch, computer failures during the lunar landing (Apollo 11), a wiring-induced oxygen tank explosion (Apollo 13, resulting in the abort of the planned landing), and the failure of Skylab's solar panel and sunshield to deploy. People tend to remember the major incidents of Apollos 1 and 13, but not the numerous other close calls. The US Space Shuttle programme similarly had two catastrophic failures, but each occurred within the context of numerous other close calls. The envelope for both error and deviance is vanishingly thin.
Since the early 2000s, NASA have modulated their approach to FBC. Some missions, such as the JWST, are absolute monoliths and relied on extensive and expensive testing and development, which has paid off with absolutely flawless execution of launch and deployment and truly universe-expanding insights. Others, such as the Mars rover programs, have iterated on concepts, starting with small, cheap, and simple rovers of limited range and progressing to a "technology demonstrator" in the form of the Ingenuity helicopter which accompanies the SUV-sized Perseverance rover. The Huygens lander (part of the Saturn-bound Cassini mission, landing on the moon Titan) and the Galileo atmospheric probe (part of the Galileo orbiter mission) both rode along with orbiter missions and extended them to provide actual contact with planetary or lunar atmospheres and surfaces.
More on FBC:
"'Faster, better, and cheaper' at NASA: Lessons learned in managing and accepting risk"
On a real note, it is hard to do accidentally, but very much possible to do on purpose, so much so that it is currently a driving factor of our economies.