Currently software has to be built to accommodate leap seconds. They happen frequently enough that you'll find out within a few years whether your software breaks when time suddenly skips forward or backward.
If we kick the can down the road such that eventually we'll need to add a leap minute, we're going to end up with software that was never written to expect time to change in such a way, that hasn't had a real-world test of the change for decades, and that has no one working on it who ever had to deal with such a change.
It's going to be much worse for software reliability to have a leap minute on the order of once a century than a leap second every few years.
Why would we ever need a leap minute? Just let time zones drift. Unless you happen to be at the exact longitude for which your time zone is correct, the sun's not at its highest point in the sky for you at exactly noon anyway.
Eventually, thousands of years from now when time has drifted by an hour or more (assuming modern technological civilization even still exists by then), each jurisdiction can just change their time zone's offset from UTC, without coordinating with anyone else. Jurisdictions making time zone changes is a well-understood situation that we already know how to deal with.
The last part is right and is also why we have this problem.
Leap seconds are an architectural blunder that always belonged in the abstraction layer that lines up clock time with the sun and the rotation of the Earth (the time zone abstraction). They never belonged in the part that counts seconds.
You're right, but I'd argue this problem is already here.
Thanks to glaciers melting, Earth's rotation is (temporarily) accelerating. Because of that, positive leap seconds, which used to be regular, haven't happened since 2017 - so there could very well be (recent) software out there with that code path broken that nobody has noticed yet.
And due to the exact same geophysical effect we might see a negative leap second - something that has never happened before. What are the odds that every single piece of software gets that one right?
Interesting point. Just like a skater pulling their arms in, snow melting and running to lower ground would make the earth spin faster (and let's add in the extra erosion). Sea levels are rising though (due to both thermal expansion and runoff).
It'd be interesting to estimate the size of the various effects (no doubt I've missed plenty of others), but is it really true that a change in sign of the acceleration of Earth's angular velocity is down to climate change?
I don't think that's right. I would think the effect of glaciers melting on mountains would be negligible, and the effect of the poles melting and sending that water toward the equator would slow down rotation, like an ice skater pushing their arms out.
The rate of drift is so slow that people will never care.
Even over thousands of years when an hour of drift is accumulated there won’t be a manual adjustment - people will have just gotten used to different times of day having sunlight, with generations having been born and died with mean solar time happening at 11am.
Eventually the rotation of the earth may change enough that drift accumulates too quickly and leap time needs to be added, but that’s only going to be true thousands to tens of thousands of years in the future.
I've always thought that the best way would just have leap seconds every six months, and have it be ± 1 second no matter what - so sometimes you'd go back a second twice in a row, and then go forward a second to get back to correct.
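Something like this decision rule, as a toy Python sketch of that scheme (nothing standardized, just to make the idea concrete):

    def scheduled_leap(utc_minus_ut1_seconds: float) -> int:
        """Fixed-cadence scheme: every six months apply exactly +1 or -1 second,
        whichever nudges UTC back toward Earth-rotation time (UT1).
        Both code paths fire regularly, so neither can silently rot."""
        return -1 if utc_minus_ut1_seconds > 0 else +1

    print(scheduled_leap(0.3))   # -1 (UTC crept ahead, drop a second)
    print(scheduled_leap(-0.2))  # 1 (UTC fell behind, insert a second)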
People expect e.g. their "daily power consumption" to end at midnight local time. When they zoom in on the daily aggregates, they don't want a spike to suddenly disappear just because it belonged to the next local day and you only stored UTC-day aggregates. Even more so for billing.
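Concretely, that means aggregating by the local calendar day rather than the UTC day. A minimal Python sketch with made-up readings and Europe/Berlin as the assumed billing zone:

    from collections import defaultdict
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # Hypothetical meter readings stored as (UTC timestamp, kWh).
    readings = [
        (datetime(2024, 3, 1, 23, 30, tzinfo=timezone.utc), 1.25),  # already March 2nd locally
        (datetime(2024, 3, 2, 6, 0, tzinfo=timezone.utc), 0.5),
    ]

    local = ZoneInfo("Europe/Berlin")
    per_day = defaultdict(float)
    for ts, kwh in readings:
        # Bucket by the *local* date so the late-evening spike lands on the
        # day the customer expects to see it on their bill.
        per_day[ts.astimezone(local).date()] += kwh

    print(dict(per_day))  # {datetime.date(2024, 3, 2): 1.75}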
The problem is that if the originating system didn't record the time in UTC, you've lost the information needed to make the time unambiguous. What's 2022-11-06 01:30 in San Francisco (America/Los_Angeles) in UTC?
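That timestamp is the classic fall-back case: clocks went from 02:00 back to 01:00 that night, so 01:30 happened twice and maps to two different UTC instants. A quick Python sketch using the IANA zone:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("America/Los_Angeles")

    # fold=0 is the first 01:30 (still PDT, UTC-7); fold=1 is the repeat (PST, UTC-8).
    first = datetime(2022, 11, 6, 1, 30, tzinfo=tz, fold=0)
    second = datetime(2022, 11, 6, 1, 30, tzinfo=tz, fold=1)

    print(first.astimezone(timezone.utc))   # 2022-11-06 08:30:00+00:00
    print(second.astimezone(timezone.utc))  # 2022-11-06 09:30:00+00:00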
From 2035 it's going to be less accurate in exactly the same way. I'd argue bombcar's method is more robust.
I don't know of anything that actually requires UTC to be within 1 second of average rotation-based time; having it within 2 or 3 seconds is extremely unlikely to actually break anything. But we do generally want to have measured time roughly in line with Earth time in the medium to long term.
Today it is, I think, within ±3 hours of Earth time (China being the worst offender), because of time zones and DST. If it ever gets worse than half an hour in a country, they can always change the time zone... or just, like, change businesses' opening hours and stuff.
How about clocks on Mars? Should these have Earth leap seconds? Time synchronization is already hard enough; complicating it with adjustments based on the uneven movements of planetary bodies doesn't make much sense in the long term.
In the longer term Mars should probably have its own time scale. When we get to further colonization maybe we can use TAI for interplanetary use (but time dilation due to relativity effects is going to be a problem when going to other stars) while still using UTC on Earth.
Henceforth every minute will be "bell curve distribution around 60" seconds long, and each hour will be "bell curve distribution around 60" minutes long.
This variability will provide excellent fuzzing of all time libraries everywhere.
Odd. I was wondering how fast it actually was long term, and this rate from the historical record seems much lower. They cite 1.8 ms/century, if I'm reading it correctly, with some odd cyclical thing going on: "the change in the length of the mean solar day (LOD) increases at an average rate of +1.8 ms per century."
I mean, we've added 22 seconds over 50 years. Although at the current rate it would still be just about 7 minutes after a millennium :)
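The two numbers aren't really in conflict: +1.8 ms/century is how fast the length of day itself grows, while leap seconds track the accumulated excess, which therefore piles up roughly quadratically. Back-of-the-envelope from the figures in this thread:

    # 22 leap seconds over 50 years implies the mean day has recently been
    # about 1.2 ms longer than 86400 SI seconds; it's that excess (not the
    # total offset) that grows by roughly +1.8 ms per century.
    excess_ms_per_day = 22 / 50 / 365.25 * 1000
    print(f"{excess_ms_per_day:.1f} ms/day")  # ~1.2 ms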
In an enterprise environment, every leap second or TZ change invokes a ceremonious update of anything using Java or a JVM. I suspect a lot of people would rather perform a once-a-century update than the twice-yearly process they do now.
The thing is that there will be entire Java-like ecosystems with lifespans shorter than the time between leaps. So practically all of them will have no facility for a leap at all.
Indeed; the converse would be to add (or remove) micro-leapseconds in order to keep accurate time. That way it would happen so often that the code would get tested and we'd have working implementations (maybe we could fix the traditional Apple issues with leap years at the same time).
It's going to be centuries until the difference between astronomical midnight and UTC midnight is more than an hour, which is exactly the amount of error we deliberately inflict on ourselves to have more daylight on summer evenings, and half the error that residents of Spain experience as a consequence of the political decision to join the same time zone as Berlin. If human civilization exists long enough for this to be an actual problem, we're probably going to have to figure out how to coordinate time with multiple space settlements, which is going to be a much harder problem thanks to time dilation.
Why bother? A calendar year is 365.242 days long... time is already off by about 3/4 of a day the year before a leap year, so what does adding a second here or there really do?
Time is not off by about 3/4 of a day; the calendar is. And hardly anyone cares about the planet's position relative to the Sun (along its orbit). I mean, they can, if it's important for their occult rituals or something, but they've probably learned to work with it over the centuries already.
Earth's rotation relative to the Sun is a whole other deal.
clocks get used for two distinct purposes, often at odds:
- the measurement of durations.
- the presentation of some timestamp in a way that the reader has some intuition for.
that first purpose won’t be hurt by not tracking leap seconds. actually, a lot of applications will probably more accurately measure durations by eliminating leap seconds.
if leap seconds (or minutes) really are of critical importance, we'll reintroduce them to the presentation layer. the thing is, very few people can tell the difference between 12:01 and 12:02 without being told the "real" time. so if you're presenting a time which is "off" by a minute because there are no leap seconds… does it really matter?
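in code that split usually looks like "monotonic clock for durations, wall clock only for display"; a small Python sketch:

    import time
    from datetime import datetime, timezone

    # Purpose 1: measuring a duration. The monotonic clock never jumps, so leap
    # seconds and NTP adjustments can't corrupt the measurement.
    start = time.monotonic()
    time.sleep(0.1)                     # stand-in for the work being timed
    elapsed = time.monotonic() - start

    # Purpose 2: presenting a timestamp. This is display-only; whether it is
    # "off" from Earth-rotation time by a second is invisible to the reader.
    now = datetime.now(timezone.utc)
    print(f"finished at {now:%Y-%m-%d %H:%M} UTC after {elapsed:.3f}s")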
There should really be three "layers" of time indirection.
1) Seconds since 00:00:00 UTC on 1.1.1970. This value increases by 1 each atomic second and never jumps forward or backward. Call this Universal Monotonic Time.
2) The difference between when the sun is at its zenith at Greenwich and 12:00 UMT. Call this the Astronomic Drift.
3) The time zone: the offset from Greenwich that makes the local clock sync up with astronomic time, plus a DST offset if that location observes DST on that date.
By adding up 1) + 2) + 3) you end up with the "human time" at a given location at a given date.
A computer system should only ever store 1). Then, it can calculate human time when displaying it to humans.
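As a rough Python sketch of those three layers (the drift value here is made up; in practice it would come from a published Earth-rotation table such as UT1-UTC):

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    def human_time(umt_seconds: float, astronomic_drift_s: float, tz_name: str) -> datetime:
        """Layer 1 + 2 + 3: monotonic atomic seconds, plus the Earth-vs-atomic
        drift, rendered in a local zone (which also carries the DST rule)."""
        # Layer 1: Universal Monotonic Time -- seconds since the 1970 epoch, never adjusted.
        base = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=umt_seconds)
        # Layer 2: Astronomic Drift -- accumulated offset between atomic time and
        # sun-over-Greenwich time.
        adjusted = base + timedelta(seconds=astronomic_drift_s)
        # Layer 3: the time zone offset (including DST, if observed on that date).
        return adjusted.astimezone(ZoneInfo(tz_name))

    print(human_time(1_700_000_000, -0.02, "Europe/Berlin"))  # 2023-11-14 23:13:19.980000+01:00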
I'm also a fan of having a "local sun time", which would be the time according to the position of the sun in the sky, quantised to 15-minute slices (basically micro-timezones). It would be nice if office hours, school times, &c could be defined based on that, e.g. work starts at 9am local sun time, which would sync up better with people's biological clocks and cut down on the yearly stress DST causes.
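A toy version of that quantisation in Python (the longitudes are approximate and the whole scheme is hypothetical):

    def local_sun_offset_minutes(longitude_deg: float) -> int:
        """Offset of 'local sun time' from Greenwich, in minutes, quantised to
        15-minute slices. The Earth turns 15 degrees per hour, so each
        3.75-degree band of longitude is one slice (a micro-timezone)."""
        minutes = longitude_deg * 4          # 4 minutes of solar time per degree
        return round(minutes / 15) * 15      # snap to the nearest slice

    print(local_sun_offset_minutes(13.4))    # Berlin: 60 (an hour ahead of Greenwich sun)
    print(local_sun_offset_minutes(-122.4))  # San Francisco: -495 (8 h 15 min behind)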
I agree with your separation between "duration" and "absolute point in time". But it doesn't solve the issue, because durations are often computed as the difference between two absolute points in time. You could get over this on your local machine with a local counter, but across network boundaries you need to rely on absolute differences.
> But it doesn't solve the issue, because durations are often computed as the difference between two absolute points in time.
other way around. e.g. unix time is determined by the number of seconds (as experienced by some point fixed to Earth, roughly) relative to some reference point. duration is the native format for most time systems, like UTC, because “absolute time” isn’t a thing that can be measured. faking the duration (e.g. adding leap seconds) within a time system is sort of nonsensical: we only do it to make translation across time systems (UTC, UT1, local/human timezones) simple. if that’s the justification for things like leap seconds, then better to keep the time system itself in its most native format (assuming that format is useful locally) and explicitly do those conversions only when translating, i.e. at the network boundary: when communicating with a system that doesn’t share/understand our time system(s).