I have seen many people downplaying the complexity of a datetime library. "Just use UTC/Unix time as an internal representation", "just represent duration as nanoseconds", "just use offset instead of timezones", and on and on

For anyone having that thought, try reading through the design document of Jiff (https://github.com/BurntSushi/jiff/blob/master/DESIGN.md), which, like all things burntsushi does, is excellent and extensive. Another good read is the comparison with (mainly) chrono, the de facto standard datetime library in Rust: https://docs.rs/jiff/latest/jiff/_documentation/comparison/i...

Things like DST arithmetic (that works across ser/de!), roundable durations, timezone-aware calendar arithmetic, retrospective timezone conflict detection (!), etc. all contribute to making the library correct, capable, and pleasant to use. In my experience, chrono is a very comprehensive and "correct" library, but it is also rigid and not very easy to use.
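
For a taste of what "DST arithmetic that works across ser/de" means, here's a minimal sketch (untested, assuming jiff's documented `Zoned` parsing and its `ToSpan` helpers like `1.day()`):

    use jiff::{ToSpan, Zoned};

    fn main() -> Result<(), jiff::Error> {
        // The evening before the US spring-forward transition (2024-03-10).
        let zdt: Zoned = "2024-03-09T17:00[America/New_York]".parse()?;

        // Round-trip through a string: the RFC 9557 form carries the time
        // zone itself, not just a fixed offset, so arithmetic after
        // deserializing is still DST-aware.
        let roundtripped: Zoned = zdt.to_string().parse()?;

        // "One day later" is 17:00 the next civil day, even though only
        // 23 real hours elapse across the DST gap.
        let next_day = roundtripped.checked_add(1.day())?;
        assert_eq!(
            next_day.to_string(),
            "2024-03-10T17:00:00-04:00[America/New_York]",
        );
        Ok(())
    }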



I love burntsushi's ripgrep and certainly use it all the time, calling it directly from my beloved Emacs. I was already using ripgrep years before Debian shipped rg natively.

I was also using JodaTime back when some people still thought Eclipse was better than IntelliJ IDEA.

But there's nothing in that document that contradicts: "just represent duration as nanoseconds".

Users need to see timezones and the correct hour depending on DST, sure. Programs typically do not. Unless you're working on stuff specifically dealing with different timezones, it's usually a very safe bet to "represent duration as milliseconds/nanoseconds".

That humans have invented timezones and DST won't change the physics of a CPU's internal clock ticking x billion times per second.

Just look at, say, the kernel of an OS that didn't crash on half the planet a few days ago: there are plenty of timeouts in code expressed as milliseconds.

Reading your comment could be misinterpreted as: "We'll allow a 30-second cooldown, so let's take the current time in the current timezone, add 30 seconds to that, save the result as a string complete with its timezone, DST, and 12/24-hour representation, and while we're at it maybe add extra code logic to check whether there's going to be a leap second, to make sure we don't wait 29 or 31 seconds, then let the cooldown happen at the 'correct' time". Or you could, you know, just use a freakin' 30-second timeout/cooldown expressed in milliseconds (without caring whether a leap second happened, btw, because we don't care if it actually happens after 29 seconds as seen by the user).
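
Concretely, that whole use case is covered by the standard library; a sketch using std's monotonic clock (nothing Jiff-related assumed):

    use std::thread;
    use std::time::{Duration, Instant};

    fn main() {
        // A 30-second cooldown needs no calendar, no timezone, no DST:
        // a monotonic clock plus a plain Duration is exactly right here.
        let cooldown = Duration::from_secs(30);
        let started = Instant::now();
        thread::sleep(cooldown);
        assert!(started.elapsed() >= cooldown);
    }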


I'm not sure what the issue is here exactly, but there are surely use cases where a `std::time::SystemTime` (which you can think of as a Unix timestamp) is plenty sufficient. ripgrep, for example, uses `SystemTime`. But it has never used a datetime library. Just because Jiff exists doesn't all of a sudden mean you can't use `SystemTime`.
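For instance (a trivial sketch, standard library only):

    use std::time::{SystemTime, UNIX_EPOCH};

    fn main() {
        // A SystemTime is effectively a Unix timestamp: for "when was
        // this file modified?"-style questions (e.g., sorting results
        // by modification time), no datetime library is needed.
        let unix_secs = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock set before 1970")
            .as_secs();
        println!("seconds since the Unix epoch: {unix_secs}");
    }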

But there's a whole world above and beyond timestamps.


Of course you don't need a calendar library to measure 30 seconds. That's not the use case.

Try adding one year to a timestamp because you're tracking someone's birthday. Or adding one week because you're running a backup schedule.
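
A sketch of the difference (untested, assuming jiff's `ToSpan` helpers like `1.year()`):

    use jiff::{civil::date, ToSpan, Zoned};

    fn main() -> Result<(), jiff::Error> {
        let birthday: Zoned = "1996-06-15T09:00[America/New_York]".parse()?;

        // "One year later" is a calendar question, not a number of
        // seconds: 365 * 24 * 3600 is wrong on leap years, and the
        // error compounds every year you apply it.
        let next = birthday.checked_add(1.year())?;
        assert_eq!(next.date(), date(1997, 6, 15));

        // Likewise "one week" for a backup schedule is a calendar week,
        // not 7 * 86400 seconds, which drifts by an hour whenever a DST
        // transition falls inside the week.
        let next_backup = birthday.checked_add(1.week())?;
        assert_eq!(next_backup.date(), date(1996, 6, 22));
        Ok(())
    }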


Unless you're just using time information to implement a stopwatch in your program, anything you do with time will eventually have to deal with timezones, and DST, and leap seconds, and tons of other intricacies.

Even something as simple as scheduling a periodic batch process.
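
To make that concrete, a minimal sketch (untested, assuming jiff's `ToSpan` helpers):

    use jiff::{ToSpan, Zoned};

    fn main() -> Result<(), jiff::Error> {
        // A "daily at 09:00" job, the day before US DST starts.
        let run: Zoned = "2024-03-09T09:00[America/New_York]".parse()?;

        // A calendar day and 24 absolute hours disagree across the DST
        // gap: the first keeps the job at 09:00 local, the second
        // drifts it to 10:00.
        let next_by_day = run.checked_add(1.day())?;      // 2024-03-10 09:00 EDT
        let next_by_hours = run.checked_add(24.hours())?; // 2024-03-10 10:00 EDT
        assert_ne!(next_by_day, next_by_hours);
        Ok(())
    }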


> That humans have invented timezones and DST won't change the physics of a CPU's internal clock ticking x billion times per second.

Increasingly we are programming in distributed systems. A millisecond or nanosecond on one node is not the same millisecond or nanosecond on another node, and that physics is even more inviolable.


In which case, does being off by a few milliseconds actually matter that much in any significant number of those distributed systems? No precision is exact, so near enough should generally be near enough for most things.

It may matter in some cases, but as soon as you add network latency there will be variance regardless of the tool you use to correct for it.


Important for some consistency algorithms, for example Google Spanner. (Not necessarily advocating for those algorithms.)


Calling sleep for 30 seconds really doesn’t have anything to do with dates or time of day.


> (that works across ser/de!)

Ugh, I can't believe it took me this long to realize why the Serde crate is named that!


Once you've gathered yourself, allow me to blow your mind as to where "codec" and "modem" come from. :P


The abbreviation is also used by EE folks, e.g. SerDes [0]. The capitalization makes it a bit more obvious.

[0] https://en.wikipedia.org/wiki/SerDes


Thank you for pointing me towards the design document. It's well written, and I missed it on my first pass through the repository. I genuinely found it answered a lot of my questions.


If someone wants an entertaining and approachable dive into the insanity that is datetime, Kip Cole did a great talk at ElixirConf in 2022: https://www.youtube.com/watch?v=4VfPvCI901c


To add to this: "A Date with Perl" by David Rolsky (the video below is from 2017, but he has been giving the same talk for 10+ years).

https://youtu.be/enr5_FoToiA


>I have seen many people downplaying the complexity of a datetime library. "Just use UTC/Unix time as an internal representation", "just represent duration as nanoseconds", "just use offset instead of timezones", and on and on

Anyone who thinks datetimes are easy should not be allowed near any scheduling or date-processing code!


Recently I came across "Theory and pragmatics of the tz code and data" (*) and really enjoyed it as a primer on how timezone names are picked and how tz reached its current form.

(*) https://ftp.iana.org/tz/tzdb-2022b/theory.html


Java had a pretty comprehensive rewrite of its own time-handling library, and it was much needed. Time is hard because time zones are not engineering; they are political and arbitrary.

So yeah, keeping things in Unix time is great if all you're doing is reading back timestamps for when an event occurred, but the moment you have to schedule things for humans, everything is on fire.


Didn't they just incorporate JodaTime? I thought the changes were even made by the JodaTime developer.


Not exactly. It was heavily inspired by Joda Time, but it also improved on the design in a lot of ways. You could think of it as what Joda Time would be if its designer could go back in time and design it again with the benefit of hindsight.


>I have seen many people downplaying the complexity of a datetime library.

Where? Maybe people downplay storing dates, but not writing a library.



