(And also why a lot of people thought the Y2K bug was over-hyped: there was a lot of background work that fixed the problems, so few people noticed anything when roll-over time actually came.)
Yes, lots and lots of background work went on. My grandfather made a nice chunk of cash from being able to work with some near extinct programming languages and assembly variants on obsolete machines.
But: the hype train wasn't so much focused on glitches in banks and insurance companies as on catastrophic failures in missile control software etc. and in embedded systems that often don't even have any concept of a date.
"The Moscow rollover was the big one. The Russian military’s highly centralized command-and-control system meant that anything truly catastrophic would occur in Moscow first, then radiate outward through linked computer systems or trigger human errors farther afield. Among the Americans’ greatest fears was that a Russian missile commander might receive incorrect early-warning information from a Y2K-affected radar system; this could inspire needless retaliation."
I love how Unix concepts have been around so long that their initial representations and assumptions about time will soon break. I wonder if the engineers at the time thought that programmers in the future would run into such issues. Perhaps 64-bit time will also cause some headaches in the far future!
I was also trying to think of other computing implementations/assumptions that have shown their age. We've seen a decrease in support for 32-bit CPUs, we ran out of addresses in IPv4, much security became obsolete - any others that come to people's minds?
I wonder how long we will drag these old codebases along. I always liked Vernor Vinge's concept of a programmer archeologist thousands of years in the future having to build up arcane knowledge about millennia-old code to get things done.
Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
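That 0-second is the Unix epoch, 1970-01-01 00:00:00 UTC, and the 2038 edge this thread keeps circling back to falls straight out of it. A minimal C sketch, assuming a POSIX-ish libc (the INT32_MAX cast just stands in for the old 32-bit time_t):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* Largest value a signed 32-bit time_t can hold: 2^31 - 1 seconds
           counted from 1970-01-01 00:00:00 UTC. */
        time_t t = (time_t)INT32_MAX;

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("signed 32-bit time_t ends at: %s\n", buf);
        /* Prints 2038-01-19 03:14:07 UTC; one second later a 32-bit
           counter wraps to -2^31, which naive code renders as 1901. */
        return 0;
    }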
Possibly at the time. Definitely, a mere 20 years in. I for one wrote a 32-bit C++ standard library in the early 1990s, and I used a 64-bit time_t. It ran on top of 32-bit OS/2, which was already keeping time internally using a 64-bit integer. I published two toolkits of command-line utilities, including a replacement DATE command, that used it.
I wasn't alone in this, by any means. Other library writers were implementing this.
Solaris went 64-bit in the 1990s, as did IRIX. Windows NT had a 64-bit (albeit different) time format from the 32-bit start. TAI64NA was invented in 1997.
AIX was late to the game for sure, but I don't think it was quite that late.
It was more like AIX 4.3 that added the ability to run 64-bit code on a 32-bit OS. As part of that effort all the syscalls were defined twice: one set for older 32-bit-only applications and another for 64-bit applications running on the 32-bit OS. By AIX 5 there was a native 64-bit kernel as well. Pretty sure that between those events the header files were tweaked so that all newly built applications used the 64-bit time_t calls unless a compatibility flag was defined.
It was 5.3 that gained the 64-bit time API. Prior to that, there was only 32-bit time_t, even on 64-bit AIX. See Redbook SG247463 section 5.19 ("Date APIs past 2038").
Only in the Intel Architecture and only then if one is bootstrapping in the old way. It is quite possible for a machine with EFI firmware, and no need of compatibility support, to go straight from the initial unreal mode to protected mode, never entering real mode.
> I wonder if the engineers at the time thought that programmers in the future will run into such issues.
Given that Unix was developed at Bell Labs in the late '60s, I'd say the thought of Unix being a thing in 2038 never would have occurred to them. Even in the '80s they petered out, with Unix research stopping at v10, and developed Plan 9, which still has a 2038 bug we need to work out.
Presumably because it's running a 32-bit Linux kernel? That is a problem too (it hit me recently with an MD device), but the solution has been to switch to a 64-bit kernel.
In the future, we may need to scale up the time resolution - for example, we may need resolution down to 1 yoctosecond, because everything is faster. 64 bits won't be enough, so we will have that problem for sure.
A yoctosecond is 1×10^-24 seconds, or there are 1×10^24 yoctoseconds in a second.
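To put numbers on that: a 64-bit counter ticking in yoctoseconds covers only a few tens of microseconds, while the same width ticking in nanoseconds already spans centuries. A quick back-of-the-envelope in C (the tick sizes are illustrative assumptions, not anything a real API uses):

    #include <stdio.h>

    int main(void) {
        /* Range of a 64-bit unsigned counter at different tick sizes. */
        double max64 = 18446744073709551616.0;   /* 2^64 ticks */

        double span_ys = max64 / 1e24;           /* seconds, at 1 yoctosecond per tick */
        double span_ns = max64 / 1e9;            /* seconds, at 1 nanosecond per tick  */

        printf("1 ys per tick: %.1f microseconds of range\n", span_ys * 1e6);
        printf("1 ns per tick: %.0f years of range\n", span_ns / (365.25 * 24 * 3600));
        return 0;
    }

So a flat 64-bit counter and yoctosecond resolution really don't fit together; you'd need a wider field or a split seconds/fraction representation.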
Of course, things will change in the future but I don’t see processors getting any faster than 5ish GHz without some breakthrough in technology? What applications/hardware might first be able to go beyond a septillionth of a second?
LOL. Of course it's possible to make a faster processor than 30GHz. Even 1THz is possible with modern tech, but the architecture will be completely different: millions of super-simple, super fast single-atom processors connected via a photon-based, internet-like network. Our brain is much more powerful than a current processor while using much weaker tech, so it's doable.
If propagation delay is the concern (talking about high frequency) then electrons over copper do a better job than photons over fiber. (grain of salt, future dedicated photonic interconnect may have better performance).
> Our brain is much more powerful than a current processor
Again, if we're talking about frequency (30GHz, 1THz) the brain is faster due to the way it's organized not because it operates at high frequencies.
> Even 1THz is possible with modern tech [...] super fast single-atom processors
Not in any meaningful way and we certainly aren't anywhere near "single-atom processors".
Perhaps your comment meant to say that sometime in the future we won't need high frequency because we'll change the way we design processors, not that we can build 1THz CPUs because our brain is more powerful than they are.
It's possible to do a lot with "modern technologies", even cure cancer or solve world hunger etc., we just either haven't actually found a way to do some of those things, or at least not a feasible, useful way. This renders the statement a bit meaningless. We've had THz transistors for a decade now. There are dozens of reasons you don't see them in general purpose CPUs though.
> Again, if we're talking about frequency (30GHz, 1THz) the brain is faster due to the way it's organized not because it operates at high frequencies.
Brain frequency is measured in single digit Hertz. Yep, it's much faster at many tasks due to the way it's organized.
> Not in any meaningful way and we certainly aren't anywhere near "single-atom processors".
We are, but it doesn't look like a processor at all. Imagine that you have just two operations: match and propagate (copy). If the input matches a predefined pattern, the signal propagates to the next stage. If not, the signal is lost. Such a simple architecture can perform complex calculations with massive inputs and a massive depth of processing area. If it's implemented as an optical flow, it can process data at the speed of light.
>> we just either haven't actually found a way to do some of those things, or at least not in a feasible, useful way.
We have a lot of theoretical concepts, we might even have the technology to build something like this. But we just haven't put it all together, and building this into a useful product is far into the future. My cold fusion powered portable true quantum computer says so. :)
Love the 2 articles (at least one thoroughly discussed on HN), hate that you manage to bundle so many fallacies together (a strawman here, a loaded question there, etc.). You're either refuting an argument I never made, or showing "evidence" that doesn't support the claim, or asking a question where every answer will make it look like I agree with your point. OP said that:
> it's possible to make a faster processor than 30GHz. Even 1THz is possible with modern tech [...] millions of super fast single-atom processors
This is the statement I contradicted and this is the statement you may want to refer to. The first time, you defended it with a "yes, but", yet after two attempts you still haven't provided actual examples of either. We already agreed that even things like the brain can be pretty powerful and yet completely different from how we do artificial processing these days. But that was not the point.
We've had such concepts already working for years with sieve analysis [0], where stacks of sieves can give you the size and sometimes even the shape of objects just by placing them in. Or analog computing that solves the travelling salesman problem faster than any digital computer can today [1]. But we obviously can't build anything useful with it that can surpass what we already use now.
Today we cannot build any useful or feasible 30GHz (let alone 1THz) or single-atom CPUs. We have stuff that's "promising", "indicates the possibility", and "on paper". As I said, we may even have the tech to do it but haven't connected all the dots yet, which still means we can't do it today. At least not without moving the goalposts so much that the discussion stops making sense.
Wouldn't only specialised applications need to record ultra-fine time resolution? My guess would be that those applications would use a more appropriate (larger) data format than the current timestamp, and the other applications would just use the standard 1-second-resolution timestamp.
Leap seconds are a big hassle. It's not even clear that they are worth bothering with individually. (We could wait until we have a whole minute worth of them before applying any.)
This is actually an interesting case study in different systems and how they deal with these things. OpenBSD has a smaller community of developers, an overwhelmingly strong focus on correct code and a willingness to break things if it helps. Linux, on the other hand, has probably orders of magnitude more developers, but is obsessed with backwards (ABI) compatibility and has a harder time coordinating changes.
Timestamp automatically handles the time zone; datetime does not.
They are not interchangeable.
One is used to record an instant in the world, the other to record a specific value to be displayed back unmodified, regardless of changes in time zone.
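For what it's worth, the same split exists in plain C: a time_t names an instant, and the zone only matters when you render it, while a broken-down struct tm can carry a wall-clock reading you echo back untouched. A rough sketch, assuming a POSIX environment (the 2038-01-19 03:14:07 value is just a convenient example):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* An instant: one point on the global timeline, seconds since the
           epoch. How it reads depends on the zone you render it in. */
        time_t instant = time(NULL);

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&instant));
        printf("instant, rendered in the local zone: %s\n", buf);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&instant));
        printf("same instant, rendered in UTC:       %s\n", buf);

        /* A "datetime"-style value: a wall-clock reading stored as entered,
           meant to be shown back without any zone conversion. */
        struct tm wall = {0};
        wall.tm_year = 2038 - 1900;   /* struct tm counts years from 1900 */
        wall.tm_mon  = 0;             /* January */
        wall.tm_mday = 19;
        wall.tm_hour = 3; wall.tm_min = 14; wall.tm_sec = 7;
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &wall);
        printf("wall-clock value, echoed as entered: %s\n", buf);
        return 0;
    }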
Meanwhile, the unsigned 32-bit count of seconds (also) since 1970 used in pcap output from tcpdump, and used universally throughout the financial industry with no hint of a move away from it, will not roll over until 2106. If they are still in use then, code will interpret small values as implying a time after 2106, rather than as an ancient historical time before 2000.
It is hard to know why anybody thought it so urgent to go to 64-bit seconds counters for internal use in the kernel.
> Meanwhile, the unsigned 32-bit count of seconds (also) since 1970 used in pcap output from tcpdump, and used universally throughout the financial industry with no hint of a move away from it, will not roll over until 2106. If they are still in use then, code will interpret small values as implying a time after 2106, rather than as an ancient historical time before 2000.
The format would need to change before 2106, because I'm pretty sure that the time is stored as a big-endian encoded value -- expanding it requires changing the format.
There really is nothing revolutionary about using an unsigned value, it just delays the problem by 80 years. Yeah, it's not something we'll have to worry about but our (great-)grandkids will.
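The 2106 figure is easy to confirm; a small C sketch, assuming the host itself has a 64-bit time_t so the cast doesn't wrap:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* An unsigned 32-bit seconds-since-1970 field tops out at 2^32 - 1. */
        time_t t = (time_t)UINT32_MAX;

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("unsigned 32-bit rollover: %s\n", buf);
        /* Prints 2106-02-07 06:28:15 UTC - roughly 68 extra years
           compared with the signed 2038 limit. */
        return 0;
    }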
> It is hard to know why anybody thought it so urgent to go to 64-bit seconds counters for internal use in the kernel.
There are many digital infrastructure projects which are planning to ship today's Linux for the next few decades (think things like traffic lights, as is the case in Japan). Fixing the 2038 problem now is necessary, because those deployments won't be updated until it's too late.
But more importantly, the UAPI used to represent time as a 32-bit integer so it's necessary to replace those syscalls with new versions -- and you need to give enough time for userspace to migrate. The same problem of deployed software equally applies here. And if you're going to fix the UAPI you might as well fix the problem entirely.
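On the userspace side, if I remember right, glibc 2.34+ exposes the 64-bit time ABI on 32-bit targets behind a feature-macro pair, so checking what a given build actually gets is a one-liner (the file name below is just an example; other libcs handle this differently):

    /* Build twice on a 32-bit target, e.g.:
         cc check_time.c                                        -> legacy 32-bit time_t
         cc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check_time.c -> 64-bit time_t
       (the macro pair is a glibc >= 2.34 feature) */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
        return 0;
    }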
The pcap-ng format uses 64-bit timestamps and has replaced pcap in Wireshark/dumpcap at least. It sounds like tcpdump supports reading but not writing pcap-ng.
There's at least a couple of reasons that doesn't apply.
Software projects don't develop linearly. 20 years ago the Linux kernel was 8 years old, today it's 28 years old. The motivation to upgrade the relatively mature 2020 kernel will be much less than that to upgrade the immature 2000 kernel.
And perhaps the biggest one - computers have changed. 20 years ago a PC that could run Linux was a big power hungry thing. Today it's the size of a box of matches. Embedded Linux PCs with limited upgrade paths are everywhere today.
When I worked on satellites (stopped in 2015) we'd routinely use electronic test equipment which was running an embedded version of Windows 98. Given that the stuff shipping today is even further down the maturity curve, I think it's almost certain that it'll be used in 20 years (if it still works).
People aren't really concerned about the security of their non-internet connected HVAC controller or elevator. If it's in the wall people don't really think about it or update it.
And in cases where it doesn't have any communication mechanisms with the outside world, telling it it's 1990 again in 20 years isn't going to be a problem.
Yah, the time on the thermostat will be wrong, but if no one has upgraded it in that long, it's probably OK - put a piece of tape over it.
Mostly embedded, industrial systems, medical devices, set-top boxes and home routers. You connect to the serial port or gain access by other means and most of the time you are greeted by ancient versions. It gives me the chills.
> It is hard to know why anybody thought it so urgent to go to 64-bit seconds counters for internal use in the kernel.
I would like to know if there's a better explanation, but at least one reason is: time_t is, per the C standard, supposed to be able to represent real time - this may include past and future. You definitely want it 64b on the userland side. But once you do that, you either need to do careful conversion everywhere at the kernel/user boundary, or use 64b in both.
In some circumstances a rolling window will work. But for generic timekeeping, the files and records we have now are not going away. You can use unsigned numbers to delay the problem, but if you change the interpretation of small values you are going to break many many things.
This attitude blows my mind. There have been comments on HN, and for that matter by people I actually know, that covid isn't a problem because only a few people have it (I heard that less than two weeks ago, just one day before the UK lockdown started). It's as if they cannot see the future and they literally have to crash into something before they acknowledge it exists. That kind of blindness will kill a lot of people. I just don't understand it.