The article confuses Unix time, which is an integer that increments every second, with its UTC representation. That representation is just an interpretation of the number. In theory the integer could be decremented, but this has never happened. In practice what happens is that we add leap seconds, which means we allow the UTC interpretation of the incrementing integer to contain an extra second (23:59:60), not that we move the integer forward by two seconds.
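To make the distinction concrete, here is a quick sketch in Python: the integer is the Unix time, and the UTC string is computed from it on demand.

```python
import datetime

# Unix time is just an integer: seconds since 1970-01-01T00:00:00Z,
# not counting leap seconds. The human-readable UTC string is an
# interpretation derived from that integer whenever you ask for it.
t = 1_700_000_000
utc = datetime.datetime.fromtimestamp(t, tz=datetime.timezone.utc)
print(t)                # 1700000000
print(utc.isoformat())  # 2023-11-14T22:13:20+00:00
```

Nothing about the string is stored anywhere; change the interpretation rules (e.g. a time zone, or leap-second handling) and the same integer renders differently.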
The reason this is not a problem is that most hardware clocks are pretty awful to begin with and need frequent automated corrections (through NTP). Those corrections do in fact cause the integer to be stepped forward or backward locally, and far more often than once every few years. This too is mostly a non-issue: for practical purposes time moves forward, and when you access time through one of the many high-level APIs you get a fresh interpretation of the system clock's integer. A much bigger problem is Y2038, when a signed 32-bit version of that integer overflows. I believe work is underway in the Linux kernel to address that.
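The overflow is easy to illustrate; a sketch in Python simulating a signed 32-bit time_t:

```python
import datetime

# A signed 32-bit time_t runs out at 2**31 - 1 seconds after the epoch.
# One second later it wraps around to -2**31, which a naive 32-bit
# interpretation reads as a date in 1901.
last = 2**31 - 1
wrapped = -2**31
utc_tz = datetime.timezone.utc
print(datetime.datetime.fromtimestamp(last, tz=utc_tz))     # 2038-01-19 03:14:07+00:00
print(datetime.datetime.fromtimestamp(wrapped, tz=utc_tz))  # 1901-12-13 20:45:52+00:00
```

That January 2038 cutoff is why systems still storing timestamps in 32 bits need fixing well before then.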