When Unix Clocks Hit 10 Digits, Will Anything Break?
dannycarroll asks: "I've not heard this mentioned yet so I'll bite.
This weekend (depending on where you live) the Unix system clock hits 1,000,000,000 seconds since the Unix Epoch.
I heard about one or two applications that are vulnerable to a date overflow but I am wondering how many more are out there, unknown.
It's not the Y2k bug and consequences are far from it, but it seems to me that there has been too little attention paid to this potential problem.
As an example, I wonder how many Perl scripts out there use 9 digits for the date-stamp field instead of a delimiter?" I've been hearing about this from several different directions and it took me by surprise. I would think that most programs out there are using time_t or at least 32-bit integers by now if they are storing seconds-since-the-epoch (you would think we actually had learned something from the Y2K chaos). Are there any well known programs that might break because of this?
I for one am hoping things will be affected. (Score:1)
Validation stuff, and lots more (Score:2)
KMail breaks (Score:4, Informative)
This article [kde.org] on KDE Dot News [kde.org] describes the 1,000,000,000th-second bug in KMail < 1.0.29.1. The problem stems from the index file format used (the time stamp is a 9-char fixed-width field). The index files (and therefore the mail folders) become corrupted when the time value is >= 10^9. This doesn't have anything to do with time_t, which on 32-bit systems is slated to roll over sometime in 2038. Systems with 64-bit timestamps (like pre-X Mac OS and VMS) won't roll over for the next 20,000 years.
Re:KMail breaks (Score:1)
There are many things that a programmer can do that will cause problems, that is why testing has to be done under extreme circumstances, using the full range of values.
Re:KMail breaks (Score:1)
Also, most 64-bit Unix systems are using 64-bit time_t's, since on most systems I've seen, it's a long (I just checked it on Solaris and Linux, but I imagine this is true basically everywhere). So, as long as you're using a 64-bit machine, you're probably safe.
Since by 2038 (rollover time), 32-bit machines will be mostly unused (I imagine that by that time they will only exist as obsolete hardware and embedded stuff), the 32-bit timestamp problem doesn't really exist, at least on the normal programmatic level.
However, many standards (for example OpenPGP) use timestamps which are explicitly 32 bits long. This will cause some problems, but should be fairly simple for implementations to work around (after all, a PGP signature was probably not made in 1970, so if the timestamp decodes to before some reasonable date, say 1990 or 1995, just add the appropriate amount of time).
So, for those of you creating formats, remember that 64-bit timestamps are your friend.
Hopefully people will have thought this out. (Score:2)
Of course, since there are lots of more competent programming geeks than me, I'm sure that most of the good software out there will be okay. (Or at least I hope so)
Re:Hopefully people will have thought this out. (Score:1)
Lots of clumsy programmers (Score:3, Interesting)
That same code review turned up another ignorance-borne gem.
If you look at this, it's actually almost ingenious. But no seasoned programmer would write this, since it all boils down to sprintf (spTime, "%d", pTime);. There's no substitute for experience.
Which brings me back to my point. How much code must there be out there that was written by low-level programmers who are assigned the simpler and more tedious sections of the code? Usually the architects and designers concentrate on the "big picture" and most difficult sections of code, but invariably there are parts left for the junior developers, who by definition are still on the road to programming common sense.
So we are bound to see this manifest. Most likely, like y2k, nothing critical will fall over and blow up. But also like y2k, we'll be finding the cosmetic (and more occasionally, serious) consequences of this bug in all sorts of places, and for quite some time.
Re:Lots of clumsy programmers (Score:1)
Well, you forgot the interesting part:
Re:Lots of clumsy programmers (Score:1)
It would surely have blown up by now... except that we caught it... and then that the project was eventually cancelled, partially because these consultants kept making the same sorts of errors and we just weren't getting anywhere. Sigh.
Re:Hopefully people will have thought this out. (Score:1)
Um, correct me if I'm wrong, as it would take me valuable seconds to do the actual math, but isn't that, what, 300 years away before we go to 11 digits? I would hope his program would not still be in use.
Re:Hopefully people will have thought this out. (Score:2)
I would guess sorting might break... (Score:3, Informative)
I doubt there's a lot of software that explicitly depends on the date being nine digits. It's not like the Y2K bug, where the most convenient form to manipulate the date was often as a string. People using the Unix time for their date would normally find it easier to manipulate the value as an integer. As long as they stick to that they're fine.
Where there might be problems (and I'm guilty of doing this) is where the developer took advantage of the fact that dates sorted alphabetically also happen to be sorted temporally. Say you save dated information in files and incorporate the timestamp in the file name. When you read the file name back in it's a string. If you need the most recent data, it's tempting to sort it as a string and pick off the last item (especially if you're using Perl which makes this sort of manipulation really easy). Unfortunately string comparisons will stop working on Saturday ("1000000000" 999999999).
Whoops... (Score:2)
I forgot to escape my "<" at the end there. That should say: ("1000000000" < "999999999", although 1000000000 > 999999999).
It's time for 64-bit time ...! (Score:2, Interesting)
Unix time overflow (signed or unsigned) may be some years away, but why wait, especially now that an ever-increasing number of goods contain processing power?
Using 64-bit time, with the 2^63 pivot point (0x80...00) set at the current epoch of 1/1/1970, would allow a 64-bit time id for each second in human history: plus or minus 2.9 × 10^11 years (if my arithmetic is correct).
Or perhaps the gurus think that our current concepts of timekeeping will become obsolete in the next 30 years: maybe a second is too granular ....
(I am aware that VMS used 64-bit time, but that was nanoseconds, IIRC, and would run out far too soon!)
Re:It's time for 64-bit time ...! (Score:1)
Re:It's time for 64-bit time ...! (Score:1)
VMS actually had a bug report filed on it, that the time would overflow the standard format after the year 10K, but a fix was promised by then.
Explanation via google (Score:2)
Remember To Fix 64-bit Time (Score:1)
People with bad clocks are already testing this (Score:1)
These people with flaky hardware and/or flaky administrators are providing a valuable service, testing bunches of applications for us. The coverage is uneven, though: the applications of the clueless get the most wrong-date testing. The headers say that message from 2094 was sent with Microsoft Outlook Express 5.50.4133.2400.
Re:People with bad clocks are already testing this (Score:1)
Sun Microsystems Technical Bulletin (Score:3, Informative)
They do, however, reference the following URL at the end of the bulletin --
http://www.mitre.org/research/y2k/docs/TIME_T.htm