When Unix Clocks Hit 10-Digits Will Anything Break?

dannycarroll asks: "I've not heard this mentioned yet so I'll bite. This weekend (depending on where you live) the Unix system clock hits 1,000,000,000 seconds since the Unix Epoch. I've heard about one or two applications that are vulnerable to a date overflow, but I wonder how many more are out there, unknown. It's not the Y2K bug, and the consequences are far milder, but it seems to me that too little attention has been paid to this potential problem. As an example, how many Perl scripts out there use 9 digits for the date-stamp field instead of a delimiter?" I've been hearing about this from several different directions and it took me by surprise. I would think that most programs out there are using time_t or at least 32-bit integers by now if they are storing seconds-since-the-epoch (you would think we had actually learned something from the Y2K chaos). Are there any well-known programs that might break because of this?
  • Let's be honest, I hope there are problems, as it'll make things more interesting around here. But I don't think there's much chance of that: anyone clever enough to be using time_t to store stuff isn't then going to whack it into a 9-character string... One hopes.
  • There's a lot of validation done on date strings (e.g. checking that it's 9 digits). For example, there's a lot of code out there that creates filenames based on this 9-digit string, and then there are apps that load up those files. There are also cookies that get set based on the epoch seconds. Any sanity checking that looks for a 9-digit integer is going to go bang. I've come across some of that code where I work. I'm sure other people will too, and it's all too easy to ignore due to the lack of hype surrounding the 1e9s bug.
  • KMail breaks (Score:4, Informative)

    by red_dragon ( 1761 ) on Wednesday September 05, 2001 @10:11AM (#2255246) Homepage

    This article [kde.org] on KDE Dot News [kde.org] describes the 1,000,000,000th-second bug in KMail < 1.0.29.1. The problem stems from the index file format used (the time stamp is a 9-char fixed-width field). The index files (and therefore the mail folders) become corrupted when the time value is >= 10^9. This doesn't have anything to do with time_t, which on 32-bit systems is slated to roll over sometime in 2038. Systems with 64-bit timestamps (like pre-X Mac OS and VMS) won't roll over for the next 20,000 years.

    • VMS had an issue, just before the 2000 event, when one of the temporary storage variables in a library time routine overflowed.

      There are many things a programmer can do that will cause problems; that is why testing has to be done under extreme circumstances, using the full range of values.
    • Systems with 64-bit timestamps (like pre-X Mac OS and VMS) won't roll over for the next 20,000 years.


      Also, most 64-bit Unix systems are using 64-bit time_t's, since on most systems I've seen, it's a long (I just checked it on Solaris and Linux, but I imagine this is true basically everywhere). So, as long as you're using a 64-bit machine, you're probably safe.

      Since by 2038 (rollover time), 32-bit machines will be mostly unused (I imagine that by that time they will only exist as obsolete hardware and embedded stuff), the 32-bit timestamp problem doesn't really exist, at least on the normal programmatic level.

      However, many standards (for example OpenPGP) use timestamps which are explicitly 32 bits long. This will cause some problems, but should be fairly simple for implementations to work around (after all, a PGP signature was probably not made in 1970, so you could probably say that if the timestamp comes before some reasonable date, like say 1990 or 1995, just add the appropriate amount of time).

      So, for those of you creating formats, remember that 64-bit timestamps are your friend.

  • I'm working on a piece of software now that uses the time string quite a bit. When I was first designing it, I was going to make the software expect a nine-digit string, because for as long as I can remember it has been that. But then I noticed how the time strings were getting pretty close to ten digits. If I had been careless, I'd be dealing with broken software this weekend.


    Of course, since there are lots of more competent programming geeks than me, I'm sure that most of the good software out there will be okay. (Or at least I hope so.)

    • So you made the string 10 digits then? You're really just pushing the solution off to someone else to solve later. That was bad programming, I think. Use time_t and convert to a string as needed. Anything else is silly.
      • Novice programmers do all sorts of nutty things. In code review I've caught people using char[10] to hold timestamps as recently as a year ago, blithely unaware that this would soon overflow. In that particular case, it was even more ironic, as it would have been more efficient to simply pass the 4-byte int between the processes in question. The decimal conversion was completely unnecessary.

        That same code review turned up another ignorance-borne gem.

        void intToChar(int pTime, char spTime[])
        {
            int i, sign;
            if ((sign = pTime) < 0)
                pTime = -pTime;
            i = 0;
            do {
                spTime[i++] = pTime % 10 + '0';
            } while ((pTime /= 10) > 0);
            if (sign < 0)
                spTime[i++] = '-';
            spTime[i] = '\0';
            reverse(spTime);
        }

        void reverse(char spTime[])
        {
            int c, i, j;
            for (i = 0, j = strlen(spTime) - 1; i < j; i++, j--) {
                c = spTime[i];
                spTime[i] = spTime[j];
                spTime[j] = c;
            }
        }

        If you look at this, it's actually almost ingenious. But no seasoned programmer would write this, since it all boils down to sprintf (spTime, "%d", pTime);. There's no substitute for experience.

        Which brings me back to my point. How much code must there be out there that was written by low-level programmers who are assigned the simpler and more tedious sections of the code? Usually the architects and designers concentrate on the "big picture" and most difficult sections of code, but invariably there are parts left for the junior developers, who by definition are still on the road to programming common sense.

        So we are bound to see this manifest. Most likely, like y2k, nothing critical will fall over and blow up. But also like y2k, we'll be finding the cosmetic (and more occasionally, serious) consequences of this bug in all sorts of places, and for quite some time.

        • "...since it all boils down to sprintf (spTime, "%d", pTime);"

          Well, you forgot the interesting part:

          char spTime[10];
          sprintf (spTime, "%d", pTime);
          • Naw, I didn't forget it. On the contrary, I said that this was the other main problem found at that code review, as indeed the routine intToChar() was being called with a char[10] argument!

            It would surely have blown up by now... except that we caught it... and then that the project was eventually cancelled, partially because these consultants kept making the same sorts of errors and we just weren't getting anywhere. Sigh.

      • So you made the stirng 10 digits then? You're really just pushing the solution off to someone else to solve later.


        Um, correct me if I'm wrong, as it would take me valuable seconds to do the actual math, but isn't that, what, 300 years away before we go to 11 digits? I would hope his program would not still be in use.
  • by Ami Ganguli ( 921 ) on Wednesday September 05, 2001 @11:46AM (#2255660) Homepage

    I doubt there's a lot of software that explicitly depends on the date being nine digits. It's not like the Y2K bug, where the most convenient form to manipulate the date was often as a string. People using the Unix time for their date would normally find it easier to manipulate the value as an integer. As long as they stick to that they're fine.

    Where there might be problems (and I'm guilty of doing this) is where the developer took advantage of the fact that dates sorted alphabetically also happen to be sorted temporally. Say you save dated information in files and incorporate the timestamp in the file name. When you read the file name back in it's a string. If you need the most recent data, it's tempting to sort it as a string and pick off the last item (especially if you're using Perl, which makes this sort of manipulation really easy). Unfortunately string comparisons will stop working on Saturday ("1000000000" < "999999999").

    • I forgot to escape my "<" at the end there. That should say: ("1000000000" < "999999999", although 1000000000 > 999999999).

  • Now that we've seen all the heat generated by Y2K, it's time to start using a time representation that's got a bit more mileage in it than the current 32 bits - like 64 bits.

    Unix time overflow (signed or unsigned) may be some years away, but why wait, especially now that an ever-increasing amount of goods contains processing power.

    Using 64-bit time, with the 2^63 pivot point (0x80...00) set at the current epoch of 1/1/1970, would allow a 64-bit time id for each second in human history: plus or minus 2.9 × 10^11 years (if my arithmetic is correct).

    Or perhaps the gurus think that our current concepts of timekeeping will become obsolete in the next 30 years: maybe a second is too granular ....

    (I am aware that VMS used 64-bit time, but that was nanoseconds, IIRC, and would run out far too soon!)

  • Some percentage of system clocks are just set wrong. I sorted my email by date; the message with the worst time warp was dated Sun, 24 Jan 2094 14:55:27 +0100.

    These people with flaky hardware and/or flaky administrators are providing a valuable service and testing bunches of applications for us. This does provide uneven coverage, applications for the clueless get the most wrong date testing. The headers say that message from 2094 was sent with Microsoft Outlook Express 5.50.4133.2400.

    • I had a laptop that worked fine up until 1997, at which point it suddenly decided it was 2097. It used an OLD PhoenixBIOS I _think_. My only guess is that the BIOS clock-setting utility was designed to have Y2K support but used a century flag that happened to sit over some bit position they didn't foresee being in use :D
  • by AtariDatacenter ( 31657 ) on Friday September 07, 2001 @03:24PM (#2264683)
    Sun has released a technical bulletin on this. It doesn't appear to be NDA or confidential, but then again, they don't always label their stuff. (I had a problem with that once. Not even a copyright notice. They screamed and yelled when the document hit the net.)

    They do, however, reference the following URL at the end of the bulletin --

    http://www.mitre.org/research/y2k/docs/TIME_T.html [mitre.org]
