Worst Bug or Shortcomings in a Standard?

Alastair asks: "Just curious what the Slashdot crowd thinks are the worst bugs ever to creep into a standard? For mine, the various security vulnerabilities in WEP would make the grade. Also perhaps the lack of a protocol field in HDLC, which most implementations then added in incompatible ways. I'm thinking here about bugs which result in partial or total irrelevance of the standard itself, as opposed to just a lack of interest in adopting it."
This discussion has been archived. No new comments can be posted.

  • by OneDeeTenTee ( 780300 ) on Wednesday January 12, 2005 @07:13AM (#11333125)
    'Nuff said.
  • I love WEP. I see nothing wrong with it at all. It's so secure...
  • Linux Installation (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 12, 2005 @07:16AM (#11333134)
    I wish there was a way to install programs common across all versions of Linux.

    Linux zealots are now saying "oh installing is so easy, just do apt-get install package or emerge package": Yes, because typing in "apt-get" or "emerge" makes so much more sense to new users than double-clicking an icon that says "setup".

    Linux zealots are far too forgiving when judging the difficulty of Linux configuration issues and far too harsh when judging the difficulty of Windows configuration issues. Example comments:

    User: "How do I get Quake 3 to run in Linux?"
    Zealot: "Oh that's easy! If you have Redhat, you have to download quake_3_rh_8_i686_010203_glibc.bin, then do chmod +x on the file. Then you have to su to root, make sure you type export LD_ASSUME_KERNEL=2.2.5 but ONLY if you have that latest libc6 installed. If you don't, don't set that environment variable or the installer will dump core. Before you run the installer, make sure you have the GL drivers for X installed. Get them at [some obscure web address], chmod +x the binary, then run it, but make sure you have at least 10MB free in /tmp or the installer will dump core. After the installer is done, edit /etc/X11/XF86Config and add a section called "GL" and put "driver nv" in it. Make sure you have the latest version of X and Linux kernel 2.6 or else X will segfault when you start. OK, run the Quake 3 installer and make sure you set the proper group and setuid permissions on quake3.bin. If you want sound, look here [link to another obscure web site], which is a short HOWTO on how to get sound in Quake 3. That's all there is to it!"

    User: "How do I get Quake 3 to run in Windows?"
    Zealot: "Oh God, I had to install Quake 3 in Windoze for some lamer friend of mine! God, what a fucking mess! I put in the CD and it took about 3 minutes to copy everything, and then I had to reboot the fucking computer! Jesus Christ! What a retarded operating system!"

    So, I guess the point I'm trying to make is that what seems easy and natural to Linux geeks is definitely not what regular people consider easy and natural. Hence, the preference towards Windows.
    • But the complicated instructions are usually because Linux people giving advice make fewer assumptions. I ended up writing a paragraph on installing ut2003 for Linux when in fact the process is exactly the same as for Windows. Well, OK, one operation harder if you have autorun turned on for Windows, but that's a security feature of Linux that's worth the extra effort imo. Ditto with chmod +xing a downloaded binary.
    • Yes, because typing in "apt-get" or "emerge" makes so much more sense to new users than double-clicking an icon that says "setup".

      And where does that setup icon come from? I don't see an icon on windows that can download almost any program, compile it, and install it automatically.

      I put in the CD and it took about 3 minutes to copy everything, and then I had to reboot the fucking computer! Jesus Christ! What a retarded operating system!"

      You do realise that you're comparing the quake 3 install process
      • by slittle ( 4150 )

        And where does that setup icon come from? I don't see an icon on windows that can download almost any program, compile it, and install it automatically.

        Anywhere.

        Unlike Windows, it's rather rare to find a Linux software package that includes everything it needs to run. Generally, you're fucked for anything not under package management.

        Personally, anything I compile manually I build statically, and shove under /opt. The Unix way (spraying shit all over the filesystem) is just too much fucking work. Good t

    • by dpilot ( 134227 )
      Just "emerge quake3"

      Actually, I *almost* agree with you. The real problem is that Windows Wizards work most of the time. But when they don't, they work against you - even worse than not being there. They get in your way and make it hard to do things manually.

      I began preparing to leave RedHat when RH8.1 never happened, and they went straight to RH9. After looking for a while, and evaluating various distributions on their maintainability, etc, I came to a different realization: For home use, this is supposed
      • I agree with you. I've been using computers for about 10 years or so and went through about every version of Windows. Eventually I got good at finding and installing software. The difficult part for finding software in Windows seemed to be finding the place on the Internet to download it. Once I found the installer, it almost always was just a matter of double-clicking on it and the installer would do the rest.

        Now with gentoo that I've been using since July, it's just a matter of searching for it on por
      • by wayne606 ( 211893 ) on Wednesday January 12, 2005 @01:34PM (#11337084)
        Take your pick:
        Linux: everything is moderately hard
        Windows: 95% of the time it's easy, 5% it's impossible
    • double-clicking an icon that says "setup".

      And how this is different than double-clicking an RPM?

    • Most of the time, it is actually as simple as Windows, or simpler. It's when you have problems that you need to talk to a zealot/guru, who will help you out with your individual problem, as opposed to a howto, which deals with every possible issue.

      Remember, too, that it's a good idea to have video drivers installed properly anyway, as most distros will encourage you to do, even when doing flat things like surfing Slashdot. Ever try Windows _before_ you download the nvidia drivers? My Linux will do highe
    • Not this tripe again.

      Linux zealots are now saying "oh installing is so easy, just do apt-get install package or emerge package": Yes, because typing in "apt-get" or "emerge" makes so much more sense to new users than double-clicking an icon that says "setup".

      I'll give you a hint. "apt-get" is just a tool. Likewise for "emerge". Better frontends can exist for them. As an example, Debian provides dselect and aptitude at the command line. More importantly, there's an entry in the GNOME 2 System Tools menu f

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Wednesday January 12, 2005 @07:22AM (#11333161)
    Comment removed based on user account deletion
    • *sigh*, another sad victim of Referer Madness.
    • Well, at least that doesn't really affect code.

      How about all the 'bugs' in the English language itself? For example, the counting system. There are often occasions where you have to code up incremental counters, and the effort to make them grammatically correct in English is such a chore most people never bother.

      When you're counting something, for example days, you need to put a suffix on the number like '1st, 2nd, 3rd'. Suffixes by themselves wouldn't be so bad, but the way they're determined is quite weird. It
      • Re:"Referer" (Score:2, Interesting)

        by Haeleth ( 414428 )
        When you're counting something, for example days, you need to put a suffix on the number like '1st, 2nd, 3rd'. Suffixes by themselves wouldn't be so bad, but the way they're determined is quite weird. . . . In Japanese, you write it without suffixes, and even without plural forms, making it much easier to code incremental counts.

        Sure, Japanese is so logical.

        Let's consider the days of the month. "One" is "ichi", and "day" is "hi", so we put them together and get "tsuitachi". Then for the second, "two" is "ni",
        • Okay, I think some people may not have understood what I was talking about originally. Imagine yourself writing a counter program in English, and all the special exceptions you would need to use to get the suffix right, not to mention the spelling.

          ..."One" is "ichi", and "day" is "hi", so we put them together and get "tsuitachi". Then for the second, "two" is "ni", so we put that together with "hi" and naturally that produces "futsuka"...

          ... And you missed the point. I may have confused you because
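The ordinal-suffix logic described in this sub-thread is easy to sketch; the 11th/12th/13th exceptions are exactly what make a naive "look at the last digit" counter wrong (a minimal sketch, not tied to any particular program above):

```python
def ordinal(n: int) -> str:
    """Append the English ordinal suffix: 1st, 2nd, 3rd, 4th, ..., 11th, 21st."""
    # 11, 12 and 13 are the exceptions: they take 'th' despite ending in 1/2/3.
    if 10 <= n % 100 <= 13:
        return f"{n}th"
    return f"{n}" + {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
```

Note the check is against `n % 100`, not `n % 10`, so 111 correctly comes out as "111th" while 121 comes out as "121st".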
  • UTF-8 email headers (Score:3, Informative)

    by dimss ( 457848 ) on Wednesday January 12, 2005 @07:46AM (#11333253) Homepage
    Standards are very unclear when you have to encode a UTF-8 'Subject' header. Looks like there is no distinction between bytes and characters. I had to write an automatic UTF-8 mailer last year. There were many, many issues with UTF-8 headers in different MUAs, especially with a mix of English and non-English words in 'Subject'. Finally we decided to send two separate messages in two different 8-bit encodings.
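For reference, the machinery this poster had to fight is RFC 2047 "encoded words". A quick sketch of the round-trip with Python's standard library (the subject string here is made up):

```python
from email.header import Header, decode_header

# A mixed English / non-English subject -- the painful case described above.
subject = "Status: \u0437\u0432\u0456\u0442"

# Encode to the RFC 2047 encoded-word wire form, safe for 7-bit transport.
wire = Header(subject, charset="utf-8").encode()
assert wire.startswith("=?utf-8?")

# Round-trip: decode_header yields (bytes, charset) chunks to reassemble.
decoded = "".join(
    chunk.decode(cs or "ascii") if isinstance(chunk, bytes) else chunk
    for chunk, cs in decode_header(wire)
)
```

The bytes-vs-characters confusion the poster mentions is real: the 75-character limit on an encoded word applies to the encoded byte form, so a naive splitter can cut a multi-byte UTF-8 character in half.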
  • Really, any of the traditional protocols that have little or no concept of security.
    • TCP is too low-level for security considerations.

      SMTP was designed at a time when every connected computer had a sysadmin that was legally responsible for it.

      POP has POP over SSL; while there is the option to use STARTTLS, I haven't used a POP server yet that used this.

      HTTP... What's the problem with HTTP?
      • Is there STARTTLS for POP? *google* I'll be, so there is. I've set up TLS for SMTP, but that's it.

        Anyway, while I agree with your comment in general, I think we have to address exactly what kind of security we're talking about. TLS is fine for what it is - it's just that what it is is fairly limited. Perhaps one of the weaknesses of a protocol stack model is that you have to implement security for each level at each level. For example, TLS will prevent eavesdropping on your SMTP conversation, but it

    • My wall sockets have little security either. At most there's a fuse, breaker or penny for protection. No user authentication or load request handshaking and management. It's shocking.
  • Java (Score:3, Insightful)

    by mwvdlee ( 775178 ) on Wednesday January 12, 2005 @07:54AM (#11333277) Homepage
    Most people don't call it a "bug" but I do; the operator overloading of '+', '+=' and '=' in the Java specification's String class.

    Why is this a bug? Because the creators of the standard explicitly denounce operator overloading, yet they do it anyway for this exception. Operator overloading is explicitly not possible in Java... except this one time.

    If it is so incredibly useful in this particular case that they would bend the specification for it, can't they understand that it would be useful for other classes (i.e. Matrix classes or even the standard Number classes) too?
    • I agree with this whole-heartedly. While I've gotten away from operator overloading on account of using Java, it bothers me every time I'm allowed to do a "this" + "sucks" and have it resolve to an object. I think the idea behind not providing operator overloading was to remove ambiguity, because some people might make arbitrary choices about how two different objects are combined with an operator that may not be intuitive to other developers. While that reasoning seems ok, consider what's happened inste
      • What about rewriting the MyDate constructor so that if it's passed an integer value, it initializes the object already modified?

        i.e. MyDate d= new MyDate(-2);

        Sorry... offtopic, I know. I really didn't miss your point. Damned programmer's mind...

        I'll be going now...

        Me.leave("topic");
      • "Is it really any clearer to someone who's not going to dig through MyDate.java (provided it is available) what this is doing versus the previous example?"

        uh, YEAH

        What is 5? Five *what*? Milliseconds, minutes, dates, months, years? Date at this point is deprecated except as a container of an abstract point-in-time millisecond value. Calendar classes should be used to manipulate time. Maybe that was a bad example, but the point is, when the mathematical symbols are not necessarily clear, it is much bett
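For contrast with Java's one-off String exception, this is roughly what the thread's original poster is asking for: in a language with user-defined operator overloading, a Matrix or Vector class can join '+' the same way String does. A toy sketch (the Vec class is made up for illustration):

```python
class Vec:
    """Tiny vector type: '+' is defined by the class, not by the compiler."""

    def __init__(self, *xs):
        self.xs = xs

    def __add__(self, other):
        # Element-wise addition, exactly the kind of overload Java forbids.
        return Vec(*(a + b for a, b in zip(self.xs, other.xs)))

    def __eq__(self, other):
        return self.xs == other.xs

# Reads like the String concatenation case that Java special-cases.
v = Vec(1, 2) + Vec(3, 4)
```

Whether such overloads clarify or obscure is the whole argument of this sub-thread; the mechanism itself is a few lines.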
  • mirc (Score:3, Insightful)

    by JohnFluxx ( 413620 ) on Wednesday January 12, 2005 @07:56AM (#11333288)
    mIRC file transfer sends data in packets, and waits for an ack for each packet.

    Over tcp.

    TCP of course already does this, and this just makes sending files very very slow. It should have just sent it as a single stream.
    • Is that mIRC or DCC in general? Do other clients do this as well? I'm not versed in the DCC protocol as that's a separate RFC I haven't gotten to yet, but if that's across DCC's RFC, that's a braindead protocol design.
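The redundant acknowledgment being complained about is concrete: in the DCC SEND scheme the receiver sends back a 4-byte network-order running total of bytes received after every chunk, even though the TCP layer underneath already guarantees ordered, acknowledged delivery. A sketch of the ack format (chunk sizes here are arbitrary):

```python
import struct

def dcc_ack(total_received: int) -> bytes:
    """DCC-style ack: 4-byte big-endian count of all bytes received so far."""
    return struct.pack("!I", total_received)

# Receiver-side sketch: one ack per chunk, duplicating TCP's own ACKs.
acks = [dcc_ack(n) for n in (1024, 2048, 3072)]
```

If the sender waits for each of these before sending the next chunk, throughput collapses to one chunk per round-trip time, which is the slowness described above.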
  • by geirt ( 55254 ) on Wednesday January 12, 2005 @08:05AM (#11333335)

    It should have been female connectors with only one pinout (e.g. DCE) on all equipment supporting RS232, and all RS232 cables should be crossed (null modems).

    Instead we have a complete mess with male and female connectors, straight and crossed cables. Is pin 2 receive or transmit? Dohhh.

    Why female connectors on boxes? Male connectors are more fragile. If the pins break, replace (or repair) the cable. The female connector on the box is OK.

    Luckily, RS232 is dying ;-)

    • Luckily, RS232 is dying ;-)
      Nooo!!

      **hugs USR Couriers**

      Don't you listen to that Bad Man...
    • Luckily, RS232 is dying ;-)

      Yeah, but Ethernet repeated the same mistake and is sure to stay for a while.
      • Luckily, RS232 is dying ;-)

        Yeah, but Ethernet repeated the same mistake and is sure to stay for a while.

        The Gigabit Ethernet spec has fixed that mistake, since all GigE equipment is auto MDI/MDI-X.

    • The RS-232, or EIA/TIA-232 as it's known today, has a lot of shortcomings beyond the DCE -> DTE cabling mess. It has to be the worst standard ever!

      They didn't use balanced cabling, which limits the distance and the bandwidth big time. RS-422 and 485 both use balanced cables and signals and can go up to 4,000 feet at 115,200 baud, while RS-232 can only go 50 feet at 19,200 baud.
      • Note that RS-422 has a higher number than RS-232. Technology improves over time and you can't judge designs across generations. RS-232 was first created in 1960 and wasn't bad for its time.
    • The problem is that RS232 is (was) being used for things it was not designed for. It was never meant to be a general purpose serial communications standard.
    • RS-232 may seem to be dying, but it's far from gone. Just about any telecom system that needs to be configured is interfaced through a 232 serial port. Not to mention there's tons of stuff still using RS-530 and RS-449.

      While it may seem confusing to a relative outsider, RS-232 is beautiful to those working in the WAN/campus portions of the IT field.

      232 is usually DCE if it's female; DTE if it's male. A straight-through cable will fit most people's needs. If you enter into the world of crossover cables
  • I wish the creators of DVD had required players to support converting from 24 frames per second non-interlaced to 60 fields per second interlaced on the fly, rather than the current standard of the movie being converted when the disk is mastered.

    When I am watching a DVD on my computer, it is trivial for my monitor to switch to 72Hz refresh, and show each movie frame for 3 refreshes, rather than getting all the interlace artifacts. It would also have improved the compression of the DVD for a given quality.
    • I wish the creators of DVD had required players to support converting from 24 frames per second non-interlaced to 60 fields per second interlaced on the fly, rather than the current standard of the movie being converted when the disk is mastered.

      What are you talking about? Movies are mastered to DVD at 24fps and the player does indeed perform the 3/2 pulldown process to produce the 60 fields per second NTSC TV requires. Progressive scan DVD players can directly output the 24fps non-interlaced image to

      • Actually, the DVDs do indeed have 60fps interlaced video. Progressive scan DVD players have to reverse the process to get back to the 24fps. That's why some progressive scan players produce better results than others....

        Check out http://www.hometheaterhifi.com/volume_7_4/dvd-benchmark-part-5-progressive-10-2000.html for one explanation.
        • I think you are misreading this.

          DVDs sourced from 24 frame/s film are encoded at 24 frame/s, 480-scanline progressive, and converted to 60 field/s NTSC by the player. Material shot on standard NTSC video is sourced at 60 fields, 480 scanlines, interlaced, and the player can just play that back. However, a progressive player, while able to show 24fps 480p material nicely, has a harder time with 480i material, as it has to deinterlace it, and the quality of the deinterlacer is going to affect the quality of t
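The 3:2 (a.k.a. 2:3) pulldown this sub-thread is arguing about maps 24 progressive frames onto 60 interlaced fields by alternating two and three fields per frame; the arithmetic is simple enough to sketch:

```python
def pulldown(frames):
    """2:3 pulldown: even-indexed frames yield 2 fields, odd-indexed yield 3,
    so every pair of frames becomes 5 fields and 24 frames/s becomes
    exactly 60 fields/s."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields
```

Reversing this (finding which fields are repeats so the original 24 frames can be recovered) is the "cadence detection" step that separates good progressive players from bad ones.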
  • A lot of equipment uses RJ45 connectors to provide serial connections (e.g. terminal servers). But they all use different pin outs. Sometimes even different models for the same manufacturer need different adapters.
  • by baadfood ( 690464 ) on Wednesday January 12, 2005 @09:31AM (#11333882)
    Sure, a well-defined markup language is nice, but really, people seem to lose all rational sense when it comes to XML - it cannot be used in a project without the project becoming "XML"? Scripting languages have been capable of processing all manner of free-form text files in the past, but somehow XML is necessary for interoperation? Why do people somehow think that XML encapsulated data will be small and quick to parse, and are then surprised when it isn't? Why are they so fucking proud when their server can generate some trivial number of XML packets per second? What nutjob actually thought XML is easy to read? And what is the difference between a node and an attribute? Really?
    • by Anonymous Coward

      Sure, a well-defined markup language is nice, but really, people seem to lose all rational sense when it comes to XML

      So in other words, there isn't a problem with the standard at all?

      Scripting languages have been capable of processing all manner of free form text files in the past

      And you've got to write a new parser for every new format.

      somehow XML is necessary for interoperation?

      Necessary? No. The best option? Usually.

      Why do people somehow think that XML encapsulated data will be small

      • by Anonymous Coward
        Config file in XML:

        <?xml .... ?>
        <config>
          <connections>
            <connection>
              <type>mysql</type>
              <host>foo.bar.com</host>
              <username>bob</username>
              <password>2sekret4u</password>
            </connection>
            <connection>
              <type>mysql</type>
              <host>db.host.com</host>
              <username>jane</username>
              <password>flower</password>
            </connection>
          </connections>
        </config>
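In fairness to the pro-XML side of this thread, the flip side of that verbosity is that no custom parser has to be written. A sketch with Python's standard library against a config shaped like the one above:

```python
import xml.etree.ElementTree as ET

doc = """<config><connections>
  <connection><type>mysql</type><host>foo.bar.com</host>
    <username>bob</username><password>2sekret4u</password></connection>
  <connection><type>mysql</type><host>db.host.com</host>
    <username>jane</username><password>flower</password></connection>
</connections></config>"""

# One generic parser turns every <connection> into a plain dict.
connections = [
    {field.tag: field.text for field in conn}
    for conn in ET.fromstring(doc).iter("connection")
]
```

The same few lines handle any future fields added to a connection, which is the interoperability argument in a nutshell.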
    • Slashdot moderation is usually fair except when the topic is XML, in which case outrageous, trollish, and uninformed comments that would be shot down in any other topic area are judged "interesting" or "informative".

      Yes, XML has been overhyped. Yes, it is used in many places where it's not appropriate. But it's completely unfair to tar an entire language and suite of associated technologies because of the way it's abused. Is Flash an inferior product because there are idiots who put loud, bloated Flash in

      • Apart from the lexical syntax of XML, which is not very easy to parse at all, it has several design flaws, such as:
        • No explicit distinction between different kinds of values, e.g. booleans, numbers and strings.
        • No explicit distinction between sets and lists.
        • No explicit distinction between identifying and non-identifying properties.
        • No explicit mechanism for defining internal references.
        • No way to completely type values, except through an externally defined schema.
    • Any simple and standard text-based markup language for data encoding with several free parsers available would probably have been just as overhyped as XML.

      Numerous other formats performing the same role as XML exist, but they never got the hype because they either weren't a standard, didn't have available parsers, weren't simple, etc., etc.

      What nutjob actually thought XML is easy to read?

      I think it's easy to read! It's a hell of a lot easier to read than RTF or Postscript. Or consider Sendmail configurat
  • by AndroidCat ( 229562 ) on Wednesday January 12, 2005 @09:36AM (#11333942) Homepage
    Microsoft, in their infinite wizzbang, uses a floating point representation for date/time in their OLE types, with the date (days from x) in the integer and time in the fraction. That's fine until you have to do math like timezone conversions. If you convert a local time to GMT then to someplace else and back, frequently your time is now off by 0.0000000001 seconds. That adds excitement to comparing two times, especially when only one has been converted to and from.

    It's not a huge problem to avoid, but unless you're draconian about using standard safe time math routines, it'll bite you .. eventually .. when you least expect it .. at a customer site running Martian Standard Time at local midnight. (Which will still be a bad hour for you to get a call no matter where it is.)

    And all because someone thought it would be pretty nifty to use floating point. Don't they teach the inherent dangers of round off or truncation errors in school these days? (And before someone automatically jumps on MS, with all the UNIX standards, what are you using? Is it safe?)

    • Don't they teach the inherent dangers of round off or truncation errors in school these days? (And before someone automatically jumps on MS, with all the UNIX standards, what are you using? Is it safe?)

      We had to learn this one the hard way. We were in the regional ACM programming contest, and our solution worked perfectly on our local machine with the test data, but would return incorrect results on the marking machine. We ended up spending a ridiculous amount of time debugging it, which was hard, since
      • I heard about that problem, since my university hosted part of the Mountain region's ACM programming contest.

        I think the Linux machines here and the Solaris machines at other sites returned different results, due to the differences in the architectures and their math libraries.
    • Use of floats for date and time is really quite reasonable, as generally you are interested in the value of the mantissa (i.e., ten digits or whatever is normally enough to always give the resolution that you need); however, all times should be normalised (to UTC, Stardate or whatever) before they are stored or used. Local times shouldn't exist outside data presentation layers. Rounding errors on time are really not so much an issue; rounding errors on money, now there is a problem!

      OTOH, I started when t
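The failure mode described above is easy to reproduce in any language with binary floats, not just OLE. A sketch (the epoch value is made up) of why a tolerance comparison is the "standard safe time math" being recommended:

```python
# OLE-style timestamp: whole days in the integer part, time in the fraction.
t_local = 38364.0 + 13 * 3600 / 86400   # some day at 13:00 local (made up)
t_gmt = t_local - 5 / 24                # convert to GMT for a UTC-5 zone
t_back = t_gmt + 5 / 24                 # ...and convert back again

# 5/24 has no exact binary representation, so t_back may differ from
# t_local by a rounding error on the order of 1e-11 days. Never compare
# two such timestamps with ==; use a tolerance instead.
def same_instant(a: float, b: float, eps: float = 1e-9) -> bool:
    return abs(a - b) < eps
```

The same trap bit the ACM contest team mentioned in the reply: identical source, different FPUs and math libraries, different last bits.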

  • Submarine patents (Score:3, Insightful)

    by SgtChaireBourne ( 457691 ) on Wednesday January 12, 2005 @09:39AM (#11333957) Homepage
    Submarine patents and other proprietary gimmicks are bad.

    A current example would be packing VC-1 into both Blu-ray and HD DVD [blu-ray.com].

    Though software patents are currently only a problem in the U.S., I'd still say that the threat of stealth patents would be the worst bug. Proprietary material shouldn't get through the standards process.


  • DNS's MX entry is excellent, I wish it existed for other services as well.
  • C++ (Score:3, Interesting)

    by Grab ( 126025 ) on Wednesday January 12, 2005 @10:03AM (#11334213) Homepage
    The overloading of the bit-shift operators << and >> for streams in C++. Kludge city! And C++'s templates don't exactly come out smelling of roses either.

    Grab.

    • by leifw ( 98495 )
      Is there any implementation of templates or generics that you would hold up as satisfyingly aromatic?
  • IMAP (Score:2, Interesting)

    by Anonymous Coward
    IMAP should be a powerful idea in principle, but it looks like it has been implemented by people who haven't had much experience with programming concurrent systems. I learned about this the hard way while writing an IMAP server.

    Using IMAP it should be possible for several clients to connect to the same account simultaneously. Changes made by one are reflected in the others as they happen, since the server sends updates describing these changes. Think model-view-controller. (Some clients ignore these u
  • EIDE (Score:3, Interesting)

    by Deliveranc3 ( 629997 ) <deliverance@level4 . o rg> on Wednesday January 12, 2005 @10:11AM (#11334288) Journal
    Reversible cables? Come on, that is so unnecessary! And making them wide and flat, come on!

    Plus the whole master/slave system is kinda fun.

    Basically it's the only thing a novice couldn't figure out on their own when doing an install :(
  • by scruffy ( 29773 ) on Wednesday January 12, 2005 @10:49AM (#11334750)
    CSS was supposed to copy protect DVDs, but didn't, both because of poor encryption and because it doesn't prevent a bit-for-bit copy.

    It was a de facto standard to use two digits to encode the year, which caused a lot of fun a few years ago.

    • CSS was never meant to prevent copying of the DVD, at least not from people who really understood it. It was meant to prevent unlicensed DVD players from playing back DVD content. Basically, it was meant to enforce licensing payments for any manufacturers of DVD players.

      It did that pretty well.

      It was marketed as copy-protection but in reality, it was always intended as playback prevention.
  • by rlp ( 11898 )
    No unsigned byte primitive. Grrrrrrr!! Also, the way Java handles date / time. Starting with mostly-deprecated util Date object, then abstract Calendar object AND GregorianCalendar object. Unless of course you're accessing a database, and then you need the SQL Date object, not to be confused with the util Date object. And don't forget the TimeZone, SimpleTimeZone, DateFormat, and SimpleDateFormat objects.

    • I almost choked when I found the option to allow, or not allow, illegal dates.

      For instance, you can tell it to accept 02/31/2005 as a valid date. Actually, you have to modify the flag so it doesn't accept dates like that (this was several years ago, probably deprecated by now).

      While I can understand the appeal of allowing arbitrary but invalid dates (for unusual circumstances), it should NOT be the default for the class.
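For comparison, here is a sketch of the strict-by-default behaviour being argued for, using Python's datetime (Java's GregorianCalendar is lenient unless setLenient(false) is called, which matches the flag complaint above; the helper function here is made up):

```python
from datetime import date

def parse_date(year: int, month: int, day: int):
    """Strict by default: 02/31 is rejected instead of rolled into March."""
    try:
        return date(year, month, day)
    except ValueError:
        return None
```

Leniency as an opt-in (rather than the default) means typos fail loudly at the boundary instead of silently becoming a different day.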

  • The pristine IPsec protocol family [faqs.org] lacked two key features: the ability to pass NAT, and TLS/SSL-like hybrid authentication. If these features had been built into IPsec and its implementations ten years ago, network layer encryption would be far more used, and crappy stuff like PPTP would never have raised its ugly head. (I know this doesn't meet the question's requirements for "shortcomings", but I think the internet would look different today without it.)

    The NAT problem got resolved by UDP encaps

  • by JoeD ( 12073 ) on Wednesday January 12, 2005 @11:26AM (#11335193) Homepage
    With the space available on a CD, they should have allowed space for Artist / Album / Songname / etc on the disk itself.
  • by Bookwyrm ( 3535 ) on Wednesday January 12, 2005 @11:42AM (#11335417)
    Beyond basing a standard for managing stateful telecommunications sessions on a protocol for stateless bulk data transport, the most blatant silliness in the SIP standard was the original "Alert-Info" header. The Alert-Info header allowed the calling party to specify the ring tone/sound by listing a URL that the receiving device would automatically attempt to fetch and play without waiting on the recipient user to allow/disallow it.

    Others:
    List of Evil SIP ideas [ietf.org]

    Oh, and never updating the SIP version string despite syntax changes in the standard is evil.
  • The standard insists on requiring broken behavior from new implementations to preserve compatibility with old software. Not a good idea when precision timing is becoming increasingly important.
  • NFS (Score:4, Interesting)

    by tedgyz ( 515156 ) * on Wednesday January 12, 2005 @12:02PM (#11335731) Homepage
    NFS is inherently flawed in its transaction acknowledgement and retry behavior.

    Back before M$ had Linux to kick around, there was the UNIX-Haters Handbook [microsoft.com]. I worked at Apollo/HP with a UNIX-Hater zealot. He enlightened me on the serious flaws in NFS, which I had experienced first-hand on a few occasions.

    A quote from the book: (page 287)
    So even though NFS builds its reputation on being a "stateless" file system, it's all a big lie. The server is filled with state--a whole disk worth. Every single process on the client has state. It's only the NFS protocol that is stateless. And every single gross hack that's become part of the NFS "standard" is an attempt to cover up that lie, gloss it over, and try to make it seem that it isn't so bad.
  • by szyzyg ( 7313 ) on Wednesday January 12, 2005 @12:03PM (#11335749)
    The Socket class is astonishingly broken
    IPAddresses are frequently imported/exported as Longs - 8 bytes with a sign bit
    Port numbers are 4 byte signed integers.

    Sure, Java doesn't have an unsigned int or long, but .Net does.

    Now they introduced a way to get the IP address as an array of bytes, so that you can support IPv6, problem is the constructor that takes a byte array will only accept a 16 byte address, not a 4 byte one for us IPv4 users. On top of this they've deprecated the only other method that can get you an ip address in binary format.

    So if you want to serialize an IP address you have to either get it as a Long and cast it to an unsigned int - this generates all sorts of compiler warnings, so forget about clean compiles. Or you can get the address as a byte array and then on reception you have to turn it into an unsigned long.

    Oh yeah, there's no documentation on what the environment does about the endianness of IP addresses converted into longs.

    Now... we've also got the alarmingly bad Select() method, which requires you to build lists of the sockets you're interested in and then proceeds to prune these to only leave the ones where activity has happened. Problem is that you can't reuse these lists, so you need to construct them every time, and you end up spending more CPU on building lists than you do on simply scanning the list of open sockets. Not that it matters; .Net throws an exception if you try to Select() on a list of more than about 30 sockets.

    Another retarded design decision is the implementation of non-blocking IO and EAGAIN: they decided that this should be implemented as an exception. And we all know how fast exceptions are.

    Grrrrrrrrr

    I could go on and on.
    • Now... we've also got the alarmingly bad Select() method

      Which is strange, because (by your description) it has exactly the same shortcomings of the old *NIX select(2) system call (that's why poll(2) is there). One would expect that people designing a library in the 21st century knew better than this.
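The sign-bit complaint in this thread is concrete: any IPv4 address at or above 128.0.0.0 has its top bit set, so stuffing the 4 bytes into a signed 32-bit type flips the value negative. A sketch of the effect in Python with struct (the address is arbitrary):

```python
import socket
import struct

packed = socket.inet_aton("192.168.0.1")        # 4 bytes, network order
as_unsigned = struct.unpack("!I", packed)[0]    # the value you actually want
as_signed = struct.unpack("!i", packed)[0]      # what a signed int gives you

# 192 >= 128, so the sign bit is set and the signed view goes negative;
# the two views differ by exactly 2**32.
```

This is why serializing addresses through signed Longs forces the casts and compiler warnings described above.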

  • by 4of12 ( 97621 )

    An unambiguous description of the One True Way to properly render .doc Always, Anywhere.
  • by Ramses0 ( 63476 ) on Wednesday January 12, 2005 @12:29PM (#11336101)
    This is by far the most egregious intentional hobbling of a standard by retarded people (the W3C). Ever since they deprecated the elements <menu> (and to a lesser extent: <dir>) in a Markup Language, I have lost faith in their ability to properly evolve a standard.

    See the HTML 4.0 [w3.org] recommendation. I literally hit something when I first read this back in '97 (yes, I sometimes read standards documents and RFC's for fun :^). It's also referenced in the original ('97) release [w3.org].

    The DIR element was designed to be used for creating multicolumn directory lists. The MENU element was designed to be used for single column menu lists. Both elements have the same structure as UL, just different rendering. In practice, a user agent will render a DIR or MENU list exactly as a UL list.


    We strongly recommend using UL instead of these elements.


    Remember that HTML is a markup language, and see above where the W3C intentionally took away contextual information from the document.

    Keep in mind this was *after* the release of CSS1 (Cascading Style Sheets, level 1 W3C Recommendation 17 Dec 1996 [w3.org] vs. HTML 4.0 Specification W3C Recommendation 18-Dec-1997 [w3.org])

    99% of websites on the planet have something you could consider a "menu", or "tabs" of some kind. Wouldn't it be nice if we had a particular tag for that, like "<menu>"? (we do ... or we did).

    Nowadays, lots of people are linking to other people (a <dir>ectory) of people with blogrolls, wouldn't it be nice to wrap those in a <dir> list and style them separately, without using arbitrary <ul class="blah"> tags? Or perhaps a list of files available for download (<dir>), or a list of (perhaps) emails in a web mailing client.

    Not that there's anything preventing use of ad-hoc class tags to achieve the same effect, but there is semantic information (especially in <menu>) that can be put to good use when standardized like this. Everybody complains about screen-readers, wrap / auto-skip anything in a menu tag. Make a special button that pops up (or reads) anything in a <menu>. Grr. The web could have been just a tiny bit better without that move by the W3C.

    --Robert
  • The HTTP protocol defines header comments, which are only valid on a few header fields. There are some incredible problems with them.

    1. Comments are recursive.
    2. Comments break the header continuation model used elsewhere for continued values.

    This means that an HTTP header parser must make semantic decisions about the header type it is working with in order to properly perform its lexical parsing. It might not seem like much, but it's a subtle stone bitch.
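    A sketch of why the recursion hurts: because comments nest, stripping them from a header value can't be done with a single regular expression; you need a depth counter. (Hypothetical helper for illustration, not taken from any real HTTP library.)

```python
def strip_comments(value: str) -> str:
    """Remove RFC 2616-style (possibly nested) comments from a header value."""
    out, depth = [], 0
    for ch in value:
        if ch == "(":
            depth += 1          # comments nest, so a boolean flag isn't enough
        elif ch == ")" and depth:
            depth -= 1
        elif depth == 0:
            out.append(ch)
    return "".join(out).strip()

# A nested comment inside a User-Agent value:
cleaned = strip_comments("Mozilla/5.0 (X11; Linux (x86_64)) Gecko")
```

    And this is only legal for the handful of fields that allow comments at all, which is the semantic decision the parser is forced to make first.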
  • When you compare the standard W3C DOM to the one implemented by IE (and partially by Mozilla), you will find that the standard is sorely lacking.

    Properties such as offset(Left|Top|Height|Width) to discover the rendered position of an element are non-existent. The ability to capture context menu events is non-existent. And don't even get me started on the event model.

    People may hate how IE co-opted everything, but their DOM APIs are one thing MS got *right* - the IE DOM API is far more flexible and powerful
    • The ability to capture context menu events is non-existent.

      And good thing, too.

      Context menus are part of my computer UI, not a part of your web site. Web sites should never be able to change client UI elements. This includes CSS styling of scrollbars (which is thankfully IE-only) and mouse pointers (which, unfortunately, isn't).

      Anything within the window area is yours. Anything outside of that, including controls that may appear to be inside the window (eg, scrollbars, cursors, menus, etc) is mine.
      • Nonsense.

        HTML is not just the WWW anymore. I need to capture context menu events in web applications I work on *all the time*. Thankfully, both Mozilla and Safari implement IE's contextmenu event, so the apps can be cross-platform in this respect.

        Try thinking out of the box - there are a lot more applications that run on the browser nowadays than websites. The browser is now a platform. The W3C is not adapting to that reality quickly enough.
        • In my experience, most web application developers could use a good course or two on usability and interface design. After that, you may not be so eager to interfere with the normal, expected behavior of your users' browsers.
  • SQL (Score:3, Interesting)

    by TheLink ( 130905 ) on Wednesday January 12, 2005 @01:18PM (#11336830) Journal
    Plenty of stupid stuff in SQL.

    Why a different format for update and insert?

    update table set field1=value1,field2=value2 where rowid=x

    vs

    insert into table (field1,field2) values (value1,value2).
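    The asymmetry is easy to feel in code; a minimal sqlite3 sketch (hypothetical table t, chosen just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, field1 TEXT, field2 TEXT)")

# INSERT separates the column list from the value list...
conn.execute("INSERT INTO t (field1, field2) VALUES (?, ?)", ("value1", "value2"))

# ...while UPDATE pairs each column with its value inline.
conn.execute("UPDATE t SET field1 = ?, field2 = ? WHERE id = ?", ("v1", "v2", 1))

row = conn.execute("SELECT field1, field2 FROM t WHERE id = 1").fetchone()
```

    Code that generates SQL for both statements ends up with two entirely different string-building paths for what is conceptually the same column-to-value mapping.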

    --
    I don't know about "worst" but could the SQL standard be partly to blame for why porting data from one DB to another is hard in most cases...

    e.g. not covering stuff that most people find useful or even vital? And thus letting Oracle etc each define their own ways of doing things.
  • SQL !!! (Score:2, Interesting)

    by Anonymous Coward
    Many years ago Edgar Codd presented a complete model for storing data: the relational model. It was complete and sound, which no other data model is. It is based on predicate logic (to give meaning to the data) and set theory. You can store any kind of data in a relational database.

    To implement the relational model you just have to implement a number of set operators and relational operators (project, join, etc), and you have to enforce arbitrary constraints on the data.

    Much like arithmetic (add, subtr

  • My goodness, what a mess! 'Nuff said.

  • Remember that Perl bit that returned the number of years since 1900? Back when everyone used Perl for Internet stuff, they could count on that being a two digit number. Seeing stuff on the net dated 1/12/105 - that slays me!
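    The same trap is reproducible anywhere a "years since 1900" field gets printed verbatim; a Python sketch of what those Perl scripts were doing:

```python
import time

# Perl's localtime() yields the year as an offset from 1900. Scripts that
# printed it raw showed "99" in 1999, then "100", "105", ... afterwards.
t = time.gmtime(1105488000)           # 2005-01-12 00:00:00 UTC
years_since_1900 = t.tm_year - 1900   # 105
buggy_date = "%d/%d/%d" % (t.tm_mon, t.tm_mday, years_since_1900)
```

    The field was documented correctly all along; the bug was treating an offset as a two-digit year, which only happened to work before 2000.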
  • At least that's the only time I've heard Linus suggest to go out shoot anyone :)

    Linus in Flame Mode [iu.edu]

  • Just curious what the Slashdot crowd thinks are the worst bugs ever to creep into a standard?

    Having survived the great Revision '4.0' Browser wars, I have to say the worst 'bugs' ever were the proprietary extensions that crept into Netscape and IE. It made it so difficult to design any sort of advanced page without all sorts of duplicate (albeit slightly different) code to satisfy both browsers.
  • No authentication or encryption in the 802.1Q standard. Y'all can thank Cisco for this fuckup. There were a number of pre-standard VLAN-like implementations back in the day. Most had authentication and encryption between 802.1Q switches and routers. Cisco's didn't, however. It didn't have jack. However, they put their pre-standard implementation on every single device they made and by default embraced a ground-up approach to the use of VLANs, their VLANs. Unfortunately Cisco's implementation became so
