
What's Wrong with Unix?

Posted by Cliff
from the defects-and-potential-solutions dept.
aaron240 asks: "When Google published the GLAT (Google Labs Aptitude Test) the Unix question was intriguing. They asked an open-ended question about what is wrong with Unix and how you might fix it. Rob Pike touched on the question in his Slashdot interview from October. What insightful answers did the rest of Slashdot give when they applied to work at Google? To repeat the actual question, 'What's broken with Unix? How would you fix it?'"
This discussion has been archived. No new comments can be posted.

  • by SIGALRM (784769) * on Tuesday December 28, 2004 @07:16PM (#11203936) Journal
    What's wrong with UNIX? Depends on which perspective you start...

    In my opinion, here are some headaches that have plagued a wary UNIX engineer or two:

    IEEE and Posix, X/Open, etc. provide a basis for standardizing UNIX interfaces, but adherence tends to be spotty

    Difficult to implement a microkernel architecture

    XPG3 aside, a de facto "common API" has never really been achieved

    In many cases, code scrutiny is difficult or impossible

    Progress and innovation tend to occur within the context of acquisitions (i.e. UnixWare)

    The COFF symbolic system is terrible (OK, I know it's deprecated, but still...)

    PIT initialization (time management)

    Kernel tuning (anyone fiddled with the /etc/conf/cf.d subdir on OS5?)

    These are just a few things, in my experience. That said, UNIX has had some great days.

    • Maybe I am just stupid, but it is kind of hard to install. I am a Unix newbie. I have downloaded FreeBSD and almost had it installed, but I have failed many times, so I keep going back to Linux or even Windows. If they can make the installers just a bit more user-friendly, like this [bsdinstaller.org], then I am all for it.
      • by AndyElf (23331) on Wednesday December 29, 2004 @03:49AM (#11207152) Homepage

        I dunno, maybe you're just trolling (and a number of replies that follow would qualify you as a good troll), but I'd say that installing FreeBSD is not any more difficult than, say, Slackware or Debian. It is more challenging than your Mandrake or RH install, I think (have not had a chance in the last 3-4 years to try either).

        That said, with enough preparation and a chapter from the Handbook [freebsd.org] printed out and within reach, installing stock FreeBSD should not be a problem at all.

        The question you should, however, ask yourself is Why do I want to try FreeBSD? If it is just because you've heard it's cool -- you may be much better off trying http://www.freesbie.org/ [freesbie.org] instead. It's a live FreeBSD system, sort of like Knoppix.

        If you want to give FreeBSD a spin because you want to understand UNIX-land better or have needs for the stability of the platform, then rough starts should not be anything to discourage you.

        In either case -- all the best and have fun!

    • KSpaceDuel (Score:5, Funny)

      by jkauzlar (596349) * on Tuesday December 28, 2004 @07:25PM (#11204016) Homepage
      Certainly this component of Linux needs to be rewritten. Firstly, it is far too difficult to maneuver your ship with the gravity the way it is, and secondly, the bullets go too slowly. Thirdly, it isn't intuitive what the different colored blobs are; it's easy to forget what is energy and what is a mine, or something like that.

      I would suggest to the KSpaceDuel team that they meet with the KAsteroids team to discuss usability issues. There should also be a cap on how fast you can go, since it is possible to speed up so fast that your spacecraft appears to be moving very slowly (sort of like a tire in motion).

    • by Anonymous Coward on Tuesday December 28, 2004 @07:28PM (#11204050)
      In addition:

      1. Crappy filesystem. Reiser4 or XFS is what UNIX should have started with, and even now we don't have file versioning.
      2. POSIX permissions suck. The suid bit sucks even more. ACLs make more sense, and UNIX should have had them from the start. If we're doing it now, capabilities would be even better.
      3. IPC primitives are poor. SysV shared memory goes some way to helping, and UNIX domain sockets are OK, but a proper message/event marshalling system would eclipse them all.
      4. The filesystem hierarchy is an awful mess. Non-standard across all unices and poorly evolved to cope with modern systems. /etc was a horrible cop-out and it shows. UNIX needs proper application packaging with proper self-contained application packages.
      5. Providing lots of little applications to do specific tasks was the best idea ever, but not providing a decent scripting language to bind them together was a bone-headed mistake. Likewise, not standardising some basic data-interchange formats (even if it was just pre-formatted ASCII) makes piping all those little tools together to do anything useful a pain.
      • by insert_username_here (844281) on Tuesday December 28, 2004 @09:13PM (#11205103)
        Mod parent up!!

        I've been happily using Linux on my home PC for about 4 years, but the filesystem layout has always been an annoyance.

        Without a package manager, it's practically impossible to remove a program; even with a package manager, you can't even determine how big a given package is! (if you know how to with Portage, I'd like to know). A better filesystem layout (perhaps the way MacOSX, GoboLinux or RoX does it) would make package managers obsolete.

        A lack of standard configuration layout is another thing: why should people have to learn hundreds of config file formats? Yes, comments help, but it'd be nice if they weren't needed. Why not come up with one standard text-based config format/filesystem layout and get everyone to use it? This would also save programming time, as you could create a library (with a name like libconfig or something similar) and not have to worry about parsing configuration settings. The Windows Registry Hell can be avoided by using a text-based format (e.g. like Java properties files or XML).

        A standard configuration layout (with suitable metadata) would also go a long way to allowing a standard graphical system configuration utility (Whatever happened to linuxconf? I loved that app!), making Unix/Linux that much more accessible to ordinary people.

        Replies, flames, etc.
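        The "libconfig" idea above is hypothetical, but a toy sketch (in shell, with made-up file contents and key names) shows how little machinery one shared key=value convention would need:

```shell
#!/bin/sh
# Toy illustration of the "one text-based config format" idea from the
# comment above: a single key=value convention plus one shared helper
# instead of hundreds of per-program parsers.  Names are invented.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
# comments are allowed
listen_port=8080
log_level=debug
EOF

# the one shared "library" routine every program could reuse
config_get() {  # usage: config_get FILE KEY
    sed -n "s/^$2=//p" "$1"
}

config_get "$conf" listen_port
```

        Whether the format is key=value, Java-style properties, or XML matters less than every program agreeing on one of them.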
      • by crmartin (98227) on Tuesday December 28, 2004 @11:24PM (#11205929)
        It's clearly time for my periodic "you young pups don't know your history" posts.

        1. Reiserfs etc. are the results of 30 years of research that, well, hadn't happened 30 years ago. The i-node/u-node business was the best there was. Then.
        2. Multics had general, configurable, role-based, magic ACLs; UNIX lost them on purpose because they weren't well suited to a big games system and word-processor, which is what UNIX was meant for originally.
        3. When I was a kid we hardly HAD processes, much less IPC. Having named pipes was a helluvan innovation.
        4. That's not the operating system, that's book-keeping.
        5. /bin/sh WAS the coolest scripting language ever. They've gotten better. Text files with field separators (that's all passwd(5) is, after all) were the uniform data representation.

        If you were to go back to System 3 UNIX, you'd have most everything you're asking for here. It wouldn't be as powerful, but it'd be uniform.
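        The "text files with field separators" convention he describes is still how passwd(5) works; a quick sketch with made-up sample data:

```shell
#!/bin/sh
# passwd(5)-style uniform data representation: one record per line,
# fields separated by colons, consumed by the standard little tools.
# The sample entries below are fabricated.
set -e
pw=$(mktemp)
cat > "$pw" <<'EOF'
root:x:0:0:root:/root:/bin/sh
ken:x:1000:1000:Ken Thompson:/home/ken:/bin/sh
EOF

# extract the login and shell fields (1 and 7) the traditional way
cut -d: -f1,7 "$pw"
```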
      • by X (1235) <x@xman.org> on Tuesday December 28, 2004 @11:45PM (#11206055) Homepage Journal
        Responding point by point:
        1. It's easy in hindsight to critique filesystems. While the original SysV filesystem was pretty bad, the Berkeley Fast Filesystem was already pretty good for its time. The simplicity of the Unix filesystem has actually been a key aspect of Unix's success. Even on platforms with more complex filesystem APIs, you don't see much in the way of applications taking advantage of them.
        2. POSIX ACLs have been around for a long time at this point. The relatively pathetic rate at which they've been adopted and taken advantage of should be a clue to their shortcomings. Several security experts have pointed out that while ACLs are great on paper, in reality they increase the complexity of the security model, which in practice is more of a liability.
        3. SysV has message queues for IPC. Everything you could want and... not a lot of people use them. ;-) ONC RPC also provides a pretty decent message/event marshalling mechanism, and you don't see a lot of new apps being written to use that either. Think about why. I would say, though, that it'd be nice if there were a better standard model for kernel events beyond signals.
        4. I honestly still find advantages to the traditional Unix FHS, particularly for administrators. It certainly beats the crazy structures on Windows or OS X. End users increasingly care less about where program files are located on their system, so this seems like the wrong area to work on things.
        5. Unix did include decent scripting languages, and more importantly provided for additional ones to be added to the system (witness the rise of ksh, perl, python, etc.). If there had been any kind of data-interchange format that was remotely useful, it surely would have dictated how Unix tools work with those data formats. Unfortunately, there weren't (and aren't). Consequently, dictating standards wouldn't have solved the problem you're describing, as you'd still have to translate non-standard formats into the standard formats and back again. Letting tools like sed/awk/perl evolve to solve the problem seems like a far more practical approach if you ask me.
    • Here's a start: (Score:5, Interesting)

      by Slack3r78 (596506) on Tuesday December 28, 2004 @07:39PM (#11204155) Homepage
      The Unix Hater's Handbook [microsoft.com]

      Yes, the link is hosted on MS servers, but before you ignore it for that, at least notice that the foreword is by Dennis Ritchie and it was contributed to primarily by Unix geeks. It's about 10 years old, but large portions of it are still relevant today.
      • Re:Here's a start: (Score:5, Insightful)

        by Zeinfeld (263942) on Tuesday December 28, 2004 @08:22PM (#11204663) Homepage
        Yes, the link is hosted on MS servers, but before you ignore it for that, at least notice that the foreword is by Dennis Ritchie and it was contributed to primarily by Unix geeks. It's about 10 years old, but large portions of it are still relevant today.

        I think most of us on the Unix Haters list were Lisp machine or VMS hackers who were pretty upset that a piece of utter crap was winning the O/S standards wars at the time.

        The foreword by Dennis is actually an anti-foreword, more of a backward. At the time he was working on Plan 9, which takes all the best ideas from UNIX and junks them, leaving only the unrefined crud that is best ignored.

        The book is somewhat uneven in its criticisms; I don't think the gripes about X-Windows hit the mark as well as the explanations of the file system lossage.

        Ultimately the problem with Unix is that it is built the way that cars used to be built before Henry Ford: it's a computer O/S for folk who like to spend their time tinkering with their system and like endless opportunities for low-grade intellectual stimulation, because that's an end in itself for them.

        Unix still has the same major architectural deficiencies. The inter-process communication is not up to much, the concurrency model is weak, the user interface is erratic and there is no consistency. Documentation is a complete joke.

        • Re:Here's a start: (Score:5, Insightful)

          by Taladar (717494) on Tuesday December 28, 2004 @10:09PM (#11205478)
          Documentation is a complete joke.
          So which do you prefer? Unix Man Pages that contain all there is to know about a certain app in a not quite end user refined form or Windows Assistants ("Did you plug in the Cable?" - "Yes" - "Then I can't help you - call your vendor") and cryptic error codes?
          • Re:Here's a start: (Score:5, Insightful)

            by spectecjr (31235) on Tuesday December 28, 2004 @11:13PM (#11205847) Homepage
            So which do you prefer? Unix Man Pages that contain all there is to know about a certain app in a not quite end user refined form or Windows Assistants ("Did you plug in the Cable?" - "Yes" - "Then I can't help you - call your vendor") and cryptic error codes?

            I prefer MSDN [microsoft.com]. Call me when Unix has something that even approaches the ease of use and the amount of readable samples, explanations etc. of key APIs.

            And no, the System V paper manuals don't count.
      • Amen! (Score:5, Interesting)

        by pVoid (607584) on Tuesday December 28, 2004 @10:41PM (#11205669)
        The File System. Sure It Corrupts Your Files, But Look How Fast It Is!

        That's probably the single biggest problem I see with *nix machines. Lazy filesystems have always reminded me of experimental planes developed by the cold-war military to up the world speed record. Planes which would basically self-destruct if they, god forbid, hit a pothole while taxiing out of the hangar. RAID is obviously not a solution, and I find that backups - while essential for mission-critical applications - should not be used as an excuse for making a file system that is as brittle as this.

        As a broader comment, I just find that UNIX is a brittle OS. Before every zealot jumps on this statement I should clear up what I mean: the OS components are extremely lean, they do exactly what they're meant to do, but there's absolutely no inherent 'immune system' in the OS. su can go ahead and unlink the root node, a power failure sends the file system to hell, and there isn't any cohesive way to manage machine state. Every daemon runs on its own little planet, unaware of everything else.

        The article the other day on /. about Sun's attempts at self-healing software actually addresses parts of this. And other really cool apps like tripwire address other points too. But in general, the OS itself is completely stripped of an immune system.

        When Microsoft first introduced the Windows File Protection service, I was really pissed off that they did something which should have been done via proper security measures (which common users were short-circuiting by running as admin). But the more I face the idea, the more I realize that it's not a bad idea after all: proper code signing, system-level integrity checks, basically a path towards actual 'self-healing systems'.

        In general though, everyone has a long way to go still...

    • Why the microkernel part? Although I don't have much experience in this, I'd say it should be quite simple, relatively speaking. Look at Linux, most of the kernel tree is drivers. The kernel itself is pretty small.

      From reading stuff and watching discussions what I got is that the problem with microkernels is that they're hard to properly implement and still have fairly bad performance. In fact, I hit those same problems when trying to code an extremely modular application, that I tried to write as an exper
  • by raile (610069) on Tuesday December 28, 2004 @07:17PM (#11203942)
    I'm used to reading my system text as a white font on a blue background.
  • OS X (Score:5, Insightful)

    by BWJones (18351) * on Tuesday December 28, 2004 @07:17PM (#11203943) Homepage Journal
    Based upon my experience with IRIX and Solaris (with some Linux), I would have to say that most of the things that *NIX did poorly have been rectified with OS X. I would have said OS X was still lacking true 64bitness, but that is coming in 10.4 rather quickly now. The numbers of Macs involved in secure and classified work in the Federal government have been exploding and high bandwidth networking options for cluster computing have also been resolved with options such as Infiniband. Development issues have been streamlined with rather nice tools from Apple itself obtained via NeXT. Open standards are being embraced just about everywhere you turn in OS X, a true plug and play environment now exists (I am reminded of the last video card install on my SGI O2 which had me down for two days solid), the GUI is consistent and the CLI is present and fully integrated with the GUI as well. Additionally, more and more networking options are being supported natively within OS X which is one of the last hurdles to true interconnectivity cross platform. And the G5! Oh, the G5 is a wonderful bit of hardware with which to run *NIX on.

    Problems that remain are being able to create one seamless environment with shared memory and such, but the rest of the *NIX world is still having those problems as well.

    You can argue about the specifics and details of many things, but in terms of a UNIX workstation, OS X pretty much has it all for our needs.

    • Re:OS X (Score:5, Interesting)

      by ducomputergeek (595742) on Tuesday December 28, 2004 @07:41PM (#11204181)
      I generally have to agree. I had used Solaris, Linux, FreeBSD, and OpenBSD systems before switching to OS X about 2.5 years ago. Granted, I'm still running on my G3 iBook so the great power of the G5 chips is of little consequence, but I've been developing *nix web systems for 2 years now on Mac.

      That, coupled with the ability to stay connected to the rest of the business world via MS Office for Mac and Adobe tools, along with fine open-source apps such as Blender and Apple-only software like Final Cut Pro, has been great.

      What has happened to Unix is that Apple has developed the better *nix desktop system, which, coupled with the new G5's, has been the final death knell in SGI's coffin and put the hurt on Sun. Back in the days at McDonnell Douglas (now Boeing), much of the engineering development was done on extremely expensive Sun workstations that could easily run $20k apiece. Today, a lot of development and code is being written on $3000 - $4000 PowerMac G5's.

      While Apple remains expensive for many consumer users, in engineering and scientific fields, the PowerMacs with OSX are extremely inexpensive. Many of my friends in scientific fields have flocked to Macs with OS X in the past three years.

    • Re:OS X (Score:5, Funny)

      by hey (83763) on Tuesday December 28, 2004 @07:42PM (#11204203) Journal
      Lots of people agree that OS X is the best Unix going. So now we Linux fans have something to copy. Let's get started.
      • Re:OS X (Score:5, Insightful)

        by edesjard (588174) on Tuesday December 28, 2004 @10:29PM (#11205609)
        This is actually a really good point. My biggest complaint about Linux has always been that it constantly tries to copy WINDOWS, which I find totally disgusting and is why I love my Mac. I keep hearing that everyone wants OS X on x86 hardware. Why hasn't Linux, which appears to be floundering aimlessly, focused its efforts on being more like OS X than Windows? Isn't that what will REALLY motivate people to give Linux a try?
    • Re:OS X (Score:3, Interesting)

      by killjoe (766577)
      While Apple has done a great job for the casual desktop user, Mac OS X has a long way to go before it can be considered a reliable server platform.

      Hopefully they will have a decent port/package system for Tiger, hopefully not every update will require a reboot, hopefully updates will not require agreeing to EULAs, and hopefully their GUI helpers will not clobber your carefully crafted conf files.

      I keep hoping anyway. Till then I have chosen to go back to FreeBSD for my server needs. The Xserves are now for Java
    • Re:OS X (Score:3, Interesting)

      by winkydink (650484) *
      The numbers of Macs involved in secure and classified work in the Federal government have been exploding

      Exploding? Do you have a citation somewhere?

      Remember, a huge percentage increase off a small installed base is still a small installed base. I.e., if you start with 1 computer, a 10000% increase is adding 100 machines.

  • needs some VMS stuff (Score:5, Interesting)

    by nocomment (239368) on Tuesday December 28, 2004 @07:17PM (#11203947) Homepage Journal
    I like Unix, but I think I'd add some VMS stuff. Like a Delete attribute. In VMS you can set people to have read/write/execute and delete. In Unix, if people have write, they can write it to "null" *grumble*.
    • Huh? Unless I'm misunderstanding my UNIX permission semantics (which is likely the case), if I can write to the file, I can destroy all of its contents. However, the file still exists.

      If I have write permission to the directory, then I can actually call "unlink" (UNIX system call which will delete a file).

      Lacking write permission to the directory, I can't delete a file (or create a file). If I have permission to write to the file, I can destroy its contents, but I can't stop the file from existing.
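      A scratch-directory sketch of these semantics (all paths are illustrative; note that root bypasses the permission checks, so run it as a normal user to see the denial):

```shell
#!/bin/sh
# Demonstrate: write permission on a FILE lets you destroy its
# contents, but only write permission on the DIRECTORY lets you
# unlink the file itself.  (root bypasses both checks.)
set -e
tmp=$(mktemp -d)
mkdir "$tmp/dir"
echo "secret" > "$tmp/dir/file"
chmod 0666 "$tmp/dir/file"   # anyone may write the file's contents
chmod 0555 "$tmp/dir"        # but the directory is not writable

: > "$tmp/dir/file"          # truncation succeeds: file write bit
rm -f "$tmp/dir/file" 2>/dev/null \
    && echo "unlinked" || echo "unlink denied"

chmod 0755 "$tmp/dir"        # restore directory write permission
rm -f "$tmp/dir/file"        # now the unlink goes through
rmdir "$tmp/dir" "$tmp"
```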

  • by ShortSpecialBus (236232) on Tuesday December 28, 2004 @07:18PM (#11203957) Homepage
    The first thing to change should be how programs get installed.

    EVERYTHING right now goes in /usr, without a directory, because everybody is too lazy to have /usr/foo/bin and /usr/foo/lib in their respective environment variables, because it's too much of a "pain" to put them in there on software installation, and it makes library linking more difficult.

    Right now, if I want to uninstall a program, I have to remove it from about 10 different places, many of which aren't obvious (/etc, /usr/lib, /usr/bin, /usr/share, et al.) and there's no good way to do it.

    Find a way (maybe symlinks /usr/lib/foo.so -> /usr/local/foo/lib/foo.so, maybe something else, I don't care) to make it so program installation/uninstallation makes more sense.
    • by Anonymous Coward
      Speak for yourself - for years, I've installed packages in "/usr/local/packages".
      Package "foo", version "N" goes in "/usr/local/packages/foo-N".
      The current version of "foo" has a symlink to it from "/usr/local/packages/foo".
      "/usr/local/bin" contains symlinks to the appropriate files in "/usr/local/packages/*/bin"
      Upgrades (and downgrades) are trivial.
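      A minimal sketch of this layout, rooted in a scratch directory instead of /usr/local so it runs unprivileged (package name and version are illustrative):

```shell
#!/bin/sh
# Versioned package directories plus a "current" symlink, as in the
# comment above.  Upgrading (or downgrading) = repointing one symlink.
set -e
root=$(mktemp -d)            # stands in for /usr/local
mkdir -p "$root/packages/foo-1.2/bin" "$root/bin"
printf '#!/bin/sh\necho "foo 1.2"\n' > "$root/packages/foo-1.2/bin/foo"
chmod +x "$root/packages/foo-1.2/bin/foo"

ln -s "foo-1.2" "$root/packages/foo"            # current version
ln -s "../packages/foo/bin/foo" "$root/bin/foo" # published binary

"$root/bin/foo"              # resolves through both symlinks
```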

    • by Phleg (523632) <stephen AT touset DOT org> on Tuesday December 28, 2004 @07:32PM (#11204087)

      Somebody find this man a package manager.

      • by Epistax (544591) <epistaxNO@SPAMgmail.com> on Tuesday December 28, 2004 @07:53PM (#11204339) Journal
        Right, because everything's available in a single repo.

        or...

        On my Fedora box I have rpms made for Red Hat, rpms made not for Red Hat (go figure), source installs with configuration scripts, source installs with instructions, source installs with nothing whatsoever, programs with install scripts that install to the directory tree how they see fit, programs with install scripts that install nowhere (./), and python sources that just sit there coming straight out of a tar. Meanwhile I have nethack sitting around in /usr/games/nethack with "libraries" in /usr/games/lib (note that nethack is the only game to use these folders). I guess I'm just lucky I haven't had to figure out how to use a Debian (.deb) install file yet.
        • If you build anything from source on any RPM based box, CREATE AN RPM FOR THAT INSTALL. It is VERY easy to do if you take 30 minutes to figure out how to write a spec file.

          Then ALL the files in that installer are referenced in the final RPM, and to remove all the stuff you simply remove it using the rpm tool.

          On debian it's much the same but with a deb instead of an rpm.

          To install a deb file, just do "dpkg -i filename". Seems a lot like rpm, doesn't it.
    • I second that; the world could be so much easier already if we just had wildcard support in PATH and other environment variables.

      PATH=$PATH:/opt/*/bin/

      or something along the lines could make life much easier.

      The trouble is really that the current file hierarchy was designed to only contain a basic unix system (ls, rm, libc, etc.), not a full-blown multimedia desktop, which is what most Linux systems are today.
      Stuff like the FHS doesn't even try to fix the mess, they just standardize it. Most likely we will be stuck with the
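      The shell never glob-expands PATH entries at lookup time, but the wildcard can be expanded once when PATH is built; a sketch using a scratch stand-in for /opt:

```shell
#!/bin/sh
# PATH itself holds literal directories, so expand /opt/*/bin at
# assignment time instead.  Directory names here are illustrative.
set -e
opt=$(mktemp -d)                     # stand-in for /opt
mkdir -p "$opt/foo/bin" "$opt/bar/bin"
printf '#!/bin/sh\necho "hi from bar"\n' > "$opt/bar/bin/hello"
chmod +x "$opt/bar/bin/hello"

for dir in "$opt"/*/bin; do          # the glob expands here, once
    PATH="$PATH:$dir"
done
export PATH

hello                                # found via the expanded entry
```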
      • by diegocgteleline.es (653730) on Tuesday December 28, 2004 @08:43PM (#11204857)
        Or what Plan 9 does: just kill the $PATH variable.

        In Plan 9 you don't have a "$PATH variable"; instead you have several directories (/whatever/arch-dependent-bin, /whatever/arch-independent-bin, ~/my-own-bin) and you just "join" them into a single directory: /bin. (In Plan 9 every process can configure its filesystem namespace as it wants, and normal users are allowed to do things like that.)
    • by plover (150551) * on Tuesday December 28, 2004 @07:35PM (#11204125) Homepage Journal
      Are you suggesting that an installation process more like Windows Installers would leave easier-to-clean-up code? Because if so, I've got this real nice bridge to sell you.

      The problem I have with an "installer" system is that immediately developers will extend it to do things it shouldn't be doing. "Hey, you know, when we install this program we should have it send gmail invites to six people, FTP a pretty picture of a llama while we construct suitable advertising panels, and create three new users with the authority to start, stop and pause the data subsystem."

      Other than the llama thing, people have done all that crap and more with Windows installation tools. They blindly overwrite shared system files (leading to DLL hell), they muck up the registry, they install hundreds of class IDs for internal-use-only COM interfaces, plop in unrelated browser helper objects, add random directories to the front of the system path, launch odd services that do god-knows-what, wedge in a startup task or two and then demand you reboot your system.

      It's taken Microsoft many years to realize they couldn't control the installers, and so with XP they changed the OS to try to defend itself from renegade installations. It would be extremely sad to see a UNIX equivalent.

      • by Slack3r78 (596506) on Tuesday December 28, 2004 @08:14PM (#11204585) Homepage
        It doesn't even have to be an installer. Ever used OS X? To install software on X, you simply drag a .APP container file into your Applications directory. To uninstall, you drag it to the trash.

        How is this not better than the current Unix way of doing things?
      • by Herr_Nightingale (556106) on Tuesday December 28, 2004 @11:06PM (#11205807) Homepage
        For the last 5 years Windows 2000 has had WFP to protect against core files being overwritten. I've not had a single "DLL hell" experience yet in ~15 years of using Windows.
        However, I've installed Firefox on ten different distros (probably more now) and never once seen an icon for it appear automatically in my GNOME menu. Why is this so broken? APT, Synaptic, RPM, yum, etc. are all basically broken from my point of view, but we put up with them because it's worth the fuss. Millions of computer users can't even find a new icon on the DESKTOP, much less dink around with non-standard filesystem hierarchies (which distro do you use?) and symlinks.
        Pet peeve of the day (which happens to be relevant to this thread): Windows downloads are only a fraction of the size of equivalent Linux apps. Try OO.o, Firefox, etc. My Xandros 3 install had to download 40MB (using the lovely APT), which doesn't compare well to a 4MB download for Windows.
        Seriously, you should look into using something more current than Windows 3.11.

        To compare apples to apples:
        OO.o: [openoffice.org]
        Windows - 45MB
        Linux - 77MB

        Firefox (with installer):
        Windows - 4803KB [mozilla.org]
        Linux - 8422 KB [mozilla.org]

        Thunderbird:
        Windows - 5877 KB [mozilla.org]
        Linux - 10113 KB [mozilla.org]

        I've heard enough about bloody shared libraries that evidently NEVER get shared; instead I end up with five different incompatible versions of glibc/GTK/whatever, and it's also annoying to wait while APT downloads an EXTRA 300% of the listed download size. If making *NIX installers like Windows means that I'll have all the advantages and all of the downfalls, then I'll take it, thank you very much. It's a great deal better than what we've got now.
    • by umrk (99195) on Tuesday December 28, 2004 @07:39PM (#11204164)
      ./configure --prefix=/usr/local/stow/foo-1.2
      make
      sudo make install
      sudo stow /usr/local/stow/foo-1.2
      Done.
    • You might be interested in Stow [gnu.org]. The idea is to install each program in its own directory then create symlinks in /usr/(local/)?(bin|lib|share|whatever)/. When you want to remove a package, just remove its directory then remove broken symlinks.
    • by mandos (8379) on Tuesday December 28, 2004 @07:44PM (#11204232) Homepage
      This is basically what Gobo Linux is trying to accomplish. From their FAQ:


      • GoboLinux is a Linux distribution that breaks with the historical Unix directory hierarchy. Basically, this means that there are no directories such as /usr and /etc. The main idea of the alternative hierarchy is to store all files belonging to an application in its own separate subtree; therefore we have directories such as /Programs/GCC/2.95.3/lib.

        To allow the system to find these files, they are logically grouped in directories such as /System/Links/Executables, which, you guessed it, contains symbolic links to all executable files inside the Programs hierarchy.

        To maintain backwards compatibility with traditional Unix/Linux apps, there are symbolic links that mimic the Unix tree, such as "/usr/bin -> /System/Links/Executables", and "/sbin -> /System/Links/Executables" (this example shows that arbitrary differentiations between files of the same category were also removed).

        www.gobolinux.org
    • The correct solution to this problem was investigated in Plan 9. Plan 9 has a great many features that one could discuss, but the relevant one to this discussion is its treatment of filesystems. In Plan 9 EVERYTHING is a filesystem. Probably as a result of this, there are a lot more things that you can do with filesystems. In particular, you can mount multiple filesystems at the SAME location in the filesystem and view the contents of both filesystems. (in case you're worried, the behavior for writing to s
    • by mce (509) on Tuesday December 28, 2004 @09:04PM (#11205041) Homepage Journal
      Before you make statements about how things are installed on UNIX, you should understand that what you seem to know is a personal Linux box on which you can do everything you please and whose package management you don't understand. That is not the UNIX way.

      In the true UNIX world, application software has always been such that it can be installed stand-alone underneath ONE directory, quite simply because in the true UNIX world not every (other) user has root powers, and the people who do have them understand that they don't want to mix shared application files with local OS files the way toy OSes such as Windows and (sadly) some Linux distros do.

      Where I work, we install everything in networked directories called /our-company-name/software/package-name/version. Then we wrap everything in shell scripts that automatically select the correct platform (HP-UX, Solaris, Linux) on the fly and automatically set every single environment variable the software needs. Then we add links to make a specific package version current and publish the key binaries of packages that many people use through one common bin directory. Not a single file needs to be stored and/or managed locally (crucial, considering the number of machines involved).

      And now comes the best part: I (yes, I developed the setup and do most of the maintenance) do not even need root powers for anything.
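      A minimal sketch of that wrapper pattern (the package name and all paths are hypothetical; a real wrapper would exec the binary rather than echo its path):

```shell
#!/bin/sh
# Hypothetical per-package wrapper: resolve the platform at run time,
# set the package's environment, then hand off to the real binary.
pkg_root=/ourco/software/somepkg/1.2      # hypothetical networked prefix
platform=$(uname -s)                      # HP-UX, SunOS, Linux, ...
PATH="$pkg_root/$platform/bin:$PATH"
LD_LIBRARY_PATH="$pkg_root/$platform/lib"
export PATH LD_LIBRARY_PATH
# exec "$pkg_root/$platform/bin/somepkg" "$@"   # the real wrapper ends here
echo "resolved: $pkg_root/$platform/bin/somepkg"
```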

  • configuration (Score:5, Interesting)

    by meshko (413657) on Tuesday December 28, 2004 @07:20PM (#11203968) Homepage
    I think the biggest problem with Unix is the lack of a standardized way of doing certain things, in particular program configuration. Even simple programs that require very simple configuration store it in random places and formats. Not to mention things that require some serious config files, like sendmail, apache or X. Creating a cross-platform, powerful configuration language would help.
    • Re:configuration (Score:3, Interesting)

      by Mr.Ned (79679)
      "Even simple programs that require very simple configuration store it in random places and formats."

      I disagree. Most programs I encounter have systemwide configuration files and per-user configuration files. The systemwide ones live in /etc, and the per-user ones live as dotfiles (or dotdirectories) in the user's home directory. There's nothing random about that.

      "Not to mention things that require some serious config files, like sendmail, apache or X. Creating a cross-platform powerful configuration l
    • Re:configuration (Score:5, Interesting)

      by killjoe (766577) on Tuesday December 28, 2004 @07:57PM (#11204392)
      Ideally all config files would follow the same format and syntax (god no please don't say XML).

      Ideally there would be a uniform way for programs to retrieve configuration information from a centralized location.

      Ideally local users and machines would be able to merge their prefs and config with the master to override certain prefs.

      Ideally the hierarchy of administrators would be able to prevent entities under them from overriding certain configuration options.

      Ideally all of that could be done with plain text files which are automatically checked into a version control repository so you can roll back any change in a jiffy.
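      The version-controlled-config idea can be tried in a throwaway directory (git is used here purely as an example of "a version control repository"; the file and key names are made up):

```shell
# Keep a config tree under version control so any edit can be rolled back.
cfg=$(mktemp -d)
cd "$cfg"
git init -q
echo "loglevel=info" > daemon.conf
git -c user.name=admin -c user.email=admin@localhost add daemon.conf
git -c user.name=admin -c user.email=admin@localhost commit -qm "baseline"
echo "loglevel=debug" > daemon.conf       # a change we regret
git checkout -q -- daemon.conf            # roll back "in a jiffy"
cat daemon.conf                           # -> loglevel=info
```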
      • Like elektra? (Score:4, Informative)

        by haeger (85819) on Tuesday December 28, 2004 @08:12PM (#11204558)

        Ideally all config files would follow the same format and syntax (god no please don't say XML).
        Ideally there would be a uniform way for programs to retrieve configuration information from a centralized location.
        Ideally local users and machines would be able to merge their prefs and config with the master to override certain prefs.
        Ideally the hierarchy of administrators would be able to prevent entities under them from overriding certain configuration options.
        Ideally all of that could be done with plain text files which are automatically checked into a version control repository so you can roll back any change in a jiffy.


        There was a project on sourceforge that addresses some of the points you raise. Originally it was called "Linux-registry" I believe; now it's called Elektra [sourceforge.net].
        I don't know how far they've come or anything about the project, but it looks like something that You'd want to have a look at.

        .haeger

        • Re:Like elektra? (Score:4, Informative)

          by Technonotice_Dom (686940) on Tuesday December 28, 2004 @08:49PM (#11204903)
          Yep, there was a mention on LWN.net recently when they "Elektrafied" X.org. It uses the filesystem for config storage, depends on only a couple of libraries (i.e. not a whole load of XML stuff) and is, in essence, very simple. With revision control systems, you could roll back changes easily. From memory, it creates a file for each setting and stores the value for the setting inside it, using directories for the config layout.
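          The one-file-per-key layout described above is easy to picture (the directory names here are made up for illustration, not Elektra's actual namespace):

```shell
# One directory per config section, one small file per key.
base=$(mktemp -d)
mkdir -p "$base/system/sw/xorg/screen0"
echo "1024x768" > "$base/system/sw/xorg/screen0/resolution"
cat "$base/system/sw/xorg/screen0/resolution"   # -> 1024x768
```

          Plain files like these diff, merge, and check into version control for free, which is the point the grandparent was making.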
  • my answer (Score:3, Funny)

    by ubiquitin (28396) * on Tuesday December 28, 2004 @07:20PM (#11203974) Homepage Journal
    Q. What's wrong with Unix?
    A. All those slashes and dots.

    Q. How you would fix it?
    A. um, slashdot

    Of course!
  • In a word... (Score:3, Insightful)

    by rongage (237813) on Tuesday December 28, 2004 @07:22PM (#11203986)

    Printing - more specifically, Postscript Printing.

    This silliness of having to generate PostScript so Ghostscript can generate PCL so you can print is just wrong - empty-brained, someone-forgot-to-wake-up wrong.

    PCL is available on every major printer on the market today - it IS the standard. PostScript is a has-been. Dump it today.

    That is what is wrong with *nix and what I would do to fix it is require all software to support PCL printing directly.

    • Re:In a word... (Score:4, Informative)

      by Libor Vanek (248963) <libor.vanek@gma[ ]com ['il.' in gap]> on Tuesday December 28, 2004 @07:44PM (#11204226) Homepage
      No PCL!!!! Reasons:

      - PS you can very easily convert to PDF - none for PCL!
      - there are tons of tools which enable "4 pages in 1", accounting quotas etc. etc. - none for PCL!
      - try to display PCL file
      - WHICH PCL? PCL5? PCL3?...

      There is simply NO reason to give up PostScript - tell me one single argument (except a VERY slight speed-up) that outweighs the lost flexibility and the need to rewrite all existing tools (CUPS, print drivers etc.)
    • Re:In a word... (Score:5, Informative)

      by Tackhead (54550) on Tuesday December 28, 2004 @07:48PM (#11204274)
      > This sillyness of having to generate postscript so Ghostscript can generate PCL so you can print is just wrong - empty brained, someone forgot to wake up wrong.
      >
      >PCL is available on every major printer on the market today - it IS the standard. PostScript is a has-been. Dump it today.

      Huh? I think you've got that backwards.

      PCL requires that most of the "brains" exist on the "computer" side of the "computer/printer" connection. A PCL printer needs less "brains" than a Postscript printer because all the processing is done on the "computer" side of the connection.

      Not to put too fine a point on it, but a PCL printer is to a Postscript printer what a Winmodem is to a hardware modem.

      For printers, the PCL tradeoff made a lot of sense when embedded CPUs were (extremely) limited in computational power compared with desktop CPUs. Rather than have your $1500 486-33 sitting idle as it dumps a pile of Postscript code to another $1000 68020 in the printer, I'll use my $1500 desktop CPU to turn my document into PCL that can be parsed by the $1.99 Z80 or whatever's in my $100 PCL printer.

      Now that your $25 disposable cell phone has a 200 MHz core, that tradeoff is no longer a requirement. Embedded systems smart enough to interpret and run Postscript code are no more (and no less) expensive than those capable only of PCL.

      Methinks you've got the PCL/Postscript design tradeoff backwards.

  • by Ars-Fartsica (166957) on Tuesday December 28, 2004 @07:22PM (#11203994)
    Does unix enable people to build clusters, serve multimedia content, create sustainable high-throughput networks etc etc? Yes. Most implementations also provide for these true modern computing environments reliably and cheaply. What else do you want an OS to do? If an OS can reliably enable the modern application layer, to me it has satisfied the criteria of an OS.

    While I agree that the core OS has not moved much in decades, I also see very little motivation for this as much of the required functionality has moved up the stack to the application layer.

  • by andrewzx1 (832134) on Tuesday December 28, 2004 @07:23PM (#11203996) Homepage Journal
    If you read the motivations behind writing Plan9 (documented on slashdot previously), there are many descriptions of what the authors thought was wrong with UNIX. And the guys who wrote Plan9 are the same guys who wrote the better part of UNIX. And for you youngsters, UNIX is not LINUX. - AndrewZ
  • cynical view (Score:5, Insightful)

    by Keitopsis (766128) on Tuesday December 28, 2004 @07:24PM (#11204003) Journal
    Problem:
    Unix is great!, unless:
    - You just want a plug and pray answer
    - You just want a word processor
    - You just want ......

    If someone is only looking for a single application, it is hard to shove such a versatile system down their throat.

    Solution:
    Create a truly modular UNIX/OS that does not depend on any single environment (init/SysV). Make a pluggable API-level interface that you can plug anything from a single application to a complete system environment into. Then get someone to develop EXACTLY what you want.

    Idiotware without the bloat.

    Laughing all the way,

    -- Kei
    • Re:cynical view (Score:3, Interesting)

      by pclminion (145572)
      If someone is only looking for a single application, it is hard to shove such a versatile system down their throat.

      And yet Linux is becoming an increasingly common choice for all sorts of embedded, special-purpose devices.

      A lot of people don't really understand what UNIX is. At its heart, it is just a philosophy, not a system. A way of thinking about and solving problems which has remained relevant and useful for decades. All real-world UNIX systems have lots of crap bolted on, out of necessity, but th

    • Re:cynical view (Score:3, Insightful)

      by OrangeTide (124937)
      Who just wants a word processor? Those word-processor "appliances" were never very popular. I don't think anyone even makes them anymore.

      I've never met a computer that was really "plug and play". They always seem to have issues, at least for me. About the only thing that worked right away was my microwave. Even new cars don't seem to work perfectly from the start. We all might want something that you plug it in and it works, but the popularity of cheap digital cameras that are notoriously unreliable seems
  • Has to be said (Score:3, Insightful)

    by aendeuryu (844048) on Tuesday December 28, 2004 @07:24PM (#11204004)
    One big thing that's wrong with Unix is SCO.
  • Easy! (Score:5, Insightful)

    by Telastyn (206146) on Tuesday December 28, 2004 @07:26PM (#11204023)
    Lack of coherent newbie documentation.

    Sure, man pages exist, but even once you learn that man does what help really should, the man pages are generally written by programmers, for programmers.

    Newbie guides generally don't get any further than a small command summary, which doesn't really show any strengths of unix over using a gui [or windows!]

    The best thing I think would be to provide more "whole system" examples/help rather than help for each individual command. Take some nice simple topics [how to add many users, how to determine network utilization programmatically, how to determine open ports and what process is using them...] which are painful to do on Windows and use a variety of unix tools to solve them.
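    As a taste of that "whole system" style of example, here is one way to answer "which process owns which open port?" - the tool names (ss, netstat, lsof) are assumptions about what a given system has installed:

```shell
# Show listening TCP sockets; which tool exists varies by system.
if command -v ss >/dev/null 2>&1; then
    ss -tln                        # Linux; add -p for owning process (needs root)
elif command -v netstat >/dev/null 2>&1; then
    netstat -an | grep -i listen || echo "no listeners"
else
    echo "no ss/netstat here; the classic answer is: lsof -iTCP -sTCP:LISTEN"
fi
```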

  • Unix is too powerful (Score:3, Informative)

    by Anonymous Coward on Tuesday December 28, 2004 @07:27PM (#11204029)
    I know it sounds silly, but it's like asking a consumer to operate a Bradley armoured fighting vehicle: it wasn't built for consumer use, it's got hundreds of knobs and options and configurations, and if you don't get it set up right the first time it is a tremendous headache to fix. Consumers want a gas pedal and a brake; windshield wipers are fine, but when you put on a .50cal machine gun mount, even if it's "turned off", it scares people away.

    It's a canonical example of something that tries to be everything to everybody, but ends up being too hard for anyone to use.
  • Not everyone's running it.

    Laugh.

    It's a joke.

  • The C language (Score:5, Insightful)

    by lazy_arabica (750133) on Tuesday December 28, 2004 @07:32PM (#11204088) Homepage
    Yeah, I know that most *nix lovers simply love it. But let's face it: this language, which is still the most important one in a unix environment, is really aging. It is possible to develop big software in pure C, but it takes much, much more time, and the risk of introducing bugs and security flaws is huge. Only the minimal low-level core of the system should be based on C; the rest should be developed in a modern, high-level language.
  • by mgv (198488) <Nospam.01.slash2 ... g ['man' in gap]> on Tuesday December 28, 2004 @07:33PM (#11204095) Homepage Journal
    It's hard to pinpoint anything specific that is broken with unix as a whole.

    But there are lots of subsystems that aren't exactly perfect.

    Examples that come to mind:
    *File permissions only go to user/group/others rather than individuals, and poor record locking on network shares. Lack of automounting as an intrinsic feature of the operating system.

    *Windowing subsystems that network, but can't handle 3D networked graphics effectively, or support the more advanced hardware features of graphics chips locally particularly well.

    *Software packaging systems that develop conflicts. (Probably more of a linux problem, actually)

    - I am aware that all of these have workarounds or are being worked on -

    The kernels of most unixes (and, for that matter, Linux) are fairly well tuned for a variety of things, although they are subject to a number of internal revisions to try and do better multi-tasking & multiple-processor scaling, for example.

    Where these systems will probably fail the most is when the underlying hardware changes a lot - for example, handling larger memory spaces and file systems, or perhaps even moving to whole new processor designs (e.g., code-morphing CPUs such as Transmeta's, or asynchronous CPUs). These designs are quite radically different, and we have developed so far down a specific cpu/memory/hard-drive model that it's quite difficult to look at major changes, as they aren't as easily supported by the operating systems.

    Just my 2c, and from a fairly casual observer status - it would be interesting to hear what the main developers think on all of this.

    Michael
  • Simple... (Score:4, Funny)

    by andreMA (643885) on Tuesday December 28, 2004 @07:34PM (#11204114)
    PROBLEM: SCO exists
    SOLUTION: 2 MT airburst over Lindon, UT

    Oh, with UNIX, not for UNIX. Never mind.

    As you were.

  • by RomSteady (533144) on Tuesday December 28, 2004 @07:40PM (#11204168) Homepage Journal
    UNIX and the various shells were designed for when every keystroke counted due to memory constraints and the painful experience of working at a teletype.

    As a result, we've got upper- and lower-case flags doing completely different operations (-r and -R, for example), and we've got case-sensitive filenames which just make it so easy to tell the difference between "Index," "iNdex," "inDex," "indEx" and "indeX."

    UNIX was designed when plain text was king and the only nudies you ever saw were ASCII art.

    As a result, there's no way from looking at the filename to tell what program the file should be processed with.

    UNIX was designed under the guidelines of "do one thing well, do it quickly and get out of memory."

    Those design decisions permeate UNIX and the *NIX community even today. When I read the newsgroups, I still see tips on how to do things that involve piping a file through 17 filters to do something that can be done on Windows with four mouse clicks.

    So how would I fix these problems?

    1) Make filenames and command flags case-insensitive. The few cycles you spend doing case comparisons will quickly pale in comparison to the time savings you experience in tech support situations where a touch typist accidentally hits space too soon and types "emacS."

    2) Several files that do not have extensions usually have some information about their default parser in line #1. Either parse it, or start using file extensions in *NIX.

    3) Start making UI's that only initially expose the 20% of the UI that 80% of people will use. There's no reason for a CD-burning package to have a checkbox on the main screen about verifying post-gap length for 99% of the people in the world.

    Anyway, that's my opinion.
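    Point 2 above already half-exists as the shebang convention: the first line of an extensionless script names its interpreter, and the kernel itself honors it when the file is executed. A quick demonstration (the script name is throwaway):

```shell
# The first line ("#!...") names the parser; no file extension needed.
tmp="./shebang_demo_$$"
printf '#!/bin/sh\necho hello from a shebang script\n' > "$tmp"
chmod +x "$tmp"
head -1 "$tmp"        # -> #!/bin/sh
"$tmp"                # the kernel reads line 1 and runs /bin/sh for us
rm -f "$tmp"
```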
    • No! (Score:3, Insightful)

      by Inoshiro (71693)
      I do believe there are a few problems with your assertions:

      "1) Make filenames and command flags case-insensitive. The few cycles you spend doing case comparisons will quickly pale in comparison to the time savings you experience in tech support situations where a touch typist accidentally hits space too soon and types "emacS.""

      That problem is so much easier to fix than changing 20+ years of UNIX design.

      UNIX is case sensitive for a reason. Do you think you can just go through all the source files, replace
  • by Qbertino (265505) on Tuesday December 28, 2004 @07:41PM (#11204186)
    -the almighty root (single largest security risk)

    -ancient directory organization which doesn't take modern computer usage into account (more powerful single workstations)

    -bad historically grown naming ("home", "usr", "var", etc.) and an inconsistently applied File System Hierarchy Standard

    -crappy vendor support

    -unix printing still sucks big time (see 'vendor support')

    -graphics system and font handling

    -inconsistent standards of configuration

    -historically grown elitist utility naming (large annoyance)

    That's all I can come up with right now. Note that some of these are dealt with by certain unix variants. Printing and pretty much everything else is a breeze on OS X, for instance. Configuration and installation with Debian Linux is very smooth and goes to great lengths to keep those countless OSS utilities manageable. And Solaris 10 seems to have one or another card up its sleeve to deal with the security risks that result from the almighty root.

    Coming to think of it: Can't we just have an OS with OS X's ease of use, Debian's installation system, Solaris 10's low-level features and Windows' vendor support? We'd all be set and 100% satisfied.
  • Non Free. (Score:3, Insightful)

    by twitter (104583) on Tuesday December 28, 2004 @07:43PM (#11204220) Homepage Journal
    The most broken part of "Unix" is that it's non free. Everyone has their own way of fixing things and does not share any of it, so we have the current fragmented landscape of Sun, HP, AIX, OSX, etc. The obvious solution is to use free software which ports the best features of each and costs nothing but time and thought to implement. What could be easier than that? The details are not as important as the root cause and the solution.

  • by elronxenu (117773) on Tuesday December 28, 2004 @07:50PM (#11204308) Homepage
    To fix unix, it is important to start from the bottom up. Ignoring kernel internals, which are the choice of the kernel developer, the layer we need to fix first is the system call interface.

    For example:

    • Rename creat to create, as it should always have been
    • 64-bit time_t
    • localtime to return the year number, not year-1900
    • Decide whether we like curses or termcap, and get rid of the other one
    • Add inode-level operations, i.e. open an inode, rather than a path. Add atomic filesystem operations. Rename an inode. Delete an inode. Path-level operations permit race conditions whereby an attacker switches the filesystem around in between a privileged process examining the filesystem and making a change to the filesystem.
    • And many others ...
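    The race behind that last point is the classic time-of-check-to-time-of-use (TOCTOU) bug, and it can be sketched in shell; inode-level (descriptor-based) operations close the window because you check and use the same open file:

```shell
# Check-then-use on a *path* is racy: the name can be re-pointed between
# the check and the use. Here the "attacker" swaps in a symlink.
dir=$(mktemp -d)
echo "harmless" > "$dir/real"
ln -s /etc/passwd "$dir/target"            # attacker re-points the name
# Privileged code that checks the path first:
if [ -L "$dir/target" ]; then
    echo "check caught the symlink this time"
fi
# ...but between a successful check and the later open, the attacker can
# run the ln again. The fix: open once, then fstat/use the descriptor.
```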
  • Two things: (Score:4, Interesting)

    by Saint Aardvark (159009) * on Tuesday December 28, 2004 @07:51PM (#11204323) Homepage Journal
    Coarse permissions for files, and extremely coarse permissions for ports.

    Files: this is one thing Windows has right. There should be all sorts of capabilities built in to Unix: append-only files, append-only by user, unchangeable permissions, and so on. FreeBSD's flags are the way to go, but like I said: they should be built in to Unix, not an extra add-on.

    And a subset of that is coarse permissions for ports. Why in God's name do we still enforce root-only opening for ports below 1024? Port permissions should be built in to Unix, not an optional add-on. Something like "chgrp www /dev/tcp/80; chmod 600 /dev/tcp/80", rather than having to open as root and then drop privileges (hope you did that right!), would be amazing.
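    Something close to this wish exists on Linux today via file capabilities, though it is per-binary rather than per-port (the setcap line is shown as a comment because it needs root, and the httpd path is hypothetical):

```shell
# Grant one binary the right to bind ports below 1024 without root:
#   setcap 'cap_net_bind_service=+ep' /usr/local/bin/httpd
# Afterwards an unprivileged user can start httpd on port 80 directly,
# with no open-as-root-then-drop-privileges dance.
echo "see capabilities(7) and setcap(8)"
```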

  • by realdpk (116490) on Tuesday December 28, 2004 @07:57PM (#11204407) Homepage Journal
    Personally, I don't think there's really anything "wrong" with unix.

    Now, if you asked me "What is wrong with Linux?" I would have several answers. Same with "What is wrong with FreeBSD?" so you don't think I'm just a BSD bigot. But "unix"? It's hard to pin anything on the generic term "unix".
  • COM and the Shell (Score:4, Interesting)

    by shird (566377) on Tuesday December 28, 2004 @08:20PM (#11204644) Homepage Journal
    One of the nicest features of Windows is the standardised use of COM throughout. Everything about the shell is done through COM, which allows programs to work in a consistent and predictable manner. Cut'n'paste, shell extensions, drag and drop, namespace handlers, OLE embedding, scripting, automation etc. are all possible and well supported by most programs because of this use of COM.

    Unix may have some form of COM, but it is far from the kind of support that is available under Windows. It is the reason clipboard and document embedding is such a pain under Unix, and why the shell 'feels' clunky and basic operations such as drag and drop between applications isn't possible.

    So bring in a standard COM system and standardise the shell interfaces, and you will have KDE and GNOME applications that can integrate with the shell without having to have separate programs.
  • by Spy der Mann (805235) <spydermann.slashdot@gmai l . c om> on Tuesday December 28, 2004 @08:35PM (#11204795) Homepage Journal
    I'm not speaking of unix specifically, but of Linux. But I hope this enlightens anyone.

    Linux isn't friendly for:

    * Installing apps
    * Guiding the Joe user to a friendly painless installation of the OS itself
    * customizing
    * configuring
    in other words... everything.

    As many Linux fans as there are here, the only *great* things that Linux has are its security and stability. Everything else is, more or less, a mess. The apps, they're great! But only AFTER you manage to install and configure them.

    And on the other side, we have a wonderful MS Windows in which everything (BUT security and stability) is great, but security and stability is a mess. I admit it, Linux infrastructure is very well thought... but the rest? The problem is that Linux (or unix for that matter) was made "by nerds, for nerds". Windows was made "by executives, for Joe users". What we need is an OS made "by nerds, for Joe users".

    And that means not rejecting as "blasphemy" everything that MS Windows has. There are many good points in Windows, but (I'm generalizing, but this is my impression) linuxers are so busy defending their "way of life" against the competition that they can't improve it. They have formed themselves a mindset saying "Linux is perfect. We don't need no stinking windows thingies. Anyone who says so has been too much in contact with the evil windows, and must be deprogrammed". If someone dares say "but..." he's just rejected as some microsoft borg slave.

    And they've repeated this lie so many times that they've ended up believing it. They make this whole bunch of "user-friendliness" *patches* for Linux, so they can believe that it's good the way it is.

    Well, guess what. It isn't. Give me a Linux with the user-friendliness of Windows (and I DON'T mean the GUI - I mean the versatility, plug-n-play, the ability to easily install new apps without the ./configure-make-make install and recompilation pain, etc. etc. etc.).

    What I mean is:
    Linux (as a whole) is a good set of implementations. What it needs is a good set of standards, and ONLY THEN, develop good implementations of these.

    Want an example? We have KDE, QT (is that spelled right?), and I forgot if there was any other.
    So there are apps compatible with QT that can't run on KDE, and vice versa.

    Maybe you guys haven't still seen the big picture, but what I see of Linux development is more or less this:

    a) Some guy makes a good thingy for Linux.
    b) Many guys follow him
    c) Another guy makes another good thingy that does the same as the first one, but it's incompatible.
    d) Many guys follow him.
    e) GOTO a)

    From a religious perspective, compare with Roman Catholicism and protestantism. Roman Catholicism would be Windows (one pope called Bill Gates who dictates what is true and what isn't) and Linux would be the protestant denominations incompatible with each other. Some survive, some die... etc.

    Sociologically, protestant denominations are very similar to Linux implementations. They share one very limited creed (the Bible / the Linux kernel), but how that applies in their lives (the implementations) varies. SO MUCH that they can't be united. (I remember the SCUMMVM team - or was it another? - splitting because a guy liked one editor and the other guy liked another editor. And they argued so much about this that the whole dev team dissolved.)

    Linux needs a "pope". Or a government council (like the W3C) which says which way apps will interact with each other, with the kernel, and with the hardware.

    Let me rephrase it: Linux needs STANDARDS. Linux needs something like "a W3C": a governing body which publishes a standard, uniform API for doing things, like what the W3C did with the DOM (and so we can prevent things like the "browser wars" happening in Linux).

    One of the reasons WinXP flourished is that it had a standard way of doing things. Make them compatible with the API (even if its security is as solid as a gruyere cheese), and they r
  • My list. (Score:5, Insightful)

    by Yaztromo (655250) <yaztromoNO@SPAMmac.com> on Tuesday December 28, 2004 @09:22PM (#11205167) Homepage Journal

    Here are the general problems I have with Unix and Unix-like operating systems:

    • Threading models and scheduling. A few Unicies have decent thread models, but others have abysmal thread models and scheduling. Because of this, far too many Unix applications wind up eschewing threads for simply running multiple processes, which isn't the same thing. Thread priority needs to be global, and the thread should be the most primitive execution unit upon which all other execution units are built. No more "my thread priority is set to the max, but I get very few slices because my process priority is set low". My OS/2 machine running on a P3-450 can still out-thread many multi-gigahertz Unix systems, and that's just sad. Too many Unix kernels have had threads bolted on as an after-thought, and it shows.

      (Note that this isn't to say that every Unix-style system has a bad threading model -- some of them are pretty good, and others are getting better. But it's currently difficult to write decent cross-platform multithreaded Unix code when some Unicies you know in advance have really crappy threading subsystems).

    • Clipboard support in GUI subsystems. Come on, it's 2004 already. Unified clipboards have been around for more than 20 years now, and yet many Unicies still can't get this right. Cutting and pasting between applications shouldn't be a major PITA. Users shouldn't have to worry about which widget library an application was compiled against to figure out if they'll be able to paste to that application from another. Things are getting better, but really, this should have been fixed years ago, and shouldn't be taking so long.
    • GUI application font support. Again, a rare few get this right, but most of them have this big conglomeration of font types, and no unified font access system. Windows 3.0 had a better font subsystem than what some Unicies have.
    • Printing. Again, some Unicies have done a good job, but far too many still don't have a good unified printing subsystem. Others here have done a great job of pointing out the problems with Unix printing in general, so I won't rehash them all here.
    • Desktop access APIs. Even with KDE and Gnome, there still isn't an API to call to do something as simple as create an application icon on the desktop or in the application menus which can be used to launch an application. Everyone winds up having to roll-their-own, if they bother to do so at all. Again, not all Unix GUI environments suffer from this, but the majority do. As I developer, I shouldn't have to care what environment a user is running if I want to do something like put an icon on their desktop as a part of an installation/configuration routine -- there should be an API I can call that says "create an icon with the following properties", and have it worry about WM/environment specifics.
    • USB driver development and device access. Again, in many Unicies this is fundamentally flawed and can be very difficult for users to set up and configure. And it differs drastically from Unix to Unix. Where we have pretty standard systems for accessing RS-232 serial ports and parallel ports, USB access is completely non-standardized across Unicies. Just witness the PITA it is to set up the newly standardized javax.usb API on Linux, and the kernel work-arounds that had to be implemented to allow APIs like this to unload aggressive modules that grab interface focus immediately just because they were included with the distro. There isn't much excuse for this IMO.
    • Unicode support. Again, hit or miss.

    Okay -- now don't get me wrong -- there are a lot of things to like about Unix and Unix-like environments. But those are the items I personally have problems with in the general case (and again, not all Unicies exhibit all of these issues. In particular, Mac OS X doesn't suffer from any of them, and is my current OS of choice for doing development and as my personal workstation desktop environment).

    Yaz.
