Unix Operating Systems Software

What's Wrong with Unix? 1318

aaron240 asks: "When Google published the GLAT (Google Labs Aptitude Test) the Unix question was intriguing. They asked an open-ended question about what is wrong with Unix and how you might fix it. Rob Pike touched on the question in his Slashdot interview from October. What insightful answers did the rest of Slashdot give when they applied to work at Google? To repeat the actual question, 'What's broken with Unix? How would you fix it?'"
This discussion has been archived. No new comments can be posted.

  • by Devil's BSD ( 562630 ) on Tuesday December 28, 2004 @07:20PM (#11203977) Homepage
    no, those are eunuchs
  • by andrewzx1 ( 832134 ) on Tuesday December 28, 2004 @07:23PM (#11203996) Homepage Journal
    If you read the motivations behind writing Plan9 (documented on slashdot previously), there are many descriptions of what the authors thought was wrong with UNIX. And the guys who wrote Plan9 are the same guys who wrote the better part of UNIX. And for you youngsters, UNIX is not LINUX. - AndrewZ
  • GNU Stow (Score:2, Informative)

    by Anonymous Coward on Tuesday December 28, 2004 @07:25PM (#11204013)
    That is why distributions have package management systems for GNU/Linux. A single command is sufficient to install/remove a package.

    However, I understand your problem when it comes to manual installation. There is a project, GNU Stow [gnu.org], to handle what you are talking about.
  • by Anonymous Coward on Tuesday December 28, 2004 @07:26PM (#11204024)
    Speak for yourself - for years, I've installed packages in "/usr/local/packages".
    Package "foo", version "N" goes in "/usr/local/packages/foo-N".
    The current version of "foo" has a symlink to it from "/usr/local/packages/foo".
    "/usr/local/bin" contains symlinks to the appropriate files in "/usr/local/packages/*/bin"
    Upgrades (and downgrades) are trivial.
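
    A minimal sketch of that scheme, with a made-up package "foo" at version 1.2 (GNU ln's -n flag is used so the "current" symlink gets replaced rather than followed):

    ./configure --prefix=/usr/local/packages/foo-1.2 && make && sudo make install
    sudo ln -sfn /usr/local/packages/foo-1.2 /usr/local/packages/foo   # "current" version
    sudo ln -sf /usr/local/packages/foo/bin/foo /usr/local/bin/foo     # expose it on PATH
    # Downgrading is just repointing the "current" symlink:
    # sudo ln -sfn /usr/local/packages/foo-1.1 /usr/local/packages/foo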

  • Unix is too powerful (Score:3, Informative)

    by Anonymous Coward on Tuesday December 28, 2004 @07:27PM (#11204029)
    I know it sounds silly, but it's like asking a consumer to operate a Bradley armoured fighting vehicle: it wasn't built for consumer use, it's got hundreds of knobs and options and configurations, and if you don't get it set up right the first time it is a tremendous headache to fix. Consumers want a gas pedal and a brake, windshield wipers are fine, but when you put on a .50cal machine gun mount, even if it's "turned off", it scares people away.

    It's a canonical example of something that tries to be everything to everybody, but ends up being too hard for anyone to use.
  • by mgv ( 198488 ) <Nospam.01.slash2dotNO@SPAMveltman.org> on Tuesday December 28, 2004 @07:33PM (#11204095) Homepage Journal
    It's hard to pinpoint anything specific that is broken with Unix as a whole.

    But there are lots of subsystems that aren't exactly perfect.

    Examples that come to mind:
    *File permissions only go to user/group/others rather than to individual users; record locking on network shares is poor; and automounting isn't an intrinsic feature of the operating system.

    *Windowing subsystems that network, but can't handle 3D networked graphics effectively, or support the more advanced hardware features of graphics chips locally particularly well.

    *Software packaging systems that develop conflicts. (Probably more of a linux problem, actually)

    - I am aware that all of these have workarounds or are being worked on -

    The kernels of most Unixes (and, for that matter, Linux) are fairly well tuned to a variety of things, although they are subject to a number of internal revisions to try to do better at multitasking and multiple-processor scaling, for example.

    Where these systems will probably fail the most is when the underlying hardware changes a lot - for example, handling larger memory spaces and file systems, or perhaps even moving to whole new processors (e.g., code-morphing CPUs such as Transmeta's, or asynchronous CPUs). These designs are quite radically different, and we have developed so far down a specific CPU/memory/hard-drive model that it's quite difficult to look at major changes, as they aren't as easily supported by the operating systems.

    Just my 2c, from a fairly casual observer - it would be interesting to hear what the main developers think about all of this.

    Michael
  • by Anonymous Coward on Tuesday December 28, 2004 @07:35PM (#11204121)
    Let's make UNIX not suck [ximian.com] by Miguel de Icaza. Answers this exact question in quite a lot of detail! Try out the UNIX Haters handbook [mit.edu] [warning: big PDF] for a more humourous take on things!
  • by umrk ( 99195 ) on Tuesday December 28, 2004 @07:39PM (#11204164)
    ./configure --prefix=/usr/local/stow/foo-1.2
    make
    sudo make install
    cd /usr/local/stow && sudo stow foo-1.2
    Done.
  • Re:User Friendly (Score:4, Informative)

    by millahtime ( 710421 ) on Tuesday December 28, 2004 @07:40PM (#11204177) Homepage Journal
    It already exists and is called OS X
  • Re:mmap (Score:3, Informative)

    by pclminion ( 145572 ) on Tuesday December 28, 2004 @07:40PM (#11204178)
    Why can't you gracefully recover from a lost mmap? Set a handler for SIGSEGV. If you get a fault, check the faulting address. If the address is within an mmap'd region, longjmp() to a recovery point. This can be made quite clean.

    I don't think the solution is to start removing functionality. The solution is to use that functionality in the correct way. A program can receive a signal at any time. This is a cold, hard fact. If your program uses operating system features that could lead to exception conditions and signals, you should handle those signals appropriately.

  • by Krunch ( 704330 ) on Tuesday December 28, 2004 @07:42PM (#11204199) Homepage
    You might be interested in Stow [gnu.org]. The idea is to install each program in its own directory then create symlinks in /usr/(local/)?(bin|lib|share|whatever)/. When you want to remove a package, just remove its directory then remove broken symlinks.
  • Re:In a word... (Score:4, Informative)

    by Libor Vanek ( 248963 ) <libor.vanek@g[ ]l.com ['mai' in gap]> on Tuesday December 28, 2004 @07:44PM (#11204226) Homepage
    No PCL!!!! Reasons:

    - PS you can very easily convert to PDF - there's nothing like that for PCL!
    - there are tons of tools which enable "4 pages in 1", accounting, quotas, etc. etc. - none for PCL!
    - try to display a PCL file
    - WHICH PCL? PCL5? PCL3?...

    There is simply NO reason to give up PostScript - tell me one single argument (except a VERY slight speed-up) which would balance the lost flexibility and the need to rewrite all existing tools (CUPS, print drivers, etc.)
  • Re:In a word... (Score:5, Informative)

    by Tackhead ( 54550 ) on Tuesday December 28, 2004 @07:48PM (#11204274)
    > This sillyness of having to generate postscript so Ghostscript can generate PCL so you can print is just wrong - empty brained, someone forgot to wake up wrong.
    >
    >PCL is available on every major printer on the market today - it IS the standard. PostScript is a has-been. Dump it today.

    Huh? I think you've got that backwards.

    PCL requires that most of the "brains" exist on the "computer" side of the "computer/printer" connection: a PCL printer needs fewer "brains" than a Postscript printer because the processing is done before the data ever reaches it.

    Not to put too fine a point on it, but a PCL printer is to a Postscript printer what a Winmodem is to a hardware modem.

    For printers, the PCL tradeoff made a lot of sense when embedded CPUs were (extremely) limited in computational power compared with desktop CPUs. Rather than have your $1500 486-33 sitting idle as it dumps a pile of Postscript code to another $1000 68020 in the printer, you'd use the $1500 desktop CPU to turn the document into PCL that can be parsed by the $1.99 Z80 or whatever's in the $100 PCL printer.

    Now that your $25 disposable cell phone has a 200 MHz core, that tradeoff is no longer a requirement. Embedded systems smart enough to interpret and run Postscript code are no more (and no less) expensive than those capable only of PCL.

    Methinks you've got the PCL/Postscript design tradeoff backwards.

  • by nocomment ( 239368 ) on Tuesday December 28, 2004 @07:53PM (#11204340) Homepage Journal
    Well, to work properly it would also need the versioned filesystem of VMS. So if someone were to, say, overwrite a file with zeros, you'd just revert to the previous version that wasn't zeros. You see? If the file is deleted, the file is gone, but if someone changes the file to be useless, I could just revert it. Make sense? There's no way anyone with only write permission could destroy any part of the system permanently. It's just a one-command restore.
  • by ComputerSlicer23 ( 516509 ) on Tuesday December 28, 2004 @07:59PM (#11204434)
    Huh? Unless I'm misunderstanding my UNIX permission semantics (which is likely the case), if I can write to the file, I can destroy all of its contents. However, the file still exists.

    If I have write permission to the directory, then I can actually call "unlink" (UNIX system call which will delete a file).

    Lacking write permission to the directory, I can't delete a file (or create one). If I have permission to write to the file, I can destroy its contents, but I can't stop the file from existing.

    However, I believe in most modern UNIX filesystems you can set an "append only" attribute, which is handy for some things (log files): I want you to be able to write at the end of the file, but the prior contents must always remain.
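
    A rough illustration of those semantics (paths are made up; the append-only bit shown is the Linux chattr +a attribute and needs root):

    mkdir /tmp/shared && chmod 755 /tmp/shared                    # directory: only the owner may add or remove names
    touch /tmp/shared/app.log && chmod 666 /tmp/shared/app.log   # file: anyone may write its contents
    # As some other user: the contents can be destroyed, but the file can't be unlinked
    : > /tmp/shared/app.log     # succeeds - write permission on the file
    rm /tmp/shared/app.log      # fails - unlink needs write permission on the directory
    # Append-only, handy for logs:
    sudo chattr +a /tmp/shared/app.log
    echo "new entry" >> /tmp/shared/app.log   # appending still works
    : > /tmp/shared/app.log                   # truncating now fails, even for the owner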

    Kirby

  • by acd294 ( 685183 ) on Tuesday December 28, 2004 @08:07PM (#11204510) Homepage
    "/usr/bin -> /System/Links/Executables", and "/sbin -> /System/Links/Executables" (this example shows that arbitrary differentiations between files of the same category were also removed).

    Actually, there is a difference between /bin (or /usr/bin) and /sbin. /sbin stands for stand-alone binaries (as opposed to /bin, which is just binaries). In /sbin you will find statically linked executables that you may want to use if your system is hosed (i.e. the slice with the dynamic libraries goes down and you want to fsck it). They don't absolutely need to be in separate directories, but there is definitely a difference.
  • Re:Mac OS X... (Score:3, Informative)

    by ip_fired ( 730445 ) on Tuesday December 28, 2004 @08:08PM (#11204524) Homepage
    Every application comes in a bundle. For example, let's say you create an app that is called foo.

    You would create the following directory structure:
    foo.app
    --Contents
    ----MacOS
    ------ foo (the actual executable)

    Inside the foo.app folder, you can put all of the libraries, data files, help system, etc that your program needs.

    When you are browsing in the Finder, the .app directory isn't treated as a directory, but rather as the application itself. When you double click on it (or use the open command in the shell) it will start the application.

    One of the neatest things is that you can do this with a Java program, and the OS will launch it properly. I wish it were easier to launch jars in Windows like this.
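
    A bare-bones version of that layout can even be built by hand; here's a sketch (real applications also carry an Info.plist and resources inside Contents/):

    mkdir -p foo.app/Contents/MacOS
    cp foo foo.app/Contents/MacOS/foo   # the executable, named after the bundle
    open foo.app                        # Finder and `open` treat the whole directory as one application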
  • Like elektra? (Score:4, Informative)

    by haeger ( 85819 ) on Tuesday December 28, 2004 @08:12PM (#11204558)

    Ideally all config files would follow the same format and syntax (god no, please don't say XML).
    Ideally there would be a uniform way for programs to retrieve configuration information from a centralized location.
    Ideally local users and machines would be able to merge their prefs and config with the master to override certain prefs.
    Ideally the hierarchy of administrators would be able to prevent entities under them from overriding certain configuration options.
    Ideally all of that could be done with plain text files which are automatically checked into a version control repository so you can roll back any change in a jiffy.


    There was a project on SourceForge that addresses some of the points you raise. Originally it was called "Linux-registry" I believe; now it's called Elektra [sourceforge.net].
    I don't know how far they've come or much about the project, but it looks like something you'd want to have a look at.

    .haeger

  • by Slack3r78 ( 596506 ) on Tuesday December 28, 2004 @08:14PM (#11204585) Homepage
    It doesn't even have to be an installer. Ever used OS X? To install software on OS X, you simply drag a .app bundle into your Applications directory. To uninstall, you drag it to the trash.

    How is this not better than the current Unix way of doing things?
  • Re:Better Compiler (Score:2, Informative)

    by i am fishhead ( 580982 ) on Tuesday December 28, 2004 @08:26PM (#11204708) Homepage
    AFAIK, gcc 4.0 will include support for this. See the -fmudflap option in the gcc manual [gnu.org].
  • by Tyler Eaves ( 344284 ) on Tuesday December 28, 2004 @08:26PM (#11204717)
    The best approach I've seen to this is Mac OS X's "App bundles". Basically these are directories with a name that ends in .app, with a certain internal layout. To the user in the file manager, it looks and acts like a file. If you click on it it launches the app. Easy drag-and-drop installation/removal, and the program can include as many support files as it needs, without even NEEDING an installer.
  • by Anonymous Coward on Tuesday December 28, 2004 @08:28PM (#11204736)
    It could be an installation process more like Mac OS X's. In OS X there are no installers; rather, applications come in a folder with a .app extension. Inside the folder are all the executables and resources needed to execute the program. To install the software, one simply needs to copy (drag and drop) the folder to whatever location you want. Note that to the user, .app folders are presented as a single object that looks like an executable. That is, when such a folder is double-clicked, the application is executed and it automatically uses the resources corresponding to the locale you have set up for your computer.

    In this way, average users aren't even aware that an application is a folder and consists of several files. All they see is one object, which to them is *the* application, which they can put wherever they want, and double-click from wherever they want, and it will still run.

    Of course, if you wanna mess with all the little files inside an application, you can always use the terminal.
  • Re:Here's a start: (Score:1, Informative)

    by Anonymous Coward on Tuesday December 28, 2004 @08:35PM (#11204785)
    It was NOT written primarily by "Unix geeks" -- it was written primarily by Lisp Machine geeks.
  • Re:Like elektra? (Score:4, Informative)

    by Technonotice_Dom ( 686940 ) on Tuesday December 28, 2004 @08:49PM (#11204903)
    Yep, there was a mention on LWN.net recently when they "Elektrafied" X.org. It uses the filesystem for config storage, depends on only a couple of libraries (i.e. not a whole load of XML stuff) and is, in essence, very simple. With revision control systems, you could roll back changes easily. From memory, it creates a file for each setting and stores the value for the setting inside it, using directories for the config layout.
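
    Something like the following layout, in other words (the paths and key names here are invented for illustration, not Elektra's actual tree):

    mkdir -p ~/.registry/editor
    echo "monospace 10" > ~/.registry/editor/font   # one file per key...
    echo "4" > ~/.registry/editor/tabwidth          # ...directories give the hierarchy
    cat ~/.registry/editor/font                     # reading a setting is just reading a file
    # and the whole tree can be dropped into RCS/CVS for easy rollbacks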
  • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Tuesday December 28, 2004 @08:59PM (#11204988) Homepage
    OS X also has a package manager, which, IMHO, is much more trustworthy than an executable installer. My only real complaint is that so many Mac OS X packages insist on being installed on the main drive. That makes me sad.
  • by donatzsky ( 91033 ) on Tuesday December 28, 2004 @09:05PM (#11205044) Homepage
  • by MarkByers ( 770551 ) on Tuesday December 28, 2004 @09:25PM (#11205183) Homepage Journal

    even with a package manager, you can't even determine how big a given package is! (if you know how to with Portage, I'd like to know)

    equery size package

    equery is part of gentoolkit

  • Re:OS X (Score:1, Informative)

    by Anonymous Coward on Tuesday December 28, 2004 @09:28PM (#11205201)
    Apple ships a FILE SYSTEM (HFS+) which cannot distinguish files named 'Foo' and 'foo'. The OPERATING SYSTEM is perfectly capable of doing so on file systems which support case sensitivity.

    The reason for defaulting to case insensitive HFS+ is because case-insensitive but case-preserving is fundamentally more friendly to non-geeks. To somebody who doesn't know in their bones that capital letters are different ASCII codes from lowercase, there is no obvious reason why the computer would think that files named 'joe's taxes' and 'Joe's Taxes' are different entities.

    If you desire, you can create UFS file systems which (like UFS anywhere) are case-sensitive. In 10.3, it is also an option to create case-sensitive HFS+ filesystems. (You must use command line tools to do so in the client version of 10.3; it's only exposed in the GUI of the server version.)
  • by cicadia ( 231571 ) on Tuesday December 28, 2004 @09:39PM (#11205273)
    Except that writing an empty file is not the same thing as deleting the file. In one case, the file exists, can be opened and read (with appropriate permissions), and has size 0. In the other, the file no longer exists, cannot be opened for reading at all, and has no defined size (or any other metadata).

    Deleting a file in Unix simply means removing the file's entry from its containing directory. This is why deleting a file in Unix requires write permission on the parent directory, and has nothing to do with the permissions on the file itself.

  • by mrroach ( 164090 ) on Tuesday December 28, 2004 @09:50PM (#11205357)
    > The lack of ACLs is a major impediment to uptake
    > of Linux in the business community.

    This is not at all insightful. It is uninformed at best. POSIX ACLs exist on ext2/3, XFS, ReiserFS, and JFS. These ACLs are also completely supported by Samba (and have been for many years).
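
    For example, with the standard ACL tools (user, group, and file names are made up; the filesystem must be mounted with ACL support):

    setfacl -m u:alice:rw report.txt   # grant one specific user read/write
    setfacl -m g:audit:r report.txt    # and an extra group read-only
    getfacl report.txt                 # inspect the full ACL
    setfacl -x u:alice report.txt      # revoke alice's entry again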

    -Mark
  • Re:configuration (Score:3, Informative)

    by mrroach ( 164090 ) on Tuesday December 28, 2004 @10:01PM (#11205429)
    You have just (mostly) described gconf. While the current backend does use XML, there's no reason why it couldn't use some other setup (LDAP support has been partially implemented). The problem is that people have inaccurately painted it as a Windows registry clone, or fail to see that a billion little configuration languages and parsers for every application is less than ideal :-/

    -Mark
  • by Earlybird ( 56426 ) <slashdot&purefiction,net> on Tuesday December 28, 2004 @10:38PM (#11205654) Homepage
    • No decent scripting language? In Unix? What do you suggest? BASIC? JCL? Microsoft's batch language?

    Unix needs something [infoworld.com] like MSH [infoworld.com], I think.

    Of course, there are plenty of good scripting [python.org] languages [ruby-lang.org] for Unix. The question is whether we need some higher-level glue between scriptable components, and I think we do.

  • by SewersOfRivendell ( 646620 ) on Wednesday December 29, 2004 @01:15AM (#11206505)
    OS X also has a package manager

    No, it doesn't. The installer framework does record the installed files in package 'receipt' files, but there's no standard way to uninstall a package.

  • by Anonymous Coward on Wednesday December 29, 2004 @01:27AM (#11206576)
    You have no idea do you? The stone joy of Linux is the ability to do any damn thing you want

    Okay, I want to:

    1) embed different spreadsheets each with cascading spreadsheets into multiple cells of a main spreadsheet -- easy with MS Office, can't do it in Linsux OpenOffice.

    2) embed video clips into a slide presentation -- easy with MS Power Point, can't be done with any Linsux software

    3) print a complex document to a $40 printer -- easy for Windows, not for Linsux

    4) play the hottest videogames -- easy in Windows, can't be done for Linsux

    5) off load an arbitrary digital camera's photos to a computer by USB -- easy for nearly any camera and current Windows system, impossible for most Linsux systems

    6) add arbitrary PCI hardware, usb hardware, serial/joystick port hardware without worrying about drivers -- mostly automatic for Windows, a horrendous task for Linsux.

    7) add a local PC via cross-over patch cable to a laptop connected by WiFi to a NAT enabled wireless router -- easy with Windows network wizard, a much more complicated, table editing, config file changing task with Linsux.

    Those are just a few of the tasks this week off the top of my head. People that routinely use both Windows and Linux know what's easy and not so easy to do in each OS, so don't come off so fucking high and mighty because you'll just look incredibly stupid.

    My videos, DVDs, audio stuff all just comes up running from the main menu which accesses the file system for content. I have yet to see a windose (sic) desktop with anywhere close to the usability I've built into my window manager.

    Dufus, Windows has My Pictures and My Music directories already created out of the box for each new user. Comes standard with Windows Media Player that plays all DVD's and most audio formats by clicking a file name or automatically by simply inserting a CD/DVD. No "setting up" required.

  • Re:Here's a start: (Score:3, Informative)

    by spectecjr ( 31235 ) on Wednesday December 29, 2004 @03:00AM (#11206965) Homepage
    Call *me* when MSDN can put up some simple information on how to do a "hello world" program in Windows. God forbid you want to use multiple windows which are tabbed or something.

    Sure. In fact here's two.

    Generic Windows Hello World program (in C) [microsoft.com]

    Alternatively, how about this one... Hello World for Win32 [microsoft.com]

    There you go.

    Please take this person's notion that MSDN is worlds above what Linux or some other UNIX has with a grain of salt. Also, docs.sun.com and sunsolve.sun.com have always worked adequately for me.

    While you're at it, you might also want to question the person I'm responding to, who apparently can't search their way out of a paper bag.
  • by dtfinch ( 661405 ) * on Wednesday December 29, 2004 @03:31AM (#11207078) Journal
    For instance, I installed Flash7 for Firefox 1.0. Didn't work. I installed Sun's JAVA for Firefox 1.0. It didn't work either.

    I've installed both on several distributions. I have to tell the Flash installer where Mozilla is, and it handles the rest. For Java, I have to create a symbolic link to it in my Mozilla plugins directory. They both work fine, except that the sound and video get out of sync in Flash.
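
    Roughly, the Java step looks like this (the JRE path and version shown are only an example and vary by distribution):

    ln -s /usr/java/j2re1.4.2_06/plugin/i386/ns610-gcc32/libjavaplugin_oji.so \
          ~/.mozilla/plugins/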

    Small developers have to either open source or pay fees they cannot afford to obtain a "widget set", something that any other OS supplies for free, and defines a standard for.

    Except for Qt, most do not require you to pay a fee or open source your app.

    Linux has a pretty poor cache and swap system, combined with zero user level control over cache and swap. As a result, over time, the OS runs slower, and s l o w e r and s... l.... o..... w...... r....... until you restart,

    I saw this mentioned in the Red Hat bugzilla, affecting RH9 and some RHEL3 users. But they say it was fixed mostly in 2.4.24 and completely in 2.6.

    The GUI, in the user sense, is an afterthought. You have to go to the command line to configure and/or adjust and/or install many things.

    Yeah, but I have to use the Windows command line for a lot of things, or dig around in the registry. People who can't on Windows get help from those who can. The actual amount of command line work necessary in Linux varies between distributions. I can do a lot more from the Linux command line than from the Windows command line, so I'm more apt to use it.

    So mostly, people don't run Linux.

    But I'll always run it. Ubuntu is starting to look good. I'm running the unstable hoary hedgehog branch scheduled for release in about 4-5 months. Lots of nice things appear to be on the way.
  • by AndyElf ( 23331 ) on Wednesday December 29, 2004 @03:49AM (#11207152) Homepage

    I dunno, maybe you're just trolling (and a number of replies that follow would qualify you as a good troll), but I'd say that installing FreeBSD is not any more difficult than, say, Slackware or Debian. It is more challenging than your Mandrake or RH install, I think (have not had a chance in the last 3-4 years to try either).

    That said, with enough preparation and a chapter from the Handbook [freebsd.org] printed out and within reach, installing stock FreeBSD should not be a problem at all.

    The question you should, however, ask yourself is: Why do I want to try FreeBSD? If it is just because you've heard it's cool -- you may be much better off trying http://www.freesbie.org/ [freesbie.org] instead. It's a live FreeBSD system, sort of like Knoppix.

    If you want to give FreeBSD a spin because you want to understand UNIX-land better or have needs for the stability of the platform, then rough starts should not be anything to discourage you.

    In either case -- all the best and have fun!

  • Re:OS X (Score:2, Informative)

    by herwighenseler ( 841655 ) on Wednesday December 29, 2004 @06:16AM (#11207588) Homepage
    /Developer/Tools/CpMac and /Developer/Tools/MvMac do the job you want.

    Unintuitively, nonetheless.

  • by pcmanjon ( 735165 ) on Wednesday December 29, 2004 @06:54AM (#11207674)
    " Heh. For a really good time, try going into #debian on freenode and asking any question, no matter how esoteric. You're bound to get about three or four RTFMs, and one guy who will pmsg you with more helpful information. Note: I just switched to deb after several years of RPM-based distros, and am not a complete n00b here, so the attitude I encountered was offputting to say the least. Imagine sending your grandma in there for help - she'd probably smack you with her purse the next time she saw you!"

    No shit! I hate them mother fuckers. Maybe I would RTFM if the programs I need help with HAD A FUCKING MAN PAGE! Idiots...

    Try out channel #mepis for MEPIS Linux; I hang out in there and they are very nice, and even help out for other distros.
  • by lahi ( 316099 ) on Wednesday December 29, 2004 @10:24AM (#11208529)
    Permissions are associated with directory entries, and NOT inodes. A specific inode may have as many directory entries (hard links) as you want, and each of these can have different permissions.

    Wrong. Permissions and other file metadata relate to the inode, not the directory entry.

    I suppose the reason no one else has bothered to point this out already is that it is trivial. But perhaps it's better stated explicitly.
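
    A quick way to see this (scratch file names are arbitrary):

    touch original
    ln original alias      # a second directory entry for the same inode
    chmod 600 original
    ls -l original alias   # both names now show -rw------- and a link count of 2
    ls -i original alias   # and the same inode number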

    -Lasse
  • by Junta ( 36770 ) on Wednesday December 29, 2004 @10:43AM (#11208707)
    True serial console: already there for a lot of x86 servers, and besides, that wouldn't even be a Unix issue, it's a platform issue. IPMI 2.0 will allow a standard way to remotely access that serial console via network, but 20 years ago the machines didn't have that capability, and it has always been the case that Cyclades, MRV, Equinox, and the like make money off equipment that makes serial consoles net-accessible. Besides, most non-x86 systems today still have the same serial console support they had 20 years ago.

    Privileged ports are pretty much, as you say, a joke. I am particularly amused by IRC servers' dependence on ident responses as a way to validate a connecting user.

    The info format was meant to supersede man pages. There is a lot more flexibility, but I am torn. In day-to-day use I prefer man pages: simple, easy to search. However, man pages don't scale that well (man bash, for example), and sometimes the typical string search for what you want takes forever to iterate through. Arguably, more widespread HTML documentation coupled with text web browsers like links would give more flexible options in a more comfortable way.

    What exactly is your complaint about /usr/local? Anyway, it wasn't really intended to distinguish GNU from non-GNU; it was largely meant as a site/workgroup-wide repository for shared binaries/applications. My complaint about it used in this fashion is that it wasn't engineered well for multiple platforms (i.e. no standard requiring arch/platform-named directories to hold executables/libraries).

    /bin and /sbin are not boring/redundant with /usr/bin and /usr/sbin. While many, many systems nowadays use one filesystem to contain everything, some systems still have /usr as a separate partition, and having / contain the things needed to get the system to boot, mount filesystems, and do recovery tasks is good for those environments.

    As far as all the directories and where things go, there are pretty established standards; the problem being, of course, that not all groups know them well or bother to follow them, so they kind of get trashed. I personally think mostly self-contained application/library suite directories are nice (the ROX file manager supports this), i.e. the entirety of GTK would live in /usr/lib/GTK/, including documentation, default configs, libraries, and executables. Of course, the PATH variable would get badly mangled, or else you'd have a ludicrous fs structure (very simple apps needing their own directory entries - ls, rm, etc., if you were strict - even when they share a lot of documentation). It really is a hard thing to address.

    Most everything else I can't disagree with. Particularly the notion of detachable X11 sessions ala screen for terminals.
  • by kbmccarty ( 575443 ) <kmccarty&gmail,com> on Wednesday December 29, 2004 @11:14AM (#11209021) Journal

    Without a package manager, it's practically impossible to remove a program; even with a package manager, you can't even determine how big a given package is! (if you know how to with Portage, I'd like to know).

    If the program in question was originally installed with a package manager, why would you try to remove it without using the package manager? If you are installing from source, on the other hand, I think what you are looking for is GNU stow [gnu.org]. With stow, you install a program (say it's called foo) like this:

    • cd foo-0.0.1 && ./configure && make
    • sudo make prefix=/usr/local/stow/foo-0.0.1 install
    • cd /usr/local/stow && sudo stow foo-0.0.1

    All that the "stow foo" step does is create symlinks from the normal dirs in /usr/local into the foo subdirectory, so you don't need to fuss with $PATH, $LD_LIBRARY_PATH, etc. to run the program. Upgrading or uninstalling foo later becomes trivial because all you have to do is run "stow -D" on the subdirectory (to remove the symlinks), recursively delete it, and (if desired) repeat the above set of commands to install the new version of foo. And to find the installed size: du -sk /usr/local/stow/foo-0.0.1

    A better filesystem layout (perhaps the way MacOSX, GoboLinux or RoX does it) would make package managers obsolete.

    Not true: in addition to file layout (which is arguably the easiest job for a package manager to handle), they deal with dependencies, post-installation setup scripts, config file handling, etc.
