What High End Unix Features are Missing from Linux?

An anonymous reader asks: "Sun and other UNIX vendors are always claiming that Linux lacks features that their UNIX provides. I've seen many Slashdot readers claim the same thing. Can someone provide a list of these features and on what timeline they might be implemented in Linux?"
  • by Anonymous Coward on Monday March 03, 2003 @06:24PM (#5427478)
    It's true, NFS on Linux still sucks. You lose the server and the clients all freak out (even with soft mounts).
  • the price tag (Score:5, Interesting)

    by motorsabbath ( 243336 ) on Monday March 03, 2003 @06:29PM (#5427531) Homepage
    I use AIX, Linux and Solaris every day. The only thing Linux is missing is the enormous price tag. These stability concerns I just don't see; it seems pretty fscking solid to me.

    A better question would be "where is Linux kicking the crap out of Unix?". Now *there* would be a flame fest. Note that I'm a Unix fan, but Linux has surpassed it as a developer's workstation and basic desktop. From the standpoint of an ASIC developer, that is.

    JB
  • by rkt ( 9943 ) on Monday March 03, 2003 @06:31PM (#5427569) Homepage
    The only reason I would still go for Sun hardware instead of Intel with Linux on it is ease of maintenance, without having to invest a lot in getting non-standard third-party cards installed.

    This is not a high-end feature... but its absence is critical enough that many corporate organizations avoid Linux over it.

    The other thing I love about Sun is the ease of Jumpstart. I always have issues with Kickstart on Linux. RH8 doesn't even boot on Dell 1650s, let alone kickstart them. Sun puts a lot of effort into testing. I can't promise anything to my management without first testing it out on Linux... on Sun, however, I believe them when they say XYZ version of software runs on Sun hardware :)

    It's not a big deal... and since both hardware and software belong to Sun, some would claim I shouldn't even bring this issue up. But the fact is that these are two good reasons I don't enjoy Linux in my corporate network, even though I love and run Linux everywhere else possible.

    rkt
  • Re:how about... (Score:5, Interesting)

    by Junks Jerzey ( 54586 ) on Monday March 03, 2003 @06:32PM (#5427583)
    A large following of people who resist change?

    On the flip side, I'm not sure that using an operating system essentially designed to be a clone of UNIX, from a user's point of view, is the hallmark of a radical thinker.
  • by AchilleTalon ( 540925 ) on Monday March 03, 2003 @06:33PM (#5427586) Homepage
    Resource management is missing. If you are dedicating a machine per application, per department, you don't need this.

    However, if you manage a single machine with more than one application running, from more than one department, you may need to determine the minimum and/or maximum resources each application can use. If an application is almost idle, you may want it not to lock resources, but to let other applications use them according to a given priority pattern.

    Also, partitioning is not available, as far as I know.

  • LVM (Score:3, Interesting)

    I would love to see a standard Logical Volume Manager make it into Linux. I believe there are some kicking around, but I haven't seen anything getting standardized.

    LVMs, for the unaware, are disk managers that allow such things as filesystems spanning multiple physical devices, dynamic creation and destruction of filesystems, dynamic resizing of filesystems, and other such goodies. AIX's volume management rocks.
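    A rough sketch of the same ideas with the Linux LVM tools (all device, group, and volume names here are made up; ext2 resizing is an offline step):

        pvcreate /dev/sdb1 /dev/sdc1          # mark partitions as physical volumes
        vgcreate datavg /dev/sdb1 /dev/sdc1   # one volume group spanning both disks
        lvcreate -L 10G -n homelv datavg      # carve out a 10 GB logical volume
        mke2fs /dev/datavg/homelv             # put a filesystem on it
        lvextend -L +5G /dev/datavg/homelv    # grow the volume later...
        resize2fs /dev/datavg/homelv          # ...and the filesystem to match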

  • lots of stuff.... (Score:5, Interesting)

    by vvikram ( 260064 ) on Monday March 03, 2003 @06:34PM (#5427609)

    considering linux vs any general *nix based OS, i can think of quite a few places where linux is deficient right now:

    * scalability: linux needs to scale to hundreds of machines and scale well. the NUMA stuff has gotten into the mainstream 2.5.x kernel, so that should be a good step forward.
    * a kick-ass scheduler [yes, i know about ingo's o(1) patch] is quite important. i still think linux doesn't have the kind of scheduling solaris seems to have [especially under high loads], but i will be glad to be proved wrong here.
    * VM subsystem: lots and lots of work to be done here. it's been an academic favourite for a long time, and imho the linux VM sucks badly... lots of work is going into it, though.

    imho not many people who read slashdot know about the linux kernel and OS-specific strengths in depth - they tend to jump on the linux bandwagon just for the coolness. i think there are a LOT of issues other than the above where linux is not yet high-end. true high-end is "big iron", not the mysql+apache+php webserver projects for which linux seems to be a favourite.

    it's just that linux is still growing. it's a long way from mature, imho.

    vv
  • User mode linux? (Score:2, Interesting)

    by maxmg ( 555112 ) on Monday March 03, 2003 @06:36PM (#5427631)
    Isn't User-Mode Linux supposed to provide something similar to partitioning, i.e. running applications under multiple kernel instances?
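    For reference, a UML instance is just a kernel built with ARCH=um and run as an ordinary user process; a sketch of an invocation (the image name is made up):

        ./linux ubd0=root_fs.img mem=128M    # boot a guest kernel against a disk image

    Each guest gets its own kernel, so a crash in one doesn't touch the host or its siblings.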

  • Linux != High End (Score:5, Interesting)

    by dhall ( 1252 ) on Monday March 03, 2003 @06:39PM (#5427670)
    The high-end niche is marketed more by the hardware than the software. The technology of LPARs on the Regatta (high-end RS/6000) is nearly on par with mainframe technology. It's also at a price point of over a quarter of a million dollars (after discounts).

    Linux is not an Enterprise level Unix. That isn't its niche. It's an OS for low-mid range hardware.

    The argument for Unix versus Windows has been: Unix is expensive hardware with cheap (nearly free) software; Windows is the exact opposite, cheap and redundant hardware with expensive software licensing. Trying to license Microsoft SQL Server can be as onerous as negotiating an Oracle contract.

    Are there other things available in Enterprise Linux? Sure, it's called licensed software. Enterprise-level companies are extremely leery of deploying software unless it's licensed. They don't want to hear the word "free". "Free" in their minds often means there is no one to sue.

    Also with corporate enterprise, there is a sincere fear of employee empowerment. No company wants to be held hostage by its employees. With Linux, the administrator has full control over the operating system. Most companies have no way of watching the watchers to this level, especially with knowledgeable, disgruntled employees. It's not a sound argument, but it's one that is often tossed out there.

    Other, more obvious things include mature LVM (logical volume management) - being able to add and grow filesystems on the fly - and active, mature SAN access. The VMM has come a long way over the 2.x kernels, but still needs to play catch-up.

    You realize the ideal setup for an AIX 5.x server? You optimize the server (performance-wise) for ZERO percent paging space usage. There are certain tools that come with the operating system at the kernel level that you just won't find with Linux unless you're a kernel hacker... and companies don't have the luxury of hiring kernel hackers to administer their systems.
  • Re:I Got One... (Score:3, Interesting)

    by Anonymous Coward on Monday March 03, 2003 @06:40PM (#5427687)
    How can they be out of stock of a service subscription? Or software for that matter?
  • The big one... (Score:1, Interesting)

    by tepp ( 131345 ) on Monday March 03, 2003 @06:43PM (#5427715)
    Guaranteed Stability.

    To have the company spend hours ensuring that their servers will not crash after X hours, X years, X load...

    I love Linux. But I know what it's like to be on the bleeding edge of Linux, having to upgrade every other day, every day, sometimes even an hour later, because the release you just got is unstable. Or finding out months later that after two or three weeks of uptime, your system corrupts its memory.

    Now, that's due to the difference in the development cycles of Linux vs Unix. Linux doesn't have a dedicated QA cycle - one with the money, the equipment, the people, and the DESIRE to verify that kernel x.x.xx will run for a year without any issues.

    You may think, oh, it's not a big deal to reboot your machine every week or two weeks or a month. But in some cases where unix is used, that downtime is deadly.

    Two people I know work for the NRC - the Nuclear Regulatory Commission. They write safety software for nuclear reactors. It runs on various flavors of Unix... and probably could easily be ported to Linux. But I doubt it ever will be. If you're going to trust the safety of YOUR nuclear reactor, you want that vendor rep standing behind his product, guaranteeing the server won't crash in the middle of the night. You WANT 24/7 dedicated support for your box; you NEED every single patch to be stress-tested for a month or more before you install it.

    And that's where Linux will never be able to replace Solaris, etc. Linux will never have the dedicated money, equipment, people, and QA testing certification in place to guarantee that kernel x.x.xx will run for a year.

    It's not a bad thing. But there will always be a place for commercial unix distros in mission critical applications.
  • reset a bus (Score:2, Interesting)

    by nemeosis ( 259734 ) on Monday March 03, 2003 @06:43PM (#5427716)
    Working with an engineer who used SGI Irix, I learned about one really damn cool Unix feature.

    He was able to reset the bus so that he could troubleshoot a RAID device without power-cycling the entire computer. It was some kind of XIO bus architecture proprietary to SGI systems, I think. It was a server with hundreds of people connected to it, so it's not like he could power-cycle it any time he wanted.

    From what I hear, all server class Unix systems have this feature built into their hardware/software.

    Where's Linux? Oh yeah.. 99.9% of all Linux installations run on x86 hardware. Go figure.
  • Re:Here is my list (Score:2, Interesting)

    by Anonymous Coward on Monday March 03, 2003 @06:46PM (#5427759)
    High availability and stability are great features that Linux does not quite approach yet.

    Another problem is the lack of good documentation. Most of the stuff today ships with ample but bad documentation. Frequently you get man pages, info pages, and HTML, all of which are vague.

    Throw in the lack of standards and certifications and things go bad quickly. For example, NFS cannot handle nested mounts. The system is not POSIX compliant (though it is close). The main distributions do not run CDE. Only certain parts of Linux have passed any form of accreditation. That combination makes software development somewhat troublesome.

    What about the idea of a patch? In Linux, a patch means downloading the next version of the product. On other systems it does not. Specifically, a patch does not usually change the interface of the subsystem being patched.

    Patch a Linux kernel and you must recompile all your modules. Most other operating systems do not require this unless a major revision change occurs.

  • by ikewillis ( 586793 ) on Monday March 03, 2003 @06:50PM (#5427804) Homepage
    Linux is also missing kernel implementations of many POSIX 1003.1b (realtime) features, including all the asynchronous I/O functions and realtime signal queues.

    SGI provided a patch to add support for asynchronous I/O, using code borrowed largely from Irix; however, without any means of notifying a process when an asynchronous request has completed, asynchronous I/O is entirely worthless.

    There was a project to add support for realtime signal queues to Linux, but as far as I know it died before reaching completion.

    Another feature would be a non-executable user stack. This is present and enabled by default on Solaris for all sparcv9 binaries, and is a configurable option for 32-bit binaries as well.
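    For reference, the interface in question looks roughly like this: an aio_read() whose completion is delivered as a realtime signal. A minimal sketch (the filename is made up, error checks omitted; compile with -lrt):

        #include <aio.h>
        #include <fcntl.h>
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static volatile sig_atomic_t done = 0;

        static void on_done(int sig, siginfo_t *si, void *ctx)
        {
            (void)sig; (void)si; (void)ctx;   /* si->si_value carries our aiocb */
            done = 1;
        }

        int main(void)
        {
            static char buf[4096];
            struct aiocb cb;
            struct sigaction sa;

            memset(&sa, 0, sizeof sa);
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = on_done;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGRTMIN, &sa, NULL);

            memset(&cb, 0, sizeof cb);
            cb.aio_fildes = open("data.bin", O_RDONLY);
            cb.aio_buf = buf;
            cb.aio_nbytes = sizeof buf;
            cb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;  /* notify on completion */
            cb.aio_sigevent.sigev_signo = SIGRTMIN;
            cb.aio_sigevent.sigev_value.sival_ptr = &cb;

            aio_read(&cb);                /* queue the read; returns immediately */
            while (!done)
                pause();                  /* sleep until the completion signal */
            printf("read %ld bytes\n", (long)aio_return(&cb));
            return 0;
        }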
  • by sniggly ( 216454 ) on Monday March 03, 2003 @06:55PM (#5427861) Journal
    since kde 3.1 we've started to use konqueror's sftp kioslave. sweet!
  • by Anonymous Coward on Monday March 03, 2003 @07:10PM (#5428004)
    Open source does not have the testing automation that exists in the closed-source world. 'Nuff said. Forget all this "many eyes" shit. It's just that: shit.

    They are on top and charge big money for one reason: investment in test automation.

    Open source will NEVER get close to that.
  • by talon77 ( 410766 ) on Monday March 03, 2003 @07:12PM (#5428025) Homepage
    AD is still light-years behind NDS. Look to Novell if you want to see impressive directory services working with LDAP.
  • by arkane1234 ( 457605 ) on Monday March 03, 2003 @07:15PM (#5428046) Journal
    For that application, NFS is precisely the answer. Samba is great for normal file transfer if you're stuck with a Windows machine on the other side. But NFS from a Linux server... it really leaves a bit to be desired. I thought everyone who said NFS was good was a crack addict, until I became a Solaris administrator and saw NFS in the real world.

    Don't get me wrong - I'm a Linux administrator now; I administer Linux only. We do have NFS servers, but we monitor them, and our machines are fault-tolerant IBM xSeries systems. Err... well, fault-tolerant fan- and disk-wise.

  • by venom600 ( 527627 ) on Monday March 03, 2003 @07:16PM (#5428052) Homepage Journal

    I disagree. I think the remote management capabilities in Linux are just fine. I regularly use kickstart to re-install old boxes and to install brand new ones (including a bunch of Dell 1650s).

    As long as the machine has a serial port and BIOS support for console redirection, you don't even need it on a network to administer it remotely. I've got a whole slew of machines thousands of miles away that I administer over SSH when the network is available, and through a Cyclades serial concentrator when the network is dead... it works great. No third-party cards required. And if you've got remotely controllable power strips (which you should, if you're serious about remotely administering any number of servers), then your power needs are taken care of as well!

    Administering my hundreds of Linux boxes remotely is just as easy as administering my Solaris boxes - maybe easier... ever accidentally sent a break over a serial connection to a Solaris box?
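    For anyone setting this up, the Linux side is only a few lines of config; a sketch with example devices, speeds, and kernel paths:

        # /boot/grub/grub.conf - send both GRUB and the kernel to ttyS0
        serial --unit=0 --speed=9600
        terminal serial
        title Linux
            root (hd0,0)
            kernel /vmlinuz-2.4.20 ro root=/dev/sda2 console=ttyS0,9600n8

        # /etc/inittab - and a login getty on the same serial line
        S0:2345:respawn:/sbin/agetty -L 9600 ttyS0 vt100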

  • by Kevin Burtch ( 13372 ) on Monday March 03, 2003 @07:19PM (#5428084)
    That was a demo "file manager"-like app that SGI included with the original Indigo Elan 4000 machines. I'm not sure what crap they ran it on for the movie, though (must've been an Indigo 3000 without the Elan card), as the movie made it look really slow and choppy.

    There are similar projects out there for Linux/Unix/X; do a freshmeat search for "3d file manager" and you'll probably find several.
  • Re:Well of course (Score:5, Interesting)

    by ddilling ( 82850 ) on Monday March 03, 2003 @07:22PM (#5428114) Homepage

    Amen.

    I hate info. An unnecessary tool, poorly implemented (implemented in the 'practically unusable' sense -- for all I know, the code behind info could be excellent).

    Okay, now that I've drawn my line in the sand, what differentiates me from everyone else on /.? Why do I hate info?

    I pretty much already answered that above. It's an unnecessary tool. There exists no gap between "man page" and "html/pdf/tool of choice for 'real' documentation" that needs to be addressed.

    'man' is exactly what you want for immediate access to practically everything (although I wish man -k worked better...) when you're in the middle of completing some task. All you need to know is a couple of simple things, like how your pager works, and whether you're getting the bash or the libc man page when there's an overlap -- and even that could probably be addressed by adding symbolic names, so we could type 'man libc printf' instead of the arbitrary 'man 3 printf' to avoid the shell's printf. But anyway. Man pages are all arranged the same, and you can zip to what you need in moments; plus, in my experience the man page has what I need better than three-quarters of the time. All that, and I didn't have to use a strange tree-structured pager that poorly identifies links and doesn't behave like lynx or any other text-mode document navigation tool I'm familiar with.

    For any documentation need more heavyweight than that, I probably want to be looking at something like javadoc or the Python library reference, in my web browser. A web browser is very well suited to navigating hierarchical documentation structures (especially if they use their link tags well!). I have all the tools of mozilla (phoenix, galeon, konqueror, etc.) at my disposal to locate the information I need, and for a serious documentation session I would rather read for two hours in a browser with good (well, better) font support than in an xterm. And for the doc writer, there are lots of tools available (starting with LaTeX, which is totally free) to generate these docs not only as HTML, but as PostScript or PDFs for paper presentation as well.

    So for me, it's a one-two punch: I don't see a need in the space the tool addresses, and I find the tool itself unwieldy. I'd love to see better man pages; as far as I'm concerned, man has far from outlived its usefulness.

  • Re:Well of course (Score:3, Interesting)

    by RAMMS+EIN ( 578166 ) on Monday March 03, 2003 @07:28PM (#5428153) Homepage Journal
    ``Why can't Linux be more like BSD in that respect?''
    Two reasons. The first is that Linux has not traditionally been as serious as the BSDs. This has changed, and the quality of the documentation will likely improve.

    The other is that the FSF seems to prefer info over man. Indeed, info has some significant wins over man (namely hyperlinking). I think the community should consider either adding those features to man (which seems possible to me), or going to a standard format like HTML (which would be my preference, being an XHTML zealot).

    Another thing I wonder is whether a typical hobbyist Linux install has exceeded VMS in terms of documentation yet. ;-)

    ---
    To use 'to use' to mention 'to mention' is a mistaken use of 'to use', not to mention 'to mention'.
    -- Lenny Clapp, philosophy teacher at UCD
  • by RAMMS+EIN ( 578166 ) on Monday March 03, 2003 @07:37PM (#5428236) Homepage Journal
    I regard NFS as venerable but deprecated. Access rights based on uids? SMB is more secure than that! Besides, I want network filesystems to be distributed and replicating. Coda comes close to my wishes, but its experimental status is stressed so much that I have been afraid to try it. I like how LUFS lets you access other filesystems through FTP and SSH as if they were locally mounted.

    Obviously, your situation is completely different. You do things on a professional basis and correct mapping of uids can be arranged. I am but a mere hobbyist, but I do think NFS is flawed.

    ---
    "And don't tell me there isn't one bit of difference between null and space, because that's exactly how much difference there is."
    -- Larry Wall
  • by Anonymous Coward on Monday March 03, 2003 @07:38PM (#5428261)
    Obviously you've never used the Linux autofs automounter in an enterprise environment. And don't give me that "the amd automounter works" crap - it sucks almost as badly as Linux autofs.

    Autofs isn't as cool as Low-Latency kernel patches, so it will never get fixed.
  • by maynard ( 3337 ) on Monday March 03, 2003 @07:42PM (#5428302) Journal
    AdvFS: One feature I'd really like to see implemented would be the old Digital/Tru64 Advanced Filesystem (AdvFS), either as a commercial product ported over or as a free reimplementation. The ability to clone a filesystem volume and then append changes as deltas to the original is quite a nifty feature. Adding total versioning of all filesystem objects would be even better. A good logical volume manager would be nice too. It's coming along, though.

    Display PostScript: Whatever happened to L. Peter Deutsch's old Display Ghostscript X server extension? It seems like the last update to that was about three years back. Now that's a feature we would all love. DPS handles displaying fonts and complex shapes properly. We all know X isn't going to die any time soon, so a good Display Ghostscript server extension would be a godsend. For that matter, with all the funding being dumped into KDE and GNOME, why did we all forget about GNUstep? But I digress.

    devfs: Please, when are we finally going to transition away from static device nodes to devfs? Solaris had it right: dynamically name the device on detection, after its physical properties. This is really important, and it hasn't been implemented for anything more than testing.

    In-kernel framebuffer/DRM device drivers: The old GGI folks had it right. Physical devices like video cards should be initialized and managed in kernel space. Let the console and applications like an X server talk to the device through a device node and/or ioctl calls and be done with it. No more video crashes when changing display modes, and real user-space video security. Yes, there's framebuffer support in 2.4, but not for any decent, modern cards. DRM hooks within XFree86 4.x have come along nicely for GLX support, though.

    NFS: is STILL a mess! Christ, it's five years since everyone in Linux land finally accepted that Linux needs a major NFS rewrite, and we still have to run BSD or a commercial UNIX for a decent NFS server. What a clusterfuck.

    AFS support: OpenAFS is good - real good. But its licensing terms are unacceptable for inclusion in the main kernel tree. AFS is critical for enterprise-quality network filesystem support. Notwithstanding, I still thank IBM for the initial code release and the OpenAFS team for the quality work they've done porting the old IBM/Transarc codebase over to Linux.

    Journaled filesystems: are here, but they're still a bit shaky for heavy use. They're getting pretty damn good feature-wise, though. A year or two more of long uptimes in the real world and they'll be rock solid for the enterprise. Way to go!

    Raw I/O support: Primarily due to pushing from Oracle and IBM, this has come a long way. But it still needs to be banged on for a couple more years before enterprise folks will trust Linux for large-scale database deployments. We also need a ubiquitous 64-bit platform to deploy on. Alphas and Suns don't count, because not enough folks run Linux on those systems to shake out enough bugs that one would prefer Linux over DU or Solaris. I've seen Linux on an ES40 and it's not pretty. Which leads me to...

    Mainstream 64-bit hardware: This is not a Linux fault, but Intel's. When are they finally going to release a decent 64-bit platform suitable for the commodity market? Un-fucking-believable that over ten years after the release of the DEC Alpha we still don't have ubiquitous 64-bit computing. And these days RAM is so cheap we're actually running up against the physical memory bus limit, never mind the virtual memory advantages of 64-bit memory management. This is just stupid. I hope AMD eats Intel's lunch; they deserve it.

    I'm sure there's more... and JMO for what little that's worth.

    Cheers,
    --Maynard
  • by Fluffy the Cat ( 29157 ) on Monday March 03, 2003 @07:44PM (#5428326) Homepage
    This is inevitable with NFS - there's no provision for determining whether the server is down or whether it's just taking time to respond, and the same will occur whichever OS you're using as the server. Don't use soft mounts, as they'll cause unpredictable breakage; mounting the filesystems with the intr option instead allows you to kill individual processes that are blocked on the "dead" mount.
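    In other words (the server name and paths here are hypothetical):

        mount -t nfs -o hard,intr fileserver:/export/home /home

    A process hung on the mount can then be killed with a normal signal instead of blocking forever.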
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Monday March 03, 2003 @07:49PM (#5428391)
    Comment removed based on user account deletion
  • by JW Troll ( 607432 ) on Monday March 03, 2003 @07:50PM (#5428394) Homepage
    How about the fact that X is slow and guarantees high GUI latencies? Also, why not make the device namespace tolerably sensible? That's surely a good thing for desktop Linux. Then, after that's all sorted, how about hot-pluggable USB support that works (i.e., while Linux is running, have it detect devices and load drivers)?

    While we're on that tangent, I think it would be nice to have modular drivers that don't require a kernel recompile. I understand that loadable modules are available, but the kernel must be built to support the module's extensions, and thus still needs a recompile.
    Of course, to do that, you'd have to revamp the entire driver model and structure of device support.
    I'm still curious as to why there ought to be a separation of Server and Desktop Linux.
    How about this: Linux won't matter on 98% of desktops until the above actually happens, and maybe not even then. Every day it stays with its Unix roots is another day that it remains irrelevant to the average user.
  • Re:Well of course (Score:3, Interesting)

    by Carter Butts ( 245607 ) on Monday March 03, 2003 @08:08PM (#5428568)
    With no mod points, I must be content to add my name to the chorus... info is substantially bloated for my purposes, and the habit of deprecating man pages in its favour is a great frustration to me as well.


    IMHO man wasn't broken (at least, not for "quick access" documentation), and info is certainly not a fix in any event.


    -Carter

  • Re:Well of course (Score:5, Interesting)

    by Trogre ( 513942 ) on Monday March 03, 2003 @08:39PM (#5428845) Homepage
    Most man pages would be fine if they included just one more thing:

    EXAMPLES!

  • by Karn ( 172441 ) on Monday March 03, 2003 @09:00PM (#5429022)
    We've been using amd on Linux, Solaris, and Irix clients for a few years now, and it's quite solid.

    The amd config files are on our NIS server, so setting up machines to automount users' home directories is as easy as copying a standard amd.conf into /etc and starting up amd.

    Perhaps you could give an example of why you think it sucks.
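    For reference, the whole setup is a couple of tiny maps. In Linux autofs syntax - server name and paths made up, with "&" expanding to the lookup key - the equivalent would be:

        # /etc/auto.master
        /home   /etc/auto.home

        # /etc/auto.home
        *       -rw,hard,intr   nisserver:/export/home/&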
  • by anon mouse-cow-aard ( 443646 ) on Monday March 03, 2003 @09:11PM (#5429082) Journal
    Various proprietary UNIXes (e.g. Cray, NEC) had proper batch systems ten years ago. The basic hooks are a good thing, and could be put to many modern uses:

    Standardized user accounting, groups, and accounts. Grouping of processes by job-id (so that you cannot just daemonize your running job, change your process group, and avoid being killed at job end). The ability to assign different scheduling priorities and limits to jobs (not users or accounts). The limits should include the usual CPU time and memory, but also CPU count and things like residency time.

    Once you have hooks for the above, you will have decent support from various workload managers: NQS, Sun Grid Engine, Platform, PBS, IBM WLM, etc...
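    For contrast, a sketch of the per-process primitives Linux does offer today - note that the limit and priority attach to a process and its children, not to an accounted job, which is exactly the gap described above ("run-simulation" is a made-up command):

        /* wrap-and-limit.c: set limits, then exec the real workload */
        #include <stdio.h>
        #include <sys/resource.h>
        #include <unistd.h>

        int main(void)
        {
            /* One hour of CPU time, soft and hard limit alike;
               inherited across fork/exec by everything the job starts. */
            struct rlimit cpu = { 3600, 3600 };
            setrlimit(RLIMIT_CPU, &cpu);

            /* Drop the scheduling priority to nice 10. */
            setpriority(PRIO_PROCESS, 0, 10);

            execlp("run-simulation", "run-simulation", (char *)NULL);
            perror("execlp");    /* only reached if the exec failed */
            return 1;
        }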

  • Administrator Tools (Score:2, Interesting)

    by dentar ( 6540 ) on Monday March 03, 2003 @09:38PM (#5429313) Homepage Journal
    HP-UX, SCO, AIX, etc. all have what I would call fairly complete administrative tools that let an administrator do nearly everything. To date, the best I've seen is HP-UX's "sam" tool. There is very little I can't do with it.

    So, where does Linux fall short in that area? Someone once had a good idea for a configuration tool called "linuxconf", but the problem there was that it attempted to keep its own configuration database, separate from the actual state of the system. If someone modified a directory permission or a configuration file by hand (the manly way) and then ran linuxconf, linuxconf would change it back to what it was.

    We need a tool that aspires to be like linuxconf, but stateless, i.e. one that does not try to keep a separate database of configuration settings.

    Perhaps what is needed is a well-published interface specification for administration modules (one probably already exists, but I don't know about it) for a generalized administration tool. Then, when someone writes a utility or program that has a configuration file, they'd write a configuration specification to go along with it so that the administration tool knows how to make/edit the configuration file.

    Of course, sendmail, being one of the nastiest creations in existence, would be the test of whether this sort of thing can work.
  • by morgue-ann ( 453365 ) on Monday March 03, 2003 @09:45PM (#5429383)
    In my limited experience I have to agree with this. The two-CPU SPARC I share with other firmware developers is relatively stable running Metaware tools, CVS, emacs and not much else. Even with a bunch of us (7) building simultaneously, the machine doesn't breathe too hard.

    However, its neighbor with eight CPUs running ASIC simulations (monstrous CPU and memory (and therefore disk) usage) crashes regularly and interestingly. Actually, it never really crashes (panics/reboots); it just starts acting weird. If you stop the simulation it will usually behave a bit better, but it usually takes a reboot to restore normality.

    Sun blames the simulator and the simulations are too customized to blame the tool vendor, so we get to figure it out ourselves.

    Now that the last chip is mostly done and the machine isn't taxed as hard (it still compiles Verilog and does smaller simulations), it never goes down.
  • by Tailhook ( 98486 ) on Monday March 03, 2003 @09:50PM (#5429413)
    This is a weakness of nearly all UNIXes, with the possible exception of AIX. You only need to read IBM mainframe propaganda to understand what workload management is all about. It is possible to run an entire enterprise - file servers, enterprise applications, OLAP, OLTP, customer-facing apps (web, etc.) - from a single system image.

    It is not possible to do this without strong workload management.
  • Re:Well of course (Score:5, Interesting)

    by Evil Adrian ( 253301 ) on Monday March 03, 2003 @09:51PM (#5429420) Homepage
    When you code, you are supposed to write documentation. In the Computer Science school of thought, documentation is part of coding.

    Undocumented software is basically worthless as far as I'm concerned. Software should be easy enough to use without having to refer to the documentation often, but the documentation should be there in case I need to find something, and quickly.

    And you're right, it is restrictive. But if that restrictiveness increases the distribution's quality, then it's fine by me.

    AND, if you're in such a hurry to use undocumented crap software, you can always just compile the source code and off you go - who needs a Debian package anyway?
  • DPS is in X 4.3.0, kernel 2.5 contains NFSv4 and AFS, kernel 2.6 will contain ReiserFS v4, and I've been using devfs for years without a hitch.
  • by Peax ( 600096 ) on Monday March 03, 2003 @10:12PM (#5429584) Homepage Journal
    Command-line-wise, almost none, although this has been changing (for better or worse). Linux has a much larger market appeal and following than any commercial UNIX. GUI-wise there are also no major differences: Linux, like most other UNICES (this is supposed to be the plural of UNIX), uses an X Window System.

    The major differences:
    - Linux is free, while many UNICES cost A LOT. The same goes for applications: many good applications are available on Linux for free, and even the same commercial application (if you wanted to buy one) typically costs much more for a commercial UNIX than for Linux.
    - Linux runs on many hardware platforms, the commodity Intel-x86/IBM-spec personal computer being the most prominent. A typical UNIX is bonded to proprietary hardware (and this hardware tends to be much more expensive than a normal PC).
    - With Linux, you are in charge of your computer, whereas on most UNICES you are typically confined to being an "l-user" (some administrators pronounce it "loser").
    - Linux feels very much like DOS/Win in the 80s/early 90s, but is much sturdier and richer, while a typical UNIX account feels like a mainframe from the 60s/70s.
    - Some UNICES may be more mature in certain areas (for example security, some engineering applications, or better support of cutting-edge hardware). Linux is more for an average Joe who wants to run his own small server or engineering workstation.
  • Re:In practice. (Score:2, Interesting)

    by IOOOOOI ( 588306 ) on Monday March 03, 2003 @10:37PM (#5429759)
    ... good software with bad documentation will probably fail to achieve widespread adoption. There are exceptions; OpenLDAP, for example (I wish somebody would point me to some comprehensive, well-written documentation for it), is one, merely because of the demand for the protocol it implements.

    Nothing is more frustrating than trying to configure a service that has crappy docs, and most casual/R&D users will move on once their blood pressure gets high enough. Sure, if your job depends on it you'll figure it out, but that's not exactly the foundation for a glowing recommendation.

    I don't necessarily need a man page; in fact, man pages are one of my last resorts. My ideal would be an XML doc that could be converted to whatever format I choose, but even a text file and some meaningful config.example files would do... it really doesn't matter what the format is. It's the quality of the information that's key.

  • by ChTom ( 131973 ) on Monday March 03, 2003 @10:41PM (#5429783)
    Is there something akin to the SE Toolkit for Linux?

    The SE Toolkit always seemed to be one of those "don't leave home without it" tools we had to have when working on high-volume/high-capacity Solaris systems.

    Can the latest Linux kernels be tweaked at runtime and boot time as much as the internals of Solaris or HP-UX or AIX?
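    A fair amount can be, via sysctl and /proc, though how far that goes compared to Solaris/HP-UX/AIX internals is debatable. A sketch, with an example parameter:

        sysctl -w net.core.somaxconn=1024                      # change it at runtime
        echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf   # make it stick at boot
        cat /proc/sys/net/core/somaxconn                       # or go via /proc directly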
  • by poopie ( 35416 ) on Monday March 03, 2003 @10:57PM (#5429887) Journal
    Herein lies the problem with automount on linux...

    everyone says: I use amd and it works for me...

    when you happen to find that amd doesn't work for you, you discover the ugly truth: amd is dead and buried, and you'll get no developer support.

    So, then you look at autofs and find out that it's still alive, but it's constantly in beta and still has a lot of issues.

    Automounters have a historical propensity to suck.

    Both amd and autofs are a steaming pile compared to Sun's automounter (probably because Fortune 500 companies have nagged Sun so much about glaring automounter bugs that they've now fixed most of them and are left with a relatively reasonable implementation).

    If you've ever worked somewhere that actually takes advantage of the automounter and pushes it to its limits, as opposed to using it for occasional file access from the command line, you're already painfully aware of how feeble the Linux automounter options are and how woefully inadequate their documentation is.
  • by ilctoh ( 620875 ) on Monday March 03, 2003 @11:00PM (#5429907)
    I've been using RH8 since it came out as my primary (only) operating system. I'm a high school student, so in addition to all of the geeky programming and stuff I do, I have to write the occasional history paper. Though at the present stage I don't believe OpenOffice has the polished interface that M$ Office does, it's free, and definitely usable. As for mozilla, konqueror, and audio/video/image (playback, at least), it does as well as or better than Windoze. It's even coming to the point where someone not so technically inclined can web-browse/email/word-process without major troubles.


    For servers, I'm using a RH-based system for a, brace yourself, beowulf cluster. Just install the necessary RPMs (which, btw, tend to install faster than Windows programs, and you don't even need to reboot your computer multiple times), edit some config files, learn how to use MPI, and you're set. We also have SSH, a web server, mysql, etc. on there, and there hasn't been a hitch (well, except for some hardware problems, but that's what you get when using Compaqs with 5+ year old hard drives...)
  • by Anonymous Coward on Tuesday March 04, 2003 @12:36AM (#5430407)
    Missing:

    1. A robust journaled, clustered filesystem supporting multiple concurrent mounts by separate machines, with ACL and quota support, extensible by NFSv4 or another IP implementation without giving up ACLs and quotas in the process of networking it. And one that doesn't cost the firstborns of all my staff, plus arms, legs, and other vital parts of our anatomy.

    GFS is close...but not there yet.
    GPFS is closer, but has its own API hooks that make it painful for some apps, and costs as described above.
    CXFS hasn't been ported to Linux yet.
    etc.

    2. A quota management suite that stores quota limits in a flexible SQL DB and applies them to the running system's quota files via a cron job. Right now, lose or corrupt the quota file and you lose your settings... or you restore from backup and wait while quotas update on a filesystem with 16M files.

    3. Access database read/write support. Not strictly necessary, but it would make that last bit of selling SOOOOO much easier.

    4. Games - I never play them myself, but I can't count the number of people who tell me they would move their home machine in a heartbeat if only the games were there.

    NOT missing
    1. Large System Support
    Got one with 1.8TB of user data files, 36,000 user home directories, and 16M+ files.
    BTW, ext3 is quite stable, thank you - thanks in part to one of my staff, who beats it to death finding problems with ACLs, quotas, ext3, etc. under heavy load and SMP.

    2. Performance - wind it up and watch it go...
    Our mainframe thinks it's a big day when it does 180,000 transactions. Our network servers think it's a holiday and everyone's at home.

    3. Full commercial support - GEEEZZZZ, I get sick of this one. Of course it has full vendor commercial support. Pay as much to Oracle, RedHat, Dell, IBM, or the consultant of your choice to support it as you do for a support contract on one of those overpriced OSes of yesteryear, and they'll happily support any damned thing you want!

    Or, take the view I cultivate: be your own support. It's cheaper to hire a couple of kick-ass programmers and 3-4 hot young sysadmins than it is to pay full support for scads of OSes and applications!
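    To make item 2 above concrete, the cron job I have in mind is only a few lines. A sketch - the table schema, database name, and paths are all hypothetical:

        #!/bin/sh
        # Nightly: rebuild the running quota settings from an SQL table.
        mysql -N -e 'SELECT user, soft_blocks, hard_blocks FROM quotas' quotadb |
        while read user soft hard; do
            # block soft/hard limits, then inode soft/hard (0 = unlimited)
            setquota -u "$user" "$soft" "$hard" 0 0 /home
        done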

  • by superpulpsicle ( 533373 ) on Tuesday March 04, 2003 @12:38AM (#5430425)
    1.) A kernel that doesn't require compilation... look at Sun for example.
    2.) Better SAN (storage area network) support. I think the kernel is too clumsy for HBA vendors to pull this one off successfully.
    3.) Better games. A game doesn't come to the Linux world until it has sold 10 million copies on Windows first.
    4.) An easy-to-learn visual programming tool. How come I can learn MS VB and VC++ in hours, but all the Linux tools are so poorly designed and unintuitive?
    5.) Better compatibility with Oracle. Ever try installing Oracle on Linux?! OMG, they have a lot to learn from Sun here.
  • by ndogg ( 158021 ) <the@rhorn.gmail@com> on Tuesday March 04, 2003 @01:21AM (#5430648) Homepage Journal
    Is it just me, or does it sound like a lot of Linux's missing features have a lot to do with its x86 origins?

    It seems pretty obvious that x86 was never meant for enterprise UNIX.
  • Re:Well of course (Score:5, Interesting)

    by Bill Privatus ( 575781 ) <(last_available_id) (at) (yahoo.com)> on Tuesday March 04, 2003 @01:40AM (#5430737)

    What the other UNIXes -- Solaris, Irix, MacOS X, etc. -- have is dedicated programmers who are paid to pore over the code to create improvements and nice custom little routines to make it all run nicely.

    Strongly disagree here. For years, as I became a guru-who-knew-all-the-UNIX-variants (from Interactive 386/ix to Xenix to System V/386 v3.2.3 to SVR4 to Texas Instruments to Bull to Motorola SysV/68 and SysV/88 to DG/UX R5.4.x... and many more), I became painfully aware of the commonalities between them.

    'awk' as released by AT&T was carried forward through many generations of UNIX versions, and every vendor of every variant invariably failed to fix any of the common bugs.

    I applauded the arrival of 'nawk', where you could put as many comments as you wished in an AWK script without 'awk' crashing. And, as I've implied, this was the case on every UNIX I used.

    At least my shell scripts were "portable". Bugs on one platform were bugs everywhere.

    Those UNIXes also are written to work on specific hardware.

    Now here you (and others before you in this thread) have hit the nail on the head.

    From IRIX to DG/UX to AIX to Dynix (well, ok, let's skip Dynix, that's another soapbox), the vendors turned out an absolutely marvelous level of hardware control.

    On an NCR 4550 (12 processors), I was able to detect when a failed processor board (paired CPUs) had been "dead-LEDed" and taken out of service - with a 50-line C program.

    On DG/UX, the logical volume manager - a system interface not to software RAID but to actual RAID hardware (c/o CLARiiON, thank you very much) - is still unmatched by any other implementation I've seen. Amazing.

    I'd submit that OS vendors know (read: think they know) where the money lies: in the synergy between OS and hardware.

    <rant>

    Almost any UNIX variant, when installed on some other vendor's hardware, would be no more than regurgitated AT&T SVR4.

    Many, if not most, of these variants no longer exist. I'm among the last of the dying breed of those who can claim to have lived through them all - and made them work, despite, not because of, vendor practices :-)

    </rant>

    Thanks for letting me vent :-)

  • A Hi-Res Clock (Score:2, Interesting)

    by ishmalius ( 153450 ) on Tuesday March 04, 2003 @02:36AM (#5430924)
    A milli- or microsecond clock that can generate signals to awaken threads or processes in wait() states. This has always been a neglected feature of Unix, and it forces programmer workarounds.

    Various Unices have avoided this feature for so long, which is regrettable because it is so useful for realtime processing. Anyone who has written a real-time timing loop for a Unix box knows how much faking it sucks, and how much the real thing is needed.
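    POSIX.1b already defines the interface being asked for; whether a given kernel can honor the resolution is the real problem. A minimal sketch (compile with -lrt) that arms a 250-microsecond periodic timer and wakes on its signal:

        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            timer_t tid;
            sigset_t set;
            struct sigevent sev;
            struct itimerspec its;
            int sig, i;

            /* Block the realtime signal so we can collect it synchronously. */
            sigemptyset(&set);
            sigaddset(&set, SIGRTMIN);
            sigprocmask(SIG_BLOCK, &set, NULL);

            memset(&sev, 0, sizeof sev);
            sev.sigev_notify = SIGEV_SIGNAL;
            sev.sigev_signo = SIGRTMIN;
            timer_create(CLOCK_REALTIME, &sev, &tid);

            memset(&its, 0, sizeof its);
            its.it_value.tv_nsec = 250000;       /* first tick in 250 us */
            its.it_interval.tv_nsec = 250000;    /* then every 250 us */
            timer_settime(tid, 0, &its, NULL);

            for (i = 0; i < 4; i++) {
                sigwait(&set, &sig);             /* block until a tick */
                puts("tick");
            }
            return 0;
        }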

  • by mexilent ( 469388 ) on Tuesday March 04, 2003 @03:33AM (#5431103) Homepage
    We run Veritas NetBackup on HP-UX machines, though we're an all-Linux shop. Why, you ask, don't we run Linux when the newest version of Veritas' (excellent) NetBackup DataCenter (4.5) is available for Linux _servers_ (not just clients, as in the past)? Simply: Fibre Channel tape library support. Any FC-AL support for anything (besides IP over Fibre Channel, which isn't very useful) would be nice!

    Granted, it's a total niche market, but HP, Sun, IBM, and Microsoft will continue to get their feet in the door at every company that backs up a lot of data - or uses SANs - because Linux is years behind in that department (we've had our SureStore 6/60 libraries for 3 years now).
  • Re:Well of course (Score:2, Interesting)

    by jafo ( 11982 ) on Tuesday March 04, 2003 @04:08AM (#5431223) Homepage
    It's a two-edged sword... With Open Source, you don't really get to complain about, for example, missing example sections, because you only have yourself to blame if you aren't contributing example sections to the man pages. You *CAN* put examples in the man entries; it's just that YOU haven't. ;-)

    Contribute an example today...

    Sean
  • by Derkec ( 463377 ) on Tuesday March 04, 2003 @04:18AM (#5431246)
    I'm responding to this comment as much as to all the others. I'm sick and tired of reading "must scale linearly." When you have many processors (or even two), some processing must be devoted to deciding where to run which jobs. This makes absolute linear scaling impossible: the more processors, the more time must be spent managing them. That said, Linux does need to get closer to linear. Sun has demonstrated that on some benchmarks, 128 processors yield a machine 127 times as fast - damn close to linear. In most other applications even Sun won't come near those numbers, but they will out-scale most of their competition, particularly Linux and Windows systems.
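    For the curious, the textbook way to put numbers on that management overhead is Amdahl's law: if a fraction p of the work parallelizes perfectly, the best speedup N processors can give you is

        S(N) = 1 / ((1 - p) + p/N)

    Hitting 127x on 128 CPUs, as claimed above, implies a serial fraction (1 - p) of roughly 0.006 percent - which is why near-linear numbers show up only on embarrassingly parallel benchmarks.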
