What High End Unix Features are Missing from Linux?

An anonymous reader asks: "Sun and other UNIX vendors are always claiming that Linux lacks features that their UNIX provides. I've seen many Slashdot readers claim the same thing. Can someone provide a list of these features and on what timeline they might be implemented in Linux?"
  • Well of course (Score:4, Insightful)

    by creative_name ( 459764 ) <pauls@nospaM.ou.edu> on Monday March 03, 2003 @06:19PM (#5427413)
    There are some features missing, after all - GNU's Not Unix.

    Seriously though, virtually any Unix feature could feasibly be added to Linux; it just takes time and manpower.
  • Here is my list (Score:5, Insightful)

    by Billly Gates ( 198444 ) on Monday March 03, 2003 @06:21PM (#5427438) Journal
    Read my previous comment. [slashdot.org]

  • NO POWER4 (Score:2, Insightful)

    by m0ntar3 ( 566330 ) on Monday March 03, 2003 @06:25PM (#5427499)
    High-end UNIX systems are integrated more tightly with their hardware. What makes a vendor like IBM? Hardware. Just because Linux runs on the hardware doesn't mean it supports it efficiently and effectively. IBM doesn't make hardware to sell AIX; it makes AIX to sell hardware.
  • by Anonymous Coward on Monday March 03, 2003 @06:26PM (#5427509)
    Look at the high end UNIX systems and the features they have added for reliability. If the H/W supports a hot-swap component, then Linux needs to support it as well or it will stay second-rate on that platform.
  • Wrong Question (Score:5, Insightful)

    by turgid ( 580780 ) on Monday March 03, 2003 @06:27PM (#5427516) Journal
    Not to flame, but "Unix" is an API, an ABI, and a set of conformant utilities and libraries. To ask "what high-end UNIX features are missing from Linux" is missing the point somewhat, since these features are necessarily non-standard, and therefore "not Unix". Of course, the immediate obvious answer is support for all the NUMA, ccNUMA and COMA hardware out there (which is specific to each machine, let alone vendor), things like domaining, partitioning, and hot-swappable CPUs (again, specific to each individual machine). Perhaps a better question might be: "How could the Unix standards be extended to encompass these developments, and how could the Linux kernel implement them (or provide an infrastructure for them)?"
    Just my £0.02 worth.
  • One thing (Score:2, Insightful)

    by stratjakt ( 596332 ) on Monday March 03, 2003 @06:28PM (#5427528) Journal
    A non-beta, standard, production-quality journaling filesystem.

    EXT3 is a backwards-compatibility hack, and Reiser, while good, is perpetually in the "testing" phase.

    I'd even extend that to fault tolerance in general. When Linux goes down, it goes down hard.
  • by puppetman ( 131489 ) on Monday March 03, 2003 @06:29PM (#5427536) Homepage
    The most common complaint I hear about Linux is that it can't replace Win2k on the desktop.

    Now we hear complaints that it can't replace Sun on the back end.

    Which one is it? A desktop OS, or a server OS? Granted, it does both well, but I think it's not the best in either category (no, not trying to troll).

    It doesn't have the games and apps on the desktop (though it's getting better all the time), and it's not as reliable on the back end. We have a bunch of app/web servers in our middle tier; some are Sun servers running the latest OS from Sun, and some are Intel PCs running Linux. The Linux machines crash far more often. Granted, hardware could be at least part of the problem.

    On the other hand, we have our database (Oracle) running on Win2k with dual P3 933 clones. One of our databases, with an average Oracle load of 10%, did not crash for over 300 days. That's pretty damned good. Our other machine (with a much higher load) crashes every month or two (or at least needs a database restart).

    Perhaps it's time for Linux to split into two separate camps: a version of Linux for servers, and a version for the desktop.
  • Just my opinion... (Score:1, Insightful)

    by Anonymous Coward on Monday March 03, 2003 @06:29PM (#5427544)
    AIX's LVM flat out rocks. The day I see an LVM as good as AIX's in Linux is the day I will begin to take Linux seriously.
  • Propaganda (Score:4, Insightful)

    by shtarker ( 621355 ) on Monday March 03, 2003 @06:30PM (#5427549)
    Perhaps you should start by having a look at the websites of a few people who actually sell Unix:
    http://www-1.ibm.com/servers/aix/ [ibm.com]
    http://docs.sun.com/db/prod/solaris.9u1202#hic [sun.com]
  • by Shane ( 3950 ) on Monday March 03, 2003 @06:30PM (#5427562) Homepage
    NFS does not seem to get much attention on Linux. Most Linux admins I know use Samba for network file shares.
  • Desktop or Server? (Score:2, Insightful)

    by ekarjala ( 446184 ) on Monday March 03, 2003 @06:35PM (#5427623)
    A lot of the comments on this post seem desktop-centric so this has to be discussed in 2 parts:

    Server: Sun/HP/IBM look at Unix as a server solution. In order for Linux to compete with Unix in the enterprise data center, it needs a unified support model (RedHat, IBM, HP, etc. are beginning to address this), more SMP support (dozens of processors), better memory addressing capabilities, etc. Mostly what it needs is more time and exposure - people will come around.

    Desktop: All it needs is to "be there". All other desktop 'unixes' are painful compared to what is available from Gnome, KDE, or Windows Manager.

  • by TheMidget ( 512188 ) on Monday March 03, 2003 @06:37PM (#5427643)
    Oh sure 'where's your proof', or 'give me an example' you might say, but to that I say bah.

    Yeah, whatever... And this is moderated as Informative ?!?

  • Re:Here is my list (Score:1, Insightful)

    by Anonymous Coward on Monday March 03, 2003 @06:37PM (#5427651)
    Funny that your post from last year claims that Sun would have the Ultra Sparc IV out by now. I haven't seen it and I don't think we'll see the Ultra Sparc V at the end of this year either. In fact, I haven't seen a Sun processor roadmap in a LOOONG time....

  • Soft Features... (Score:3, Insightful)

    by NOT-2-QUICK ( 114909 ) on Monday March 03, 2003 @06:38PM (#5427662) Homepage
    As another poster commented earlier, there are few, if any, features that the mainstream Unices can or do offer that cannot be ported in some fashion to Linux...it is simply a matter of time & effort!

    From my perspective, the primary things that Linux lacks next to the commercial brands are more in the realm of "soft features"... aka non-technical features!

    The two biggest, I would estimate, are (1) a unified product offering and (2) an active sales force...

    To address the first issue, I would submit that having the Linux operating system splintered into many unique distributions is both a strength and a weakness (depending on your perspective). An obvious technical strength is the niche-filling capacity of the several flavors of Linux that can and do meet the needs of an extremely diverse market... Alternatively, a "soft" weakness exists in the sense that branding/commercializing a product with so many various "names" is difficult if not impossible! Linux in and of itself is a generalization of the group of OS's that are built upon the Linux kernel... that is not an easily sold concept to a manager who wants someone to blame should things go south!

    As for the second concern, a non-existent sales force, that is a rather obvious (at least to me) obstacle to widespread corporate adoption of Linux! Sure... every I/T department has a Linux zealot or two and can read positive write-ups on the benefits of Linux. However, this is not quite equivalent to the polished sales professionals (snake-oil salesmen?) who live, breathe and die with the sole purpose of peddling their specific flavor of Unix!

    Anyways...just some food for thought! As always, I could be completely off base and living in my own happy little world! :-)

    n2q

  • by Anonymous Coward on Monday March 03, 2003 @06:40PM (#5427679)
    I'll use Solaris over Linux any day. There are no questions about hardware: everything that's supposed to work together works together! x86 hardware is some kind of giant junk storm that's hard to keep track of. Solaris works solidly and safely on Sun SPARC hardware... but Linux on random x86 from Bob's Computers? It doesn't inspire confidence.
  • by McDiesel ( 447709 ) on Monday March 03, 2003 @06:41PM (#5427698)
    Yes- and I admit that I never have. I don't doubt that your machine was fast. But the question is, if you have a computing problem which can take advantage of multiple processors, and you add additional CPUs to the problem, how does processing scale? Does it scale linearly? Most people will tell you that Linux 2.4.x and before, using Symmetric Multi Processing will not scale linearly.
    This means that adding 8 CPUs does not perform your computation at 8x, but some lesser multiplier. If you have a particularly compute intensive problem, perhaps you want to add 64 CPUs to the problem...
    Supposedly the kernel developers are currently debating how to scale to support such high-end systems. Some have suggested implementing virtual partitions of machines, which is exactly what Solaris does on the E10K - so a 64-CPU machine actually functions as 4 16-CPU machines...
    Others suggest that the problem should not be solved by dividing computational problems within the same host, but by spreading them across hosts, as Beowulf is supposed to do (again, not in my realm of experience...)
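    To put rough numbers on it (an illustration, not a benchmark): Amdahl's law says the best speedup on n CPUs is 1 / ((1-p) + p/n), where p is the fraction of the work that parallelizes. Even at p = 0.95:
        # Amdahl's law, assuming p = 0.95 and n = 8 CPUs
        echo "scale=2; 1 / (0.05 + 0.95/8)" | bc -l    # ~5.92x -- nowhere near 8x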
  • by Terralthra ( 618067 ) <terralthra@terralthra.net> on Monday March 03, 2003 @06:41PM (#5427704) Homepage
    5 CPUS being faster than 4 CPUs is not the same as scaling linearly.


  • Re:so? (Score:3, Insightful)

    by McDiesel ( 447709 ) on Monday March 03, 2003 @06:45PM (#5427747)
    Scalability inside one box might be desirable if your application requires intensive bandwidth between the components of your computation.

    Gigabit ethernet between two boxes in a beowulf cluster may be nice, but how does it compare to passing data back and forth over shared memory segments?
  • by martins99 ( 168363 ) on Monday March 03, 2003 @06:45PM (#5427748)
    (LVM) truly rocks... being able to resize freely without interruptions is really just marvellous :)
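
    For the curious, the dance on Linux LVM looks something like this (volume names hypothetical; whether the filesystem itself can grow while mounted depends on the filesystem and kernel):
        lvextend -L +2G /dev/vg0/data    # grow the logical volume by 2 GB
        resize2fs /dev/vg0/data          # then grow the ext2/ext3 filesystem to fill it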

  • by fleabag ( 445654 ) on Monday March 03, 2003 @06:49PM (#5427794)
    High-end Unix scales just fine in my experience. If you have seen 10K & 15K machines partitioned into a number of domains, then they are being used for server consolidation - getting rid of several 6Ks, 4500s, etc. The huge advantage is that you can keep 2 CPU boards in reserve and allocate them as applications grow.

    We are currently performance-testing 24-CPU configs, with the intention to scale to 60 when we get the kit. 4 -> 24 has been a linear improvement so far.

    I agree with the sentiment. 8- and 16-way motherboards/backplanes would be made properly, as opposed to the consumer-grade items that Linux currently runs on, and Linux would finally have a chance at running a big Oracle DB on decent hardware. This can only be a good thing in terms of publicity.
  • Re:Well of course (Score:5, Insightful)

    by Dan Ost ( 415913 ) on Monday March 03, 2003 @06:49PM (#5427797)
    Funny, but sad in its truthfulness.

    The FSF has for some unfathomable reason decided that man pages are obsolete, and so the man pages for GNU utilities are horribly incomplete. Many Linux developers seem to agree that man pages aren't worth the effort it takes to make them usable.

    BSD, on the other hand, goes to great lengths to make its man pages clear, helpful, and complete.

    Why can't Linux be more like BSD in that respect?
  • by Anonymous Coward on Monday March 03, 2003 @06:50PM (#5427806)

    FreeBSD, another free UNIX variant, has had excellent SMP support for longer than Linux has.

    The FreeBSD kernel in the 4.X series was not multi-threaded (only one app could be in the kernel at any one time); the 5.X series remedies this situation (though not everything has been cleared of the Giant lock).

    Disclaimer: I use FreeBSD on my home system and on my laptop.

  • by sean23007 ( 143364 ) on Monday March 03, 2003 @06:52PM (#5427833) Homepage Journal
    I disagree that Linux should split into a server version and a desktop version. Many of the things that make a server work better would also improve the desktop, and vice versa. On another note, while Linux may not be better than the best on the desktop nor better than the best on the back end, it is improving faster than its opponents in both categories. The future is promising for Linux, and even more so for users (after all, if Linux fails, it will ultimately be because someone else has something better for the same price -- good for users).
  • by Doc Hopper ( 59070 ) on Monday March 03, 2003 @06:57PM (#5427882) Homepage Journal
    A few things that are very nice about some commercial UNIX variants and that you don't have on GNU/Linux systems:

    1. Integrated systems management, à la SAM in HP/UX. Although I'm first in line to say that systems administration should never be handed over to imbeciles, SAM is easy enough that non-professionals can use it, yet it covers all the bases of systems administration, from your hosts file through recompiling a kernel. It seems to be what Linuxconf wants to be, but isn't quite yet. It also does this without royally screwing up particularly hard-fought configuration files. Just use Linuxconf to configure network interfaces after you've set up a beautiful five-line config and see what it does to /etc/sysconfig/network-scripts/ifcfg-ethX. Red Hat's config tools are getting there, and YaST seems to have nailed it -- but it's not free software.
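
    (For reference, the file in question is just shell-style variable assignments, something like this -- addresses hypothetical:
        DEVICE=eth0
        BOOTPROTO=static
        IPADDR=192.168.1.10
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1
        ONBOOT=yes
    -- which is exactly why a config tool rewriting it carelessly is so annoying.)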

    2. Transparent X configuration w/3D support out of the box. When the installers get it right (about 75% of the time), Linux + X-windows is just fine. When they get it wrong, the iterations are ugly:
    XFree86 -configure
    (blah blah blah)
    XFree86 -xf86config /root/XF86Config.new
    (dumps out some obscure error)
    vi /root/XF86Config.new
    (ad nauseam)

    I miss how trivial it is to adjust X on my old Sun. Then again, there, instead of hacking a config file, you had to hack some obscure command options. And setting up dual monitors on XFree86 is much better than on Solaris (or was, back when Solaris 8 was the standard, haven't mucked with Sun equipment much since then).

    3. More on the X server: FAST X services. I've run XFree86 on really new, top-of-the-line Nvidia, ATI, and Matrox hardware, and not one of them can even touch the performance of X-windows on my old SGI O2. IRIX X is just amazingly faster. I'm not talking so much about 3D performance, but multi-head, full-window-drag type stuff. Watching the ghosting as I wiggle this very screen I'm typing in back & forth on my RedHat 8.0 box at work right now, on an Nvidia GeForce4 @ 1280x1024, is just painful. I know people are going to say "it's the configuration, stupid!" but if optimizing for decent X-windows performance isn't easy enough for a UNIX veteran of 7 years to do without serious pain, it's not easy enough for an admin to want to deal with it.
    NOTE: I compiled everything for 686 at home on Gentoo with Nvidia's drivers. It's considerably better, but still doesn't compare. Then again, I don't have an O2 anymore for a real head-to-head comparison, so maybe my memory is playing tricks on me. On the other hand, identical hardware in MS Windows gives immensely better 2D performance.

    Then again, that's just a graphics professional feature, more than a server-type feature. Comparing any other UNIX to SGI's IRIX for graphics work is just no contest.

    4. Memory fault isolation. On Solaris, I'll actually get a message telling me which DIMM is bad and which slot it is in. Admittedly, this is a feature not only of the operating system but also of the hardware design. When you have 30-some-odd DIMMs in some E10K server, if you didn't have this kind of isolation, trying to find the bad stick of RAM would be beyond time-consuming. Ditto for HP/UX when replacing faulty RAM. Once again, though, IBM seems to be addressing this with their higher-end servers, and I look forward to about a year from now when it becomes more of a common feature on GNU/Linux servers.

    5. Something like "OpenBIOS" or Sun's OpenBoot (I think that's the name? Been a while, I forgot). This is great to work with, for instance, on Alpha systems. Fairly complete diagnostics before the OS even boots, and it all gets shucked out the serial port. You can compensate for this by installing some kind of lights-out management board in your PC, but if you ask any UNIX admin that has used the non-PC-BIOS stuff on pro UNIX systems, a PC BIOS just doesn't compare. For instance, on the Alpha I have at home, I can hook up fibre channel and enumerate all the available partitions, flag one as bootable, mount some filesystems and make changes, force boot to HALT temporarily rather than boot to full, stop the OS, do a memory dump, sync the filesystems and reboot... a whole lot.

    GNU/Linux on Alpha/Sparc inherits these benefits, and so it is a non-issue. GNU/Linux on X86 still really, really sucks in this dep't.

    That's about all I can think of for now. The difference between managing UNIX systems from Sun & HP, versus PC-based GNU/Linux systems, is still large but shrinking. As evidenced above, a BIG chunk of what still sucks about Linux is due to hardware & hardware integration, not the O/S itself, really. GNU/Linux is definitely getting there; I love running it on my Alpha at home, because I get many of those benefits mentioned and still use the operating system I love.
  • system information (Score:3, Insightful)

    by br00tus ( 528477 ) on Monday March 03, 2003 @06:58PM (#5427897)
    I can get on a Solaris box and find out all kinds of information right out of the box: prtconf, sysdef... I can even check the temperatures of the boards. This stuff can be done on Linux, but on Solaris it is just easier and there by default, on every Solaris.
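    A quick sketch of what I mean (the Solaris commands are standard; the Linux side is scattered across /proc and assumes the right tools are installed):
        # Solaris, out of the box:
        prtconf                   # hardware configuration tree
        sysdef                    # devices and kernel tunables
        prtdiag -v                # board/CPU status, incl. temperatures on many models
        # Linux, the rough equivalents:
        cat /proc/cpuinfo /proc/meminfo
        lspci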
  • by boschmorden ( 610937 ) on Monday March 03, 2003 @06:59PM (#5427908)
    PeopleSoft, SAP, and Siebel.
  • by rhfrommn ( 597446 ) on Monday March 03, 2003 @07:00PM (#5427916)
    I admin about 20 solaris servers and have been a Unix admin for about 5 years. The main reasons I won't switch to Linux can be summed up in a couple short points.

    1. Too small. It won't run on big enough boxes to do real datacenter work. My company runs data warehouses in the terabytes on servers with more processors and memory than Linux can handle. Before Linux can compete in the datacenter it needs to handle 16 procs at least, preferably as many as Solaris and the other commercial Unix implementations can. One other thing that is needed is for a volume manager and filesystem product with the functionality I can get from Veritas on Solaris. When you're dealing with 100-900 GB filesystems like the ones our databases live on the stuff built into Linux doesn't work.

    2. Too fragile. I've never tried running big Oracle databases on Linux but what I've heard from people that have is that it is too prone to crashing and corruption. Plus the stability of the hardware isn't there. You simply can't buy an Intel/Linux server that has the stability and reliability that a Sun/Solaris box has. Hot swappable hardware, the ability to route around failures without a panic or reboot, and so on just doesn't exist (or at least is extremely uncommon) for Linux yet.

    Both of these issues may well be fixed in the near future, but for now Linux misses the mark too badly for me to even think about recommending it.
  • by jashbrook ( 109929 ) on Monday March 03, 2003 @07:07PM (#5427978) Homepage
    It seems that one problem Linux will encounter going forward against the old-school *NIXes is that it is VERY portable. The OS supports a huge array of hardware. When people want Linux to work, they want it to work on whatever PC they have in the back room. If it doesn't work, something gets changed in the code so that it eventually does. This provides a huge number of input factors to the Linux code base.

    "Old school" UNIX traditionally runs on a small set of proprietary hardware. Supporting less hardware means more bandwidth for features and hardware-specific implementations. It also means fewer hardware configurations need to be tested for reliability.
  • by antifun ( 648481 ) on Monday March 03, 2003 @07:10PM (#5428002)

    Linux's NFS server support has gotten leaps and bounds better since about 2.4.14 or so. The "bleeding edge" NFS stuff works quite well. Is it quite up to Sun's standard? No, it's not. But it's getting close.

    Of course, the perception of Linux's NFS support was probably done a fair amount of harm by Red Hat's bastardized 2.4.18 kernel that shipped with 7.3. BROKEN NFS client support out of the box with anything but Linux servers. Sent our big Sun servers into the ozone every time the load grew beyond "trivial."

    If you're interested in good NFS performance, throw your Red Hat kernel away and build a clean 2.4.20, or one with the NFS patches* if you're running servers.

    The patches are here. [sourceforge.net]

    Now, the Linux automounter (I'm talking about autofs, not amd) behaves very badly at times, but that's another story...
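
    Building that clean kernel is the usual 2.4-era routine (paths illustrative):
        cd /usr/src/linux-2.4.20
        make menuconfig           # enable the NFSv3 client/server options
        make dep                  # 2.4 kernels still need this step
        make bzImage modules
        make modules_install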

  • by Enry ( 630 ) <enry@@@wayga...net> on Monday March 03, 2003 @07:12PM (#5428024) Journal
    Ecch. It's pretty broken, especially if you're going cross-platform. autofs doesn't support indirect (or was it direct?) mapping, ypbind frequently times out while attempting to contact a NIS server (even though we have 6 in the subnet that are all relatively unloaded), and then there are the usual NFS fixes.

    C'mon, we've known for ages that 1024 for the receive and transmit sizes (rsize/wsize) are really bad values; why is that still the default instead of 8192?
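
    Until the defaults change, you have to ask for sane values on every mount yourself (server and paths hypothetical):
        mount -t nfs -o rsize=8192,wsize=8192 server:/export/home /mnt/home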
  • Re:Here is my list (Score:5, Insightful)

    by ivan256 ( 17499 ) on Monday March 03, 2003 @07:30PM (#5428167)
    1.) I have hot-swappable drive support. HP is working on this for W2K, but does Dell have this?

    This works fine in linux. If you're crazy you can even do it with IDE disks.

    2.)I can upgrade the hardware while the system is running!

    This is a hardware feature more than an OS feature. Linux supports hardware that supports hot-swapping. Hot swap PCI, pcmcia, USB, and SCSI are all great examples of this.

    3.) I have 64-bit memory access and integers for workstation CAD apps as well as database access. Type double in C/C++ does not allow enough precision. Int64?? I can use larger numbers with more decimal points.

    Again, hardware-related. Buy an Alpha, or an UltraSPARC, or an Itanium, or... You get the idea.

    4.) I have a scalable server with superior clustering software that NT and Linux lack.

    You need superior cluster software? I'll sell you superior cluster software. :)

    5.) With up to 128 processors I can have one fast mutha.

    Again with the hardware. Linux supports huge numbers of processors too. It's your i810 motherboard that's the problem here.

    6.) World-class stability. Linux has serious VM problems, and the filesystem has been known to corrupt under large disk loads. Ask any database admin who uses Oracle on Linux. Real servers need 24x7 support, and Linux is close -- it is very stable but has some rough edges in heavy server use. A reboot could be disastrous and cost tens if not hundreds of thousands of dollars. May God help you if your warehouse database crashes or if your factory goes offline for a system reboot.

    Give me a kernel panic in Linux and I'll give you one in solaris. Better yet, I'll give you highly available clustering software so you don't have to worry about those pesky and rare panics. Can't have down time? You won't even notice it. Really.

    7.) World-class support. If a chip fails you can have an engineer from Sun with a replacement part at your office within a matter of hours, if you're a gold member!

    You're talking hardware again. There is plenty of world class linux software support out there. If you want hardware support you simply have to pick the correct hardware vendor.

    I'm not saying linux has it all, but it's got everything on your list.
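
    To make the hot-swap point concrete: on a 2.4 kernel you can tell the SCSI layer about a newly inserted disk without rebooting (the host/channel/id/lun numbers here are hypothetical):
        echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi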
  • Re:One thing (Score:3, Insightful)

    by zurab ( 188064 ) on Monday March 03, 2003 @07:39PM (#5428264)
    Ext3 is very stable. I've used it for over a year on my laptop and have never lost anything

    Being "stable" on your laptop means nothing. The question is about high-end Unix servers, very heavy load, mission critical complex apps.

    XFS (SGI) and JFS (IBM). XFS has been there for years and is not exactly what I'd call experimental stuff...

    Obviously those file systems have been around, but their implementations are still "experimental" under Linux; even though they have been working on implementations for some time now.
  • Re:Well of course (Score:4, Insightful)

    by jrstewart ( 46866 ) on Monday March 03, 2003 @07:51PM (#5428408) Homepage
    info was a great idea before HTML and web browsers existed. Even then, the info program was almost unusable. Now we're probably better off using the texinfo toolchain just to generate HTML.

    And GNU's high horse stance on man pages just pisses me off. Manual pages still serve an important purpose even with HTML or info docs available.
  • Re:Wrong Question (Score:3, Insightful)

    by Wateshay ( 122749 ) <bill@nagel.gmail@com> on Monday March 03, 2003 @07:57PM (#5428468) Homepage Journal

    The Application Binary Interface (ABI) is the interface between the operating system and executable applications at the compiled binary level.

    ...and, to say something on topic, I disagree with the original poster. Unix is not a standard; POSIX is a standard. Unix was an operating system developed back in the '70s, which has evolved and forked over the years into a number of different operating systems, all collectively referred to as Unices, as well as mostly source-compatible workalikes such as Linux. All of those different Unices offer similar interfaces and implement standards such as POSIX to differing degrees of conformity. Above and beyond those standards, each Unix offers its own set of features that set it apart from all of its brethren, and discussing those features is quite valid. Of course, that doesn't mean more standards wouldn't be a good thing.

  • Re:Well of course (Score:5, Insightful)

    by sbaker ( 47485 ) on Monday March 03, 2003 @08:25PM (#5428715) Homepage
    Oh - for a creative way to say "Me Too".

    'man' has just exactly what you need, in exactly the right order.

    First, the bare minimum - the name of the program or function and a one-sentence description of what it does.

    Secondly, its usage, with a well-thought-out meta-language - generally enough to nudge your memory if you already know the command. For functions, it also tells you what odd-ball header files you might need.

    Thirdly, a *slightly* more detailed description - and a concise list of the options/parameters - not spread out over many pages... right there.

    Fourthly... more stuff... that you may or may not care about.

    The information cleverly gets more and more detailed - so you generally get 99% of what you need right there in the first screenful.

    If I want more info than a two-screenful man page can deliver, I want it on the web in a browser in HTML. I don't want to have to learn another markup language - or another navigation scheme - and I want a choice of a dozen convenient browsers.

    info does neither of these things - it sucks and needs to *DIE*.
  • Re:Well of course (Score:5, Insightful)

    by Drakonian ( 518722 ) on Monday March 03, 2003 @08:37PM (#5428835) Homepage
    OK, I have a humble request. Is it against the rules to put some examples in man pages? The language of man pages is sometimes so arcane. I think people learn best by example; why can't man pages have a couple?
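    Even something this small at the bottom of a page would go a long way (a hypothetical EXAMPLES section for find(1)):
        EXAMPLES
            find /var/log -name '*.gz' -mtime +30    # list compressed logs older than 30 days
            find . -type f -size +10000k             # find files bigger than ~10 MB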
  • Re:Well of course (Score:4, Insightful)

    by Tailhook ( 98486 ) on Monday March 03, 2003 @08:45PM (#5428897)
    "RMS does have a point, but the answer to a bad help system is not no help system at all. His Info system was an improvement fifteen years ago, but coupling the help system to the largest program on the machine ain't exactly bright, unless of course your real objective was to force everyone to use emacs."

    Borland had this right about 15 years ago. Ever used the help system in Turbo C for DOS? Very nice. Basically worked like a web browser. I miss it.
  • by g4dget ( 579145 ) on Monday March 03, 2003 @08:49PM (#5428924)
    SMB is fine for office and simple day-to-day activities, but it does not preserve UNIX file system semantics. SMB is also fine for accessing files on a Windows file server because, no matter what you run on the file server, it already won't behave like a UNIX file system.

    However, if you are serving files from a UNIX/Linux server to UNIX/Linux clients, SMB is not the way to go. NFS and a few other network file systems were designed for UNIX and they do a better job at preserving UNIX file system semantics. That really does matter in more "heavy-duty" applications.

  • by Morgaine ( 4316 ) on Monday March 03, 2003 @08:58PM (#5429009)
    Let me dispel a major misconception regarding the stability and lack of problems on big iron like large Suns and Netapps. The problems are every bit as prevalent as on little iron running Linux... you just have to push the big systems harder.

    Loading up Unix/NFS systems from such vendors to meet the needs of multi-million-customer ISPs can produce no end of nastiness in the native software of their machinery, especially in networking and filestore kernel functions. A professional outfit doesn't push its systems to such extremes by design, but alas, multi-million-customer ISPs have nightmarish management structures that grind exceedingly slowly, and sometimes planned capacity is reached and exceeded before extra boxes become available. In the ensuing month or two of desperate firefighting to keep the systems up, eye-opening problems sometimes arise that don't help reduce the general air of panic...

    ... like storage systems that start serving only a fraction of their rated NFS ops owing to internal filestore management bottlenecks, despite their disk, CPU and I/O resources not running hot.

    ... like total freezeups when internal, unadvertised limits on the sizes of directories are reached (yay, triggered by a really bad spam which you couldn't keep up with because you were overloaded before it even started).

    ... like the realization that not all parts of a vendor's kernel are equally optimized, and that if you select something unusual to give yourself an extra couple of percent of performance, you might find a truss of "ls -l" showing each syscall taking 30 seconds to complete despite the system allegedly working normally.

    ... like large hardware beasts that under pressure give up the ghost and die, despite passing all the vendor diagnostics, and despite all internal components being swapped out during days and nights of engineering visits, until in total despair you raise it to board level and the respective MDs (over a game of golf, no doubt) decide finally to replace the entire thing just to keep it out of the press.

    ... and plenty more. When things go wrong, it's not the most relaxing job in the world.

    Furthermore, don't think that having extortionately priced platinum maintenance contracts saves your bacon every time. Sometimes the response is extremely good if someone else has suffered the same problem and it's recorded in their support database and they have a fix. But on other occasions the big vendor's analysts just look in bafflement at the performance indicators and recommend extra boxes (well they would), and on a few rare occasions they simply refuse to admit that your very thorough measurements and timestamped traces indicate that there is an internal problem in their machinery. Now that's bad.

    And finally ... big vendor support helpdesks. If you've ever placed a desperate support call in the middle of the night, only to be greeted with a response of "What is telnet?", or if you've been requested to send in an urgent diagnostic system dump, seen it fail in the file transfer to the vendor site, and get told "Oh yes, we only have 10 meg free on the server" ... then you too may wonder why you're paying them all that money every year.

    Fortunately there's more good than bad coming from the big iron boys, but to think that all is roses in that area and in big-iron Unix would be a misconception.
  • by bruthasj ( 175228 ) <bruthasj@@@yahoo...com> on Monday March 03, 2003 @09:09PM (#5429069) Homepage Journal
    # You simply can't buy an Intel/Linux server that has the stability and reliability that a Sun/Solaris box has.

    Sure I can. It just depends on the software load that you put on the system and its overall architecture. Solaris has advantages in some things, but it's becoming more and more marginalized by Linux as it moves forward. As far as hardware on SPARC goes -- in my experience I have seen it crash just as much as Intel. Of course, you say Intel, but which vendor is producing the materials for the "Intel" system? It requires a little initial fact-finding prior to purchasing the hardware, as compared to Solaris/SPARC... since Sun is the only one that spits out those systems.

    The software that I'm involved in is Manufacturing Control Systems. Our architectures vary quite a bit from factory to factory. We've run on Xenix, Interactive Unix, Solaris 7/8, SunOS and Linux. I wasn't with the company for Xenix, but Interactive is a dog and terrible at filesystem stuff. We used that until Sun bought it and integrated it into SunOS to become Solaris. Then we moved along with it and began using Solaris on Sparc machines. This worked quite well at one factory, except it was a bane trying to train the customer on how to set up the systems.

    Then we went to Linux. Linux brought us not only more bang for our buck, but -- on an OS level -- more stability for our buck. Yes, we did purchase copies of RedHat Linux ... not just download them. The first Linux/Intel system sucked entirely because of the Hardware. Well, to point the finger, it was the darned power supplies ... (does that count as Intel Hardware??) ... we purchased from a cheap distributor of Linux/Intel 1U servers. I won't name them here, but if you want to know, email me. It was a big mistake. So, this justifies a little bit of what you said about hardware.

    But our next system was Solaris/Sparc. This time we used JumpStart and a bunch of nifty things to make it easier for the customer to get it set up. The integration on Solaris/Sparc for these kinds of things is quite cool, and I hope Linux/Intel can put something similar together. Anyway, we began using NFS in our last architecture and then used the same arch on Solaris/Sparc. Huge mistake. Don't ever run something worth over 1 billion dollars on NFS/RAID with Solaris. Sorry. The downtime/crashing that occurred with it is way above the norm. It crashed 2 or 3 times last year. It's horrible on network performance too, because the system may have scaled way beyond Solaris' capacity. (60 nodes communicating with 1 node grinds the CPU terribly on Solaris.) I know I don't have the numbers to back these up; my only benchmark is how loudly the customer yells.

    Our latest system uses IBM xSeries with dual hard drives in a RAID 1 configuration. Excellent systems. The per-computer cost reaches about the same price as Sparc, and I would risk saying the hardware stability is there. IBM HDs are extremely reliable and the design of the systems is quite fault-tolerant. Maybe in six months, when /. dupes this story again, I'll give you an additional opinion on the matter.

    The use for Linux in the Enterprise is here and now. If you cannot envision that, then you'll be left behind, plain and simple. The next stable Linux kernel will make it even more so.

    Just my .02, thanks for reading this tome.
  • man vs info (Score:5, Insightful)

    by jefu ( 53450 ) on Monday March 03, 2003 @09:14PM (#5429103) Homepage Journal
    I want to add my voice to the throng.

    I like man. I like man -k . I don't like info much.

    A well written man page provides a minimal description of the program/system call/... and provides the information a user/programmer really needs quickly and easily. Do man in a terminal and use "/" to search quickly. Do man in emacs and get the ability to do more with the result. Do man: in konqueror and get more.

    Info tends to provide long-winded descriptions of this, that and t'other, usually completely unindexed, unsearchable and, for most of my purposes, unusable. Never mind that I now need man for some things, info for other things, HTML for other things, and so on....

    Personally I'd rather like to see an XML format that would enable documentation writers to build both HTML pages (I personally think "info" is obsolete) and man pages at the same time. (That is, with tags like <synopsys>, <see-also> and the like, as well as with tags to mark indexable terms.) Ideally it should be possible to generate man pages, a HOWTO, and a set of HTML pages for users all from the same input.
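
    (That's roughly the DocBook approach: one XML source, different stylesheets per output. A sketch, with hypothetical stylesheet names:
        xsltproc man.xsl  foo.xml > foo.1       # generate the man page
        xsltproc html.xsl foo.xml > foo.html    # generate the HTML docs
    The hard part isn't the toolchain; it's getting authors to write the source.)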

    But I'd rather have everything documented in "man" style than any of the rest.

  • by Tailhook ( 98486 ) on Monday March 03, 2003 @09:22PM (#5429157)
    Stuff Linux "needs":

    Support for hotswap CPU/RAM etc. This is tough without hardware vendor support. Getting the info to write the driver (under NDA or whatever) is one thing. Proving the OS can actually cope with a CPU hotswap is another. Without high end hardware for testing, this ain't gonna be real. Solution: force the vendors to make Linux a priority on high end hardware.

    Mature LVM. Mature enough that you bet your career on it, like HP/Sun/IBM admins do every day while barely understanding what's really involved. Having multiple competing (diluting?) implementations doesn't help.

    >8 way scalability. If I had to pick from amongst my wish list, this would be one of the last. However, it does matter. For credibility, if nothing else. Solution? Hmm. Breakthrough in OS engineering, where the big boys get the scalability they want without compromising the low end. Ain't been done yet. But then, that's where the real opportunity is huh?

    Compatibility with some significant percentage of the bizarre third-party hardware in the world. Like EMC^2 arrays and the wild world of Fibre Channel. On one hand, Linux can and does thrive quite happily in the edge/cluster/small-database/terminal market. On the other, until you can manage a high-end drive array from Linux (no, NFS doesn't count), that is where it's gonna stay. Only market share will make this happen.

    Diagnostics that don't suck. Again, low level hardware vendor support required. So you paid extra for that nice ECC memory in your self-built machine. Do you know what would actually happen if a bit went bad? What would you get in the way of diag from the machine? Bet most of you don't know... Not good enough. Solution? See "hotswap" above.

    Time. Linux is competing with OSes that are 3 times as old in some cases. PHB instinct is going to shy away from something less mature. Truth is those instincts tend to keep planes in the air, whether it fits your agenda or not. Linux isn't exactly new, but it hasn't really met the test of time yet either. Solution? Patience.

    Software issues need fixing. GNU compilers suck. The native compiler on a *nix machine needs to not suck. This is basic. Linux has some real POSIX issues too. Threading only being the most obvious. Solution? Someone with the pragmatism and skill of Linus on the compiler/library side.

    Mature advocacy. The way to be an effective Linux geek is to not try to sell it. If it's worthy of your advocacy, it doesn't need it. When opportunities appear, out in the "real world", step up. Otherwise, keep your geek mouth shut. Solution? Look within.
  • Re:Well of course (Score:2, Insightful)

    by eyegone ( 644831 ) on Monday March 03, 2003 @09:23PM (#5429172)
    So, in other words, just because someone doesn't write a manual--because they don't have the necessary linguistic skills, or because they don't have the time, or they're catatonic, or a plethora of other legitimate reasons--the package is considered broken?

    Yes. The Debian package is considered broken until someone, not necessarily the author of the underlying software, creates a man page. It's not a judgement of the underlying software.

  • by 13Echo ( 209846 ) on Monday March 03, 2003 @09:31PM (#5429241) Homepage Journal
    The Linux kernel, by default, does not load sound support modules if there is no soundcard. Please, get your facts straight.
  • by maynard ( 3337 ) on Monday March 03, 2003 @09:51PM (#5429417) Journal
    If you move video drivers into the kernel then you risk kernel stability!

    The big controversy was Microsoft moving GDI and other aspects of the windowing system into kernel space, not physical device driver support. I'm talking about initializing the physical device and managing video device registers, blitting, etc. in kernel space, while userspace apps (or better, a general purpose library) use traditional system calls to handle userspace communication to the device. This was hashed out repeatedly long ago (with many flame wars ensuing) on the lkml. Unfortunately it was a good idea that never happened.

    Think about it like this: you wouldn't want your IDE device driver running as a daemon with root privs in userspace, would you? So why should your X server do the same? Why should you have ten different X servers tailored to every popular video card on the market? Have you ever switched from an X session to a virtual terminal only to see your console completely hosed? I can't count the number of times I've seen this and wondered 'Why the fuck hasn't this been fixed yet?!?!' XFree86 4.x with its modular device drivers notwithstanding, this is still a serious issue.

    Managing hardware recognition, initialization, and physical attributes in kernel space is cleaner than having a bunch of userspace apps doing the same in ways that are almost certainly mutually exclusive. It's supposed to be the kernel's job to handle simultaneous contention for a device between various apps. That was the point behind GGI, and it is currently the point behind the Linux kernel frame buffer support. Unfortunately, frame buffer device drivers are horribly out of date, so people just use X instead.
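
    At least the framebuffer interface is simple to poke at, assuming a kernel with fbdev compiled in:
        fbset -i                     # query the current mode via the kernel fb driver
        cat /dev/fb0 > /tmp/screen   # the display is just a file you can read or write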

    Of course, this is JMO and many people (including Linus at the time) completely disagreed many years back. I, however, think the GGI folks were completely right and wish Linus had given their ideas a better chance at kernel inclusion. OTOH, you don't see my name in the kernel tree and there's a good reason for that: I'm not qualified. So take my opinion with a grain of salt. :)

    Cheers,
    --Maynard
  • Re:so? (Score:3, Insightful)

    by Doomdark ( 136619 ) on Monday March 03, 2003 @10:07PM (#5429539) Homepage Journal
    There are reasons why loose coupling (distributed systems, Beowulf etc) is sometimes better, and tight coupling (SMP, virtual domains in Solaris) is sometimes better.

    Good things about "intra-box scalability" include:

    • Lower maintenance costs. Much of hardware can be dynamically allocated and/or shared, either in real-time or at least without rebooting.
    • Much better I/O throughput between different computational tasks. Potentially back-plane / bus speeds as opposed to network speeds (order of magnitude faster inter-process communication).
    • Potentially lower hardware costs (related to the first item). This depends a lot, though... but in the case of, say, Sun servers, it makes sense to buy a couple more CPUs instead of more systems.
    • Easier maintenance/administration (related to the first item). Depends on tools, but it is generally easier to maintain a single "big" box than multiple "small" ones.
  • Re:binary (Score:3, Insightful)

    by jbolden ( 176878 ) on Monday March 03, 2003 @10:09PM (#5429563) Homepage
    If you look at Rhapsody, OS X is NeXT with an easier interface and OS 9 compatibility. If they had gone with Linux they might have had to reimplement everything from NeXT all over again. I guess we will see with the progress of GNUstep.
  • Re:I Got One... (Score:3, Insightful)

    by rifter ( 147452 ) on Monday March 03, 2003 @10:10PM (#5429565) Homepage

    I got the $3000 price for the HP Linux Distro from the Slashdot article announcing its availability. That's what I get for believing slashdot, I suppose. ;) (or more likely, HP came out with a free version that did not have the support the $3000 version had.)

    As for the price tag on Solaris x86, well, that's too bad for you. If you run Solaris on SPARC like God intended, it is free to download the ISO from the link I so thoughtfully provided. Solaris 8 was free for download for both architectures, IIRC. Sparcs are cheap on eBay, and even desktops several years old are reasonably powerful compared to PCs and can run the latest versions of Solaris with no problems.

    I would imagine that Sun is charging a bit for x86 Solaris because they make money on Hardware, though they do sell x86 hardware as well. You will also notice that as of Solaris 9 they have slowed down the x86 development. I was actually surprised to see they went ahead and came out with it after many moons passed while the future of x86 Solaris was left unstated (starting with the announcement that Solaris 9 SPARC was available, but Solaris 9 x86 was not.)

    I think also some of this threw off the people who, in stark contrast to the hobbyists x86 Solaris seems to have been meant for, started to buy real x86 servers and throw x86 Solaris on them instead of buying SPARCs. These people are likely the ones Sun would like most to discourage from using x86 Solaris, and from what I can tell the uncertainty factor worked very well for Sun in that regard.

  • by More Trouble ( 211162 ) on Monday March 03, 2003 @10:22PM (#5429645)
    devfs: Please, when are we going to finally transition away from static device nodes to devfs? Solaris had it right, dynamically name the device on detection after its physical properties. This is really important and hasn't been implemented for anything more than testing.

    Um, you're totally right about how cool devfs is. I'm quite fond of it on Mac OS X. What Solaris does, however, is not at all devfs. devfsadm on Solaris is the devil. Ever have your drives dynamically renumbered out from beneath your vfstab? Try it some time.
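
    (For the unfamiliar, Linux devfs names devices by where they physically sit, versus the classic static node for the same disk:
        /dev/scsi/host0/bus0/target0/lun0/disc    # devfs: the path encodes the location
        /dev/sda                                  # old-style static node
    -- which is exactly why renumbering hurts so much when the topology changes.)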

    :w

  • Re:Well of course (Score:5, Insightful)

    by orangesquid ( 79734 ) <orangesquid@nOspaM.yahoo.com> on Monday March 03, 2003 @10:31PM (#5429716) Homepage Journal
    I often feel this way: Why write manuals for XXX, when it's obvious what it does and how to use it?

    Fourteen months later...

    What the fuck is XXX, what does it do, and why did I write it?!?! ;) I don't think there's any harm in spending time on making (at least) simple documentation for anything.
  • Re:Well of course (Score:4, Insightful)

    by Zero Sum ( 209324 ) on Monday March 03, 2003 @10:44PM (#5429805)
    Possibly the most important thing I have learnt in a lifetime's programming is to write the manual first.

    Then you have some idea of what your code is going to have to do...

  • by mattACK ( 90482 ) on Monday March 03, 2003 @10:46PM (#5429823) Homepage
    While I agree that Active Directory 1.0 (Windows 2000/NT 5/pick one) is impressive in mid to large-ish environments, the design methodology absolutely falls apart in a true enterprise environment (insofar as AD-integrated DNS is concerned).

    Consider this: the AD DNS zone is required to be in your domain container. This means two major things: ALL DCs in your domain have this information replicated to them (whether they are DNS servers or not) and NONE of your DCs in other domains can host these zones.

    Stretch item one out, and you will see that when a user in Japan powers on his workstation, it replicates to my DC here in the States. Do I care to access that guy's data SO BAD that his replication storm^H^H^H^H^H event hits my DC? Even though it isn't running DNS? Kinda silly, really.

    Taken the other way, if I want a multimaster DNS zone to cross a domain boundary even in the same forest I cannot do it. It simply cannot be done. You could set up a zone transfer and work some mojo, but you lose the benefits cited in your post. Active Directory DNS doesn't support stub zones, either.

    Active Directory 1.1 (Windows 2003/Windows .NET Server/Pick one) fixes these complaints with enlistable name spaces that can cross domain boundaries, but just try to get THAT pushed through in a large environment until 3 months after SP1. Not very fscking likely.

    I actually find the automagical functionality of AD fascinating, and I do not mean to troll. I just find that most folks who extol AD haven't seen it with more than a couple of thousand clients.

  • Re:Here is my list (Score:3, Insightful)

    by Billly Gates ( 198444 ) on Monday March 03, 2003 @10:50PM (#5429848) Journal
    Customers who buy Unix buy it for the hardware as well as the software.

    Not all of the features of Solaris or AIX are in Linux (yet). My list might be a little outdated since I left tech work back in 2000, but the FUD of Mindcraft and the Brown Associates report cited all 7 of the things mentioned above, as well as the lack of a journaling filesystem and volume management tools. Since both of those are now available in Linux, they're no longer an issue. However, real enterprise hot-swap support like SGI's and Sun's is not available in Linux (yet). I am talking about replacing CPUs and memory while the system is on. Not RAID 5.

    In a warehouse or factory, a single reboot and disk check could take literally hours! Especially if a database is installed and needs to check itself for corruption. This is the same kind of environment where the systems must remain 24x7. A crash can cost tens if not hundreds of thousands of dollars. In fact, most Fortune 500 companies do not even trust Sun or Unix and still use IBM mainframes.

    Hot-swappable CPUs and memory are a must for any serious Unix user because of this. No, Linux does not have this. USB, FireWire, RAID 5, and PCMCIA were designed to be hot-swappable; CPUs and memory modules (x86) are not, and even on Sun hardware they need OS support. Intel will bring this out with Itanium, and hopefully Linux will support it.

    Some other things I did not mention that Linux is lacking are hardware/software integration and backplane buses. For certain software apps the PCI bus can get oversaturated while Sun boxes run without a hitch. Think terminal server and database.

    Yes, you can run Linux on Sun hardware, but all the Linux apps besides open source are only available on Intel, so the bus issue is important for ERP and database software. People who need them obviously can only use Unix or a mainframe.

    CmdrTaco just posted a question on what Linux is lacking compared to Unix, and I gave an honest answer. Relax. People who buy Unix need a solution for hardware/software/support, and if they have a certain need like lots of I/O, high availability, or hardware, then only Solaris and AIX are the answer.

  • by BollocksToThis ( 595411 ) on Monday March 03, 2003 @11:19PM (#5430012) Journal
    'Score 3: Interesting' for an unsubstantiated opinion! Great!

    Not saying I don't believe you, as such... but what improvements put NDS 'light years' ahead of AD?

    I'm willing to bet that NDS is more robust, and perhaps a little better designed, but I can't believe 'light years ahead' without some actual information.
  • by Feign Ram ( 114284 ) on Monday March 03, 2003 @11:33PM (#5430068)
    How come Google's servers, which handle more hits than any Solaris/AIX box or cluster in the world, handle so much traffic with only low-end Linux?
  • by towatatalko ( 305116 ) on Tuesday March 04, 2003 @12:45AM (#5430469)
    What do you mean by non-PC platforms? Kernel graphics support? Can you provide any serious argument? A claim alone is not enough!
  • Re:Here is my list (Score:5, Insightful)

    by ajs ( 35943 ) <ajs.ajs@com> on Tuesday March 04, 2003 @02:54AM (#5430974) Homepage Journal
    However, real enterprise hot-swap support like SGI's and Sun's is not available in Linux (yet). I am talking about replacing CPUs and memory while the system is on. Not RAID 5.

    The poster you're replying to never mentioned RAID 5. He was talking about things like hot-swappable devices, from PCI to USB to SCSI. In case you're not aware, RAID (Redundant Array of Inexpensive Disks) is a specification for concatenating disk drives in efficient ways that provide increased data integrity at the cost of storage space. RAID 5 is one of the 4 levels that are commonly used (0, 1, 4 and 5... bonus points if you can tell me where RAID 4 is most commonly used). So, while the technologies the poster was describing might be applied to disks and/or controllers (and often are), that has no bearing on how you structure your storage.

    As for "real enterprise"... I don't need or want systems that can hot-swap memory or CPUs. I want systems that do what I want fairly quickly and do a good job of talking to eachother. Everything else is negotiable. If you build your environment correctly, all applications can behave this way.

    On to your points....

    if [purchasers of hardware/software] have a certain need like lots of I/O, high availability, or hardware, then only Solaris and AIX are the answer

    Coming from a production environment which has a great deal of need for "lots of i/o", "high availability" and "hardware", I can say that Linux fits the bill very well. I've admined large environments based on Solaris and HP/UX in my life and I can tell you that Linux is not a second-rate platform by any measure. It *is* a product of its upbringing, and you *do* need to keep that in mind to admin it correctly.

    For example, Linux supports a huge range of hardware. Some of it is supported poorly, but most of the stuff that you would generally get your hands on is solid. The problem is that while the software is solid, some of that hardware is crap. It's easy to think of Solaris as more stable than Linux when your average Linux installation is running on crap for hardware.

    Yes, it's true, there are hardware giants (like Dell and IBM) pushing Linux on solid hardware, and that's good. However, I find the crap-box hardware to be more interesting. Its price/performance in terms of raw uptime is truly staggering. This is how we build our environments: we buy dozens of dinky little 1Us and configure the software so that we can pull any 10 of them out of the mix and no one cares.

    I prefer this model, but your mileage may vary.

    In a warehouse or factory a single reboot and disk check could take literally hours!

    Why was it designed so poorly? I've got several terabytes of disk sitting in the corner, and when I reboot it, it comes back in about 96 seconds. Mind you, I don't recommend storing terabytes of disk on anything that runs a general-purpose OS (I use dedicated storage devices for that), but if you are going to be that stupid, Linux is just as good a choice as the competition, or better.

    Especially if a database is installed and needs to check itself for corruption.

    I hate to sound like a broken record here, but why was it designed so poorly? My database comes back from a reboot with a quick message about recovering any lost transactions from its log. Are you not using a real database? I don't like Oracle, but you might want to try it out. Works very nicely.

    This is the same kind of environment where the systems must remain 24x7

    There's no such thing. Ask the folks at AT&T who tried their damndest to build a phone switch that could make that claim. They came close. For millions of dollars you could shave off an extra 9. What did everyone learn? Mostly that in the real world, that last 9 you just bought is a rarer problem than some idiot letting a cleaning person into the "super secure data center" with a mop and bucket. Oh well, chalk that up to expensive lessons and don't repeat it.

    The point I'm getting at here is two-fold: 1) every major OS is capable of supporting production environments... it's a matter of how you use them, and 2) if you start off with the assumption that product (x) sucks and cannot compete with product (y), you're probably going to find evidence to support your claim... regardless of your local values for (x) and (y).

    Good luck!
  • by esbjerg ( 130970 ) on Tuesday March 04, 2003 @04:36AM (#5431295)
    You're absolutely right about the missing LVM. Having seen Veritas, Solaris LVM, Vinum (FreeBSD), and LVM and EVMS (Linux), I have to say Linux should not take the same road. It must be done better than those LVMs. A good LVM should make it possible to change an existing disk into an LVM disk without destroying data (even on /). The LVM should also be capable of creating RAID systems transparently, as well as resizing partitions on the fly. Generally I feel that those who implement LVMs do not understand the word Logical. The LVM should just provide the admin with 'a disk' which he/she can configure in (almost) any way. I have been told the AIX LVM comes close to this.


    A thing which sucks in Linux is networking. Here I'm talking about drivers and their configuration. I still have to take the time to make sure all interfaces use the right speed and duplex mode. To do this I have to use mii-tool, which doesn't work well with all drivers. My complaint is the lack of conformity among the drivers. Look at the BSD drivers and see how it's done. Second, provide a man page for every driver, with good documentation, especially about bugs and flaws.
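
    For the record, the incantation (interface name hypothetical, and as I said, driver support varies):
        mii-tool eth0                    # report negotiated speed/duplex
        mii-tool -F 100baseTx-FD eth0    # force 100 Mbit, full duplex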


    The last thing I want to bring up is the missing mpath daemon. Solaris has a very clever daemon which can group interfaces together to let the OS use more than one NIC on the same network. Thus if a wire/switch/interface fails, the daemon automatically moves routes and IPs from one interface to the other. NB: I'm not talking just about link failure, but also about the event where packets simply get dropped on the way through the network. I know I can buy my way out of the problem, but I think it should be a part of Linux.


    Generally I believe that Linux needs more consistency. It still feels like a bunch of tools smacked together with a kernel, created by thousands of people with different opinions.
    It needs to feel like one OS from the bottom up.

  • by totierne ( 56891 ) on Tuesday March 04, 2003 @05:59AM (#5431494) Homepage Journal
    Just thinking conversely: if Linux is always considered as catching up to Unix or Windows, it is in danger of being seen as non-innovative and second best at everything.

    The truth is, basically, the GPL will set you free: from Beowulf to the Sharp Zaurus to a (Motorola) mobile phone, to... presumably there is a Linux watch or even a pacemaker in prototype somewhere...

    Then again, being non-innovative, second best at everything, ubiquitous, and bundling everything never did Microsoft Windows any harm.
  • just one thing (Score:1, Insightful)

    by compubomb ( 612155 ) on Tuesday March 04, 2003 @06:20AM (#5431535) Homepage
    I think man pages suck. They never have any examples. How do you think people learn programming? It's called examples. How do people learn to program from books? EXAMPLES. man might be powerful, but until people change the rules and put in examples, many will use anything that makes it easier to learn an already hard-to-learn/use OS. If man pages were filled with examples I'd be an expert by now. I think the reason man pages suck is that Linux programmers feel too uber to give others examples. EXAMPLES! EXAMPLES! EXAMPLES!
  • My list (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 04, 2003 @12:19PM (#5433292)
    Commercial UNIX has:

    • C2 Discretionary Access Control
      • Audit
      • File ACLs
    • B1 Mandatory Access Control
      • Object labels, labelled printing, labelled GUI
      • Role-based administration
    • OS support for real hardware
      • Dynamic system domains
      • Dynamic hardware reconfiguration
      • Hot-swap hardware (incl. PCI, system boards, etc)
      • CPU error/miscalculation detection/replacement
      • Multipathing
    • POSIX compliance
    • NEBS level 3 compliance
    • Telecom alarms
    • Volume managers that work
    • Processor-level job control
    • Cluster software that works
    • Cluster filesystems
    • NFS that works
    • NIS+
    • CDE (as opposed to cheap imitations)
    • Design specifics for the vendors' own hardware
    • Support for all peripherals that go in the box
    • No open source (but that's a different debate)
    • Vendor's-ass-on-the-line support
    • Standards
    • Stability

    Linux has:

    • People who migrated from Windows to Linux at home and therefore think they are as qualified as me to run a UNIX system/network
    • People with blinders on that think Linux can do no wrong, and all things proprietary are evil
    • People who can't understand that quality hardware and software actually cost money
    • People who believe software produced by anarchy (open source, usually written by college students who are still learning how to design software) is better than software produced by a highly paid team of professional software engineers

    Some of us with real jobs have real system requirements. Downtime might not be an option - not even a quick reboot. Production stops might cost hundreds of dollars per minute. Files stored on the servers might be DOD, DOE, NATO, etc. classified, where C2 or B1 might be required.

    Sorry for posting anonymously. I don't want my name associated with that last paragraph.

    *Note: I do not consider all commercial UNICES (MacOS X, A/UX, SCO Xenix, etc.) to be in the same class as UNICES such as Solaris, HP-UX, AIX, IRIX, UniCOS, UTS, etc.
