
HURD For 'Big Iron'?

Julian Stoev wrote in with this query: "Recently I've seen quite a lot of conversation about Linux on 'big iron,' and talk about a possible fork because the Powers That Be do not want to include the features those vendors consider necessary. In one interview, a guy from IBM sounds a little desperate, as if IBM is not very happy with Linux's drift toward small devices. He is careful not to anger the kernel people and does not speak directly about a kernel fork. But why don't they (IBM, SGI, et al.) grab HURD and add to it all the things they find important for 'big iron' support?"

"HURD is not good for smaller machines because it needs more memory, but for big machines/clusters it can be very promising design. Why don't these vendors who moan about the lack of 'big iron' features start a 'HURD foundation' to help uplift the sleepy project? They could add compatibility for Linux binaries for compatibility, even which would be great as there would then be a powerful kernel (HURD) for 'big iron' and Linux for smaller and embedded systems.

Yes, HURD needs quite a lot of work to become stable, but large vendors have the resources to give it a push. In this way they will prove that they really appreciate open source by helping a project which is going through some hard times right now."

  • See, you're missing the point... They are spending millions of dollars to get Linux onto their big iron. Their S/390 campaign has cost them a fortune... but given great returns. That is the future for many medium-to-large ASPs. Period.
  • It's common practice to spell out the entire phrase and put its acronym in parentheses right after it before you begin using the acronym in a document. I'm sorry, but what the bloody hell is HURD? Sorry, I spend more time developing than keeping up on the latest propaganda.

    Regards
  • by Anonymous Coward
    The kernel keeps track of users (UIDs), groups (GIDs) and processes (PIDs/PPIDs) as 16-bit values. Having a 64K limit for these values definitely impacts some of what is desirable to do on "big iron." There is little point in investing in a machine that can handle file/print services for 75,000 users (and the horsepower is out there now!) when your kernel can only allow the definition of 64K of them. Linus clearly refused the SGI patch to fix this situation.
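
    A minimal C sketch of that ceiling, assuming the 2.2-era x86 headers (illustrative; exact header contents vary by kernel version):

    /* include/asm-i386/posix_types.h, 2.2-era Linux (excerpted for illustration) */
    typedef unsigned short __kernel_uid_t;   /* 16 bits: at most 65,536 distinct UIDs */
    typedef unsigned short __kernel_gid_t;   /* 16 bits: the same 64K wall for groups */

    /* widening these to unsigned int (32 bits) would raise the
       ceiling to roughly 4.3 billion ids */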
  • All right, let's go through the math: 9 platforms, actually 10 according to their web page. 5 of these are 680x0 machines, so let's really call it 6 ports. Of these 6 ports, alpha and pmax are currently without maintainers, and 680x0 is a dead platform (unless you're into antique computers, which is definitely cool, or you're trying to run on a Palm Pilot). That leaves us with 3 ports. The powerpc port seems to be fairly new, so the only ones I'd trust are i386 and sparc (which incidentally doesn't cover UltraSPARCs, only sparcs). In my head, that means OpenBSD has two solid ports. By my reasoning, Linux also only has 4-5 solid ports (alpha, i386, sparc, ultrasparc, powerpc), although there are many more you could include (680x0 ports, ia64, s390) which I would regard as well supported.

    Not to rag on OpenBSD, I'm sure it's a great system, but be careful about saying what's actually a supported platform. That's like saying NetBSD runs on almost anything. Sure it does, but you may have to netboot the machine, run the file system off a better-supported computer over NFS, and use a serial login. So basically you're using the CPU on your machine and the network card, while the hard drive, video, keyboard, and mouse go to waste.

    Also, 99% of the code within different Linux distributions is the same. Instead of just borrowing code between different distributions (or forks, in the *BSD case), the different distributions actually use exactly the same code, it's just the packaging which is different.

    One caveat: with the change of OpenBSD's license such that they can directly incorporate NetBSD code, I expect the code in these two forks to become more and more alike.
  • Thanks Valdrax. That's the best description I've read describing the issues around Linux kernel design for SMP performance. A real GEM !! I learned a lot from your explanation that I didn't know before.

    Macka
  • Give me some facts to back this up, please. I'm not interested in your random assertions.

    I gave you some places to start looking for what the thought process is on the kernel mailing lists.

    If you really believe this, why don't you fork the code and maintain a big iron kernel on your own?

    It's your right under the GPL and there is nothing Linus could possibly do to stop you. _If_ this is Linus' goal (which I don't believe for an instant... I think it's much more likely you have some kind of agenda), we can just say "So Long and Thanks for all the Code" to Linus and take Linux back. That is the beauty of Open Source code....


    ---
    RobK
  • Yes Linux can be patched.

    No this isn't an issue.

    Yes the IBM fixes will get in.

    Yes the IBM people are being too impatient and not conforming to the development schedule.

    DOES ANYONE REMEMBER 2.4 IS IN A CODE FREEEEEZE.

    HELLO? PEOPLE?

    Freak.

    When 2.5 comes out, if by the time it gets to 2.5.10 the IBM people still haven't worked out the development pace for getting their stuff in, then it will be an issue.

    But not until.

    -Nathan
  • big companies [...] are upset with linux for aiming at small systems, but aren't immediately splashing out with their own cash to rectify it themselves.

    This may smell too microkernel to please some people, but since some awesome things have been done with kernel modules already, why not a few more? If somebody wants a new feature that speeds some things up enormously but requires (say) 8 megs of RAM to start up, why not do it as a kernel module?

    That way, your distro could boot ``minimised'' and then load the module if appropriate (Hmm. This is a 2MB 386SX. Should I load ramgulper.o?).

    One way of doing this with extremely core items would be to make the core item as a pre-loaded module; that is, the code is a module, but a copy of it is compiled into the kernel so it is present at boot (you might conceivably need some memory management, for example, to get to the point of being able to load your new ramgulper.o hyperfast memory management kernel subsystem module).
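
    For concreteness, a minimal sketch of what a 2.4-era loadable module skeleton looks like in C (ramgulper is the hypothetical name from above; a real hyperfast memory-management subsystem would of course be far more involved than this):

    /* ramgulper.c - hypothetical skeleton of a 2.4-style kernel module */
    #include <linux/module.h>
    #include <linux/kernel.h>

    int init_module(void)               /* runs at insmod time */
    {
        printk(KERN_INFO "ramgulper: hypothetical subsystem loaded\n");
        return 0;                       /* nonzero would abort the load */
    }

    void cleanup_module(void)           /* runs at rmmod time */
    {
        printk(KERN_INFO "ramgulper: unloaded\n");
    }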

    If Two-Kernel Monte can replace an entire more-or-less running kernel on the fly, surely it is not much more technically difficult to replace significant subsystems on the fly?

    As to the idea of splashing out with the cash, the problem isn't the cash itself, but getting the companies to commit. Politics and memes are bigger blockers than dollars ever were. If this kind of thing is shown to work, no matter how shakily at first, you will get some joiners from false-floor territory.

    If Bill Gates could sell an 8080 BASIC interpreter to MITS more than eight weeks before writing an alpha version of it (ie, it hadn't even been planned when he sold it), what's stopping ``us'' from ``selling'' advanced kernel features to IBM or Fujitsu a few weeks from now, by which time the bones of them actually exist?
  • Political neutrality is not the issue.

    Linux is in the pre-2.4 code freeze. The issue is the stupidity of people not connecting that with the fact that IBM can't dump 10MB of patches in to change core system components.

    IBM knows what ifdef is. They just need to figure out how to work with Linus' kernel development system.

    -Nathan
  • "A guy from IBM in an interview [...]"

    Pointers to the interview? Or at least who it was with?

    "[...] large vendors have the resources to give it a push."

    umm... not really; resources are VERY tight in most companies. Ask the architects who in the last few months have had their grand visions killed because we can't get enough skilled people to implement and support them. (Remember that in manager-speak YOU are a "resource", not the hardware, or the lab space, or the desk.)
  • It has been observed that there are several different versions of linux currently available (MkLinux, ELKS, etc).

    Why would it be unreasonable, then, to have another division between a Linux geared more toward embedded systems and one toward large-scale systems?

    There is much concern that compatibility will be lost. Is there not a way to change the engine without changing the leather seats, so to speak?

    I'm not clear about where exactly the resistance comes from.
  • Why bother? BSD has nothing that hasn't already been leeched away by the commercial UNIXes. AIX has advanced SMP, kernel-level multithreading, and journalled filesystems. Besides, if IBM just wants to avoid a code fork, BSD isn't going to help any. BSD is all about code forking. Just look at the new branding policy FreeBSD/BSDi put out. If you want to put FreeBSD in the name of a FreeBSD-based distribution, you need to provide the Walnut Creek CDs pristine. If you want to change the installer or packaging system, you'll just have to call it something other than FreeBSD.

    But leeching is considered ok, because everyone who works on BSD can do it too. The Walnut Creek/BSDi employees who run FreeBSD will incorporate everything good into closed source BSD/OS. The closed source version will always have the most advanced features.

    The model is sound; it's the model BSD has used for 20 years. It's helped commercial UNIX vendors, and UNIX vendors rewarded Berkeley with equipment donations, research grants, and jobs for BSD developers. BSD code became a shared resource, a skeleton which commercial closed source vendors can use to make a finished product. The model just has nothing to do with promoting open source over closed source.
  • I commented in an earlier article that I have talked with some of IBM's Linux people at job fairs this fall. They've told me that they see most of the Linux kernel developers as "babies crawling in the sand." They see them struggling with highly complex kernel mechanisms for the first time, while IBM has developers that have been doing that sort of stuff for years and years in AIX (this is all paraphrasing what they told me). But they said that, while they could just whop down a big set of kernel patches to make the kernel more efficient and whatnot, they're trying not to alienate the community. The guy I spoke with cautioned, though, that they aren't trying to say Linux is a bad OS or anything, but rather that it has a lot of potential to be even better than it is. The IBM dude said they see Linux becoming the dominant overall OS in as few as 4 or 5 years.
  • Linus turns down _lots_ of patches. Lots and lots and lots of patches.... some of them end up being part of commercial distros anyway.

    I can fully understand him not accepting patches that are going to end up being a _lot_ of work to satisfy less than 1% of his "customers" [either as a maintenance issue or initial reworking of the infrastructure]. In terms of effort spent, it may simply not be worth it when the people who make these machines can easily make their own kernel. People have (for example) been using the devfs patch for quite a while and it only just got into the kernel in 2.4. I don't remember any conspiracy theories about any of the thousands of other patches Linus turned down...

    Also, it could just be that he thought it was too late in the 2.4 development cycle for such a big addition.......

    ---
    RobK
  • Suffer from WHAT? Do you seriously think that a lack of mindshare is what's keeping the "I have Linux installed, it kicks ass and I have no desire to change" people from jumping ship?

    Being scared is not the issue with HURD vs. Linux on the desktop/server. HURD being what IBM is helping to make into Linux for Big Iron is a scary thought.

    If I were IBM I would be scared of the amount of control RMS exercises over GNU/HURD. I mean think about it, he wants to be able to insist that people who use any of his code use the GNU/ extension to the product name.

    It's not GNU/Linux dangit (unless you run Debian but I don't care what that is called, it just kicks ass.)

    If I were IBM I would not want to lay hopes and development money on a system driven by a fanatic (granted, a well-intentioned one) who lets his ego govern the direction of the system.

    -Nathan
  • My understanding of Hurd is rather limited (basically what I have read in the Kernel Cousins at Linuxcare [linuxcare.com]) so correct me if I'm wrong but isn't Hurd rather focused on doing things "properly" in a somewhat academic/research sense?

    If so is it not reasonable to believe that the vendors would find it just as restrictive (if not more so) working with the Hurd?
  • And translators KIK MAZZIVE AZZ !!!!

    database access:
    grep query /database/instance/table/*

    wget equiv:
    cp -r /network/http/www.slashdot.org/ .

    /EVERYTHING/ _IS_ a file/filesystem !

    Back away Amiga/Linux zealots - the HURD Religion is coming !

    (too bad I couldn't install it on my machine due to hardware incompatibility :( )

    --
  • Why's a Linux code fork necessary though? Can't it just be patched? Or is it a fundamental design of Linux that it doesn't work well on mainframes?

    I have more questions, but if you could answer these ones first I'd appreciate it.

  • by 1010011010 ( 53039 ) on Wednesday October 18, 2000 @03:16AM (#697453) Homepage
    Some competition would be a good thing; and since all the code is GPL, they can re-merge later. Linux has gone through fork-and-merge several times already. A lot of developers keep their own little fork going while in development. XFS, for instance, ships as a patched kernel.

    It would be good for SGI, IBM, HP, and other big players to create an Advanced Linux Kernel Project. I would even host it (I'm a director at a hosting company) and contribute code (filesystem, device drivers, unicode functionality).

    Even if the Linus-headed effort is the One True Way, it doesn't have to be the Only Way. They're not infallible. Let's get a second project going! It won't be a threat to Official Linux -- just a development track for enterprise situations!

    Email me if you're interested!

    ________________________________________
  • by sales_worldwide ( 244279 ) on Wednesday October 18, 2000 @03:17AM (#697454) Homepage
    You wrote: Think about what an IBM sees in Linux.
    • massive momentum
    • probable emerging standard
    • huge (unpaid by them) developer community
    • existing high-quality implementation
    • existing widespread hardware support
    • easy to find techies who know it
    • non-techies are comfortable with it
    • liberal licensing
    Now which of these does HURD offer?

    The question should be: Now which of these does linux offer?

    Are you really telling me that linux offers *any* of:

    • probable emerging standard (don't make me laugh - linux is anarchy - BSD is more of a standard since it is committee-based)
    • existing high-quality implementation (have you ever looked at linux code? And it changes every two minutes)
    • existing widespread hardware support (DVD? Hardware RAID even? ....)
    • non-techies are comfortable with it (please ...)
    • Liberal licencing (GPL!!!)
    And as for stuff like "massive momentum", you must remember that the momentum behind linux is its use as an application not as an OS. Linux boys use linux as their primary app, not as an OS to run another app. Their app *IS* linux. It *IS* spending the whole evening downloading the latest Hungarian font patches and recompiling the kernel just so their system is "complete". Ten minutes later they're downloading the next patch that is issued, for hardware they don't have ....
  • Since HURD is a mutually recursive acronym, it's quite hard to say the entire phrase. :)

    It's the GNU kernel [gnu.org], which has been in development for well over a decade, based on the Mach microkernel. And yeah, it's still not ready for prime-time, although you can install Debian's unstable distribution for it. The Kernel Cousin project has a section for its Debian mailing list [linuxcare.com].

  • by Anonymous Coward
    I must acknowledge the post is correct (and funny).

    A previous one is also accurate, in that all the big commercial vendors have their own proprietary Unix variants (AIX, HPUX, Solaris, plus whatever Compaq's calling DEC-Unix lately). These systems _all_ suffer from their previous, self-inflicted incompatibilities (the forked-Unix conundrum, complete with fragmented market shares and some evolutionary dead-ends).

    Linux offers a way out of this dead end for these very large commercial vendors, and IBM for one is adopting Linux for this reason (plus some others already mentioned). But their intent isn't to replace their bulletproof, decades-to-perfect commercial operating systems - instead, it's only to expand their offerings to include recent technologies (like IBM held its nose about NT).

    I'd like to see a good analysis of how well (or badly) each of the major commercial hardware and software systems vendors are adapting to Linux.
  • What BSD do I use?

    I use FreeBSD, but OpenBSD would also be fine. As would BSDi. And NetBSD if I wasn't a PC user.

    Regarding linux kernels and their so called high kwality implementations: I would not dream of using a linux kernel, since I need a reliable datastore. When linux gives me raw devices, I might consider using it.

    Translation: The day I can tell when to remove a floppy from a drive by not looking at the light but looking at the command prompt returning, is the day I'll use it - since some programs *NEED* to know when data is on disk. Linux doesn't give me this.
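
    (The mechanism being asked for does exist in POSIX as fsync(); the real argument is over whether the kernel and drivers of the day honor it all the way to the medium. A minimal C sketch, with a hypothetical mount point:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/floppy/data.txt", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "important\n", 10);
        if (fsync(fd) < 0)      /* should not return until the data is on the medium */
            perror("fsync");
        close(fd);
        return 0;
    }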

    And don't spout on about ReiserFS etc., since that is a) not reliable and b) not in the Linux kernel - for exactly the same reasons that the Big Iron patches aren't in the kernel - Linus Torvalds himself.

    DVD is supported...just not every DVD device. Hardware RAID is largely a non-issue...that's why it's hardware RAID.

    OK, I see one or two DVD devices are supported. But we're still a year off having good support and having it mainstream. Not so for Windows.

    Drivers for hardware RAID, on say, Dell servers, are not supported. I know - I tried to find one recently. The only options for linux are software RAID (joke - there is no raw device!) and transparent hardware (very expensive).

    KDE or Gnome, for two examples. Once given 'the computer', most people don't care or see any real difference; examples - two of my sisters.

    Please don't compare KDE (and definitely not GNOME!) to Windows. The day I can cut and paste and drag and drop OBJECTS (i.e. images, spreadsheet cells, text with font information, etc.) from one app to another is the day you can compare the two. Until then, WINDOWS OWNS THE DESKTOP. You are dreaming if you think otherwise.

    The GPL isn't entirely free, but it's damn close to it and largely misunderstood.

    I understand the GPL. That's why I never submit bugs reports or fixes to GPL'd projects.

  • You retard. Shut up.

    Oooh, that hurt didn't it. I repeat: perhaps "Big Iron" means files greater than 2 gigabytes? Hardly an unreasonable request for a machine with that size main memory?

    For all of you that are not aware of this: LINUX CANNOT HANDLE FILES BIGGER THAN 2 GIGABYTES.

    Here's another one: Perhaps "Big Iron" means a single large swap file, instead of lots of 128 meg ones - Ha de ha :-)

  • It's not that Linux is aiming at small systems, it's that the kernel is currently frozen in the hopes that one day we can use a released 2.4.0 final.

    IBM is spending an incredible amount of money on the Linux issue, it's just going to take time for their managers and development people to understand how life works when the other 9/10 of your development team is outside the company and under no contract whatsoever.

    -Nathan
  • I run OpenBSD, and just upgraded to 2.7. My NE2000 worked fine during the install, but then it crapped out on me when the machine first booted up. "Device Timeout" it kept saying...

    Turned out that it had an IRQ conflict with my soundcard. Since this was on my firewall, I just took the soundcard out (who needs to listen to MP3s while they're routing packets?) -- But since then I've had zero problems with NE2000 cards.

  • NetBSD runs on almost every platform, so its kernel should be portable.

    You blasphemer! The NetBSD kernel runs on ALL platforms, not just most!

    This man must be taken out and shot for his heretical views!

    -----
  • Of course one Linux Kernel cannot fit every need, one kernel binary that is.

    However, using the magic of ifdef and SOURCE (you remember the source that kernels get compiled from right?) you can have one source tree with multiple targets with multiple abilities.
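
    (A sketch of that magic in C: CONFIG_BIGIRON and kernel_id_t are made up for illustration, but this is the same compile-time switching the kernel's real CONFIG_* options use:)

    /* one source tree, two targets, chosen when the kernel is configured */
    #ifdef CONFIG_BIGIRON
    typedef unsigned int   kernel_id_t;   /* 32-bit ids: room for millions of users */
    #else
    typedef unsigned short kernel_id_t;   /* 16-bit ids: smaller tables for small boxes */
    #endif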

    It gives a central, focused driving point for a movement. Yes, the source is and will be huge, with maybe 30% of it actually being for any one user's system, but as UNIX learned the hard way,

    Splitting the development into different paths is not the way.

    On a side thought, look at how RT/Linux is getting its ass kicked PR-wise now that another vendor is going to make the mainstream Linux kernel RT. Time to backpedal, boys.

    -Nathan
  • Wladawsky-Berger: I know, and I know that Linus [Torvalds] and the team are resisting forking the kernel.

    At first I was amused when I read this. "Resisting forking the kernel"? ANYONE can fork the kernel (and any piece of GPL'd software) as they see fit. If IBM wants a mainframe-optimized, heavily altered Linux kernel to run on their machines, they can do it NOW. It's them who are resisting, not Linus. But then I started to think: why?

    Buzzword compliance. Linus can't stop a kernel fork, but he CAN stop people from calling such fork "Linux" -- it's a trademark. And IBM wants badly to shout to the media, "We run Linux!" It's a better PR move. That's why they're trying to coax Linus into cramming the mainframe changes into the main kernel tree. Linus won't do that, and rightly so IMHO.

    On the other hand, IBM could do the fork and negotiate a "blessing" from Linus (read: buying the right to call the new kernel "Linux something" for an obscene amount of money). I'd guess that's what's going to happen eventually.

  • Windows? Never heard of it. :)

    Your objections are largely valid, so I'll not try and talk you out of them.

    As for the horrors and defects of Linux, I generally agree. I know it and other *nix aren't perfect... far from it. A Hummer isn't a Porsche, and a 747 isn't a turboprop. Agreed?

  • Silly rabbit, don't you know HURD is for programmers with a political agenda?

    But seriously, HURD is Richard Stallman's baby. If this guy is upset with the Linux kernel people, I would think that RMS would not be an improvement...

    Now, maybe an OS like BSD... that seems to be the most "politically neutral" operating system... not to mention that the licensing is the least restrictive.
  • Can any kernel gurus help me out? I'm under the impression that even if you use Linux modules, you still need a hardcoded stub in the kernel to access them. For example, you can't compile a newly released module without recompiling the kernel.

    If I'm right, that'd be the most appealing reason to use a micro-kernel in my opinion. If I'm wrong, please let me know.
  • Management have heard of Linux. They know it is good. It must be good because Lots Of People Are Talking About It.

    HURD, OTOH, is unknown. It could be demonstrably better in every respect, but People Aren't Talking About It, so it gets ignored.

    Even techies suffer from this a bit. They don't know who else is using HURD, so they don't trust it. Added to this is the proprietary software mentality. Of course 99% of Linux software can be recompiled for HURD, but people remember the switch from Windows to Linux. All their software had to be replaced with equivalents. Of course they don't have to with this switch, but their subconscious is telling them that they do.
  • Not only an open source one, but a stable, fast, reliable and a whole bunch of other features were needed.

    Like a cool name (which is why I originally used it over BSD; thank God for the luck granted that day.)

    I think you nailed it. There are a lot of reasons Linux flourished as it did; neither BSD nor ESPECIALLY HURD has all the required ones to even make a dent in Linux's force.

    -Nathan
  • But why don't they (IBM, SGI, et al) grab HURD and add to it all the things they find important for 'big iron' support?"

    Because no one buys an operating system named HURD. That's the short answer.

    The longer answer is that they want linux because right now linux has a name which is infinitely marketable. Even my father has heard of linux. A better question is why don't they run AIX/IRIX/whatever on their big iron instead of linux (if linux won't develop in the way they want)? Which is what they are already doing. Why would someone suggest taking HURD and developing it? That's repeating a lot of hard work already done with linux on an operating system that no one outside of the slashdot community has heard of, tied to even more radical open source ideas than linux.

  • by sales_worldwide ( 244279 ) on Wednesday October 18, 2000 @02:30AM (#697470) Homepage
    Why not run BSD? After all, this'll be more reliable, and the code is more controlled. And The NetBSD boys would be more than happy to have another set of patches added to their already amazingly portable source.

    Also, I would guess that the boys at IBM know about the '#ifdef IBM' statement.

  • OK, Objective C was written by Brad Cox in 1980. He started out working with Bjarne Stroustrup but they had differences of opinion and split up. Mach was started on in 1985 at CMU. It is written entirely in standard C.

    Although early Objective C runtimes had some inefficiencies, this has now been corrected and a method is almost as fast as a regular C function call. Even so, the reason NeXTSTEP was slow probably had more to do with Mach messaging and DPS than Objective C.

  • by Anonymous Coward

    The HURD is really pushing the envelope for "piece of software least likely to ever do anything". It's been in development now for donkey's years and yet it is still only in a state where the only people that can actually set it up and program it are the half a dozen people who actually code for it.

    Sorry, but the HURD is nothing more than another piece of Stallman ego-stroking designed to get back at Linux for not being called GNU/Linux (or "Lignux" or other abominations Stallman wanted). Just because it isn't ideologically sound enough, he's pushing the idea of a new kernel, something which we really don't need.

    Bah, who cares about the HURD? Nobody is the answer, from what I've seen. Yet again, Stallman has let his ego take precedence over rationality. What a surprise.

  • Hmm. I'll respond to this almost-troll.

    • (DVD? Hardware RAID even? ....)

    Yes, and yes. As always, it depends on what you want to do. Can we play DVDs under linux? No (well, I can, but I've got a Creative Dxr2 board which has a driver), but can we read data? Damn straight.

    Hardware raid? Lots of those are completely independent of the OS anyway. Raidtech, Falcon, etc.

    I won't address the others right now, I'll leave it to other fortunate readers.
  • In an interview in Linux Magazine with "IBM's Linux point man" Irving Wladawsky-Berger (available here [linux-mag.com]), he specifically states that IBM does not care which direction Linux goes. And judging from the text of the article it seems they would kind of prefer to keep AIX as the high end and have Linux support the low end.

    Excerpts:

    ON THE DIRECTION OF LINUX

    Wladawsky-Berger:
    Now the thing that I don't know is the priority that the Linux community puts on making Linux enterprise-ready. There is so much going on with Linux in high-volume applications: Linux in embedded client applications, Linux on the desktop, Linux in appliances. This area is so full of possibilities that the community could say, "Irving, this is very nice, but this is our highest priority right now. So, given that you have AIX already, this Linux compatibility in AIX is perfect, because then you have a totally complementary Linux strategy: Linux on Linux, and then Linux applications on AIX."


    ON FORKING THE KERNEL

    LM: There seems to be a sense that some of these enterprise features may detract from Linux on the low end.
    Wladawsky-Berger: I know, and I know that Linus [Torvalds] and the team are resisting forking the kernel. That's always one possibility: to have multiple kernels, and I know so far nobody wants to do that. And if that is the wish of the community, we are cool with that because that's where AIX is complementary to Linux.

  • by hairychest ( 140038 ) on Wednesday October 18, 2000 @03:35AM (#697475) Homepage
    I work on the Linux for S/390 project & have worked with the Mach microkernel, but not with HURD. The Mach microkernel is a fantastic piece of code with fantastic ideas, let down by the RPC abilities of C. The guys who developed the Mach microkernel, I think, realised the limitations of C & sent some guy into the bushes for a few years & he invented Objective C; this unfortunately runs like a dog with all the message passing & is why OpenStep is so slow. I even considered writing my own language (which would probably have taken me decades, so I gave up on the brainfart) which could, while running, bind to local objects while sending messages to remote objects rather than using messages everywhere. I even suggested the Mach microkernel when I first joined in the early days of the project & personally am glad I was completely ignored, for the following reasons:

    1) Linux is pretty much bog-standard Unix: no learning curves for most developers. The Mach microkernel is a complex, poorly documented beast with APIs documented by doctorates & which is not exactly easy reading. By the time we'd have got as far as we have with this project, I'd be about halfway through reading about & fully getting to grips with Mach.

    2) If we started on HURD I'd be dead by the time we got it running as well as we have Linux running now; my life, like most people's, is too short. (We initially had only 5 developers on a skunkworks team.) & the HURD project most likely will never grab the mindshare or momentum of Linux. Linus is a great marketer & great at rallying new support, & it is getting better every day rather than every decade. (IBM couldn't buy this stuff; it didn't with OS/2.)

    3) The Linux project is very well supported by documentation & web sites. It isn't like you need to know someone or be on an inside track to become expert in it, just web access.

    4) There are a few places where the basic Mach kernel is weak: for instance, no support for drivers as modules, lack of driver support, and messaging that is pretty slow & used for everything, great for building clusters but overkill otherwise. As Mach uses separate address spaces for everything, this improves protection but decreases performance.

    5) Getting new code (e.g. FireWire or S/390 support) into the standard Linux kernel is pretty easy; admittedly they are pigs for accepting replacement code for stuff which works in most cases, even if it improves things. Plenty of IPV4, scheduler & filesystem improvements are posted regularly but are seldom accepted (I'd suspect Hans Reiser has a lot of grey hairs at this stage). From what I hear, a lot of developers left the BSD project because their patches simply weren't being accepted by the chief maintainers, who liked their names over 98% of the source code.
  • If the 'big iron' vendors want a Linux-compatible OS with support for their hardware, why not just add support for Linux binaries to their existing OSes? The problem, of course, is that Linux is what is being demanded, rightly or wrongly.
  • Linus did clearly refused the SGI patch to fix this situation.

    Yes, he refused (as I said) to apply the patch to the 2.4.x kernel. Changing the size of key id numbers this late in the game is too much. There is, after all, a code/feature freeze on right now. That says nothing about what will be allowed into the 2.5 tree when it is opened. IIRC, he didn't say no, never; just not in 2.4.

  • by thule ( 9041 )
    The Linux 2.4 kernel has much improved SMP capabilities. Linux is already moving from SMP to NUMA machines with the help of SGI and IBM.

    Check out http://www.rsbac.org/ for what Linux *already* has for B1 level security.

    It seems to me that Linux has so much going for it now that it will be hard for other projects to catch up. They may be able to match or beat certain areas, but Linux as a whole will continue to be even more compelling.
  • Objective-C wasn't developed by the folks who did the Mach kernel. Furthermore, an Objective-C method call is roughly the same speed as a C function call in a good implementation.

    Objective-C has a few problems inherited from pre-ANSI C, but those would be fixable. I think it would be a good language to write kernel components in. IMO, it would certainly beat the mess of dispatch tables and dynamic loading hacks currently found in the various operating system kernels based on C.

  • Hurd is based on the Mach microkernel, which is released under a BSD license. You don't believe me? Hear this: pnm://media.cmpnet.com/technetcast/cb/tnc_0381.rm - No RealPlayer? Use trplayer, a text-based Real media player that uses the RealPlayer libraries: http://freshmeat.net/projects/trplayer/?highlight=real+audio+video
  • I used to hang a lot of hopes on the Hurd. I even ran NetBSD and FreeBSD for a while just because they were the OSes of choice for bootstrapping the Hurd.

    But the Hurd just doesn't have nearly the critical mass Linux does, and likely never will. It's still enshrouded, to some extent, in the early days of Hurd's totally closed development model, for one thing.

    Like it or not, Linux is the premier free kernel, particularly in terms of mindshare, and mindshare means a LOT to a company like IBM.

    A linux emulation layer sounds great, but it's a little risky; you never know when something will be added to the real linux, that would be a total PITA to emulate with your sorta-kinda-almost-similar kernel. If I were running a big business, I'd seriously frown on that kind of unnecessary risk.

    I say if Linus doesn't want big iron patches in the mainline kernel, that's a shame, and oh well, let's fork it. I'm guessing it really hurts a company like IBM to commit to Linux like it has, and then be told, "sorry, you're a second-class citizen in the Linux world". I'd think twice before saying that to IBM, if we want to continue getting big-company support.

    And yes, big-company support is very valuable for linux.

  • It might still be a hardware issue. I will always suspect the hardware first before the software. Having replaced the following all on one machine:
    1. Video card.
    2. Monitor (Got the shakes after about 30 minutes of being on)
    3. Hard drive (bad firmware)
    4. Mouse (Netrek damaged the left button :))
    5. CD-ROM twice (Power died and loud operation)
    6. Motherboard and memory (DOD - fried)
    7. Memory again (thank you ECC)
    Here is a link concerning PCI cards showing up twice: i386/10935 [freebsd.org]. Is this the problem you are referring to?
  • I went to an SGI information session at my college a while back. They talked about the challenges of getting Linux to run on 128+ processor systems. They talked about things in Linux that were hurting performance, like the single kernel lock in Linux, as opposed to the 10^9 different locks you can invoke in IRIX. But they stressed that they would not even consider forking.

    The reason they started using Linux rather than IRIX was that application vendors wanted them to fund ports of their software from other Unixes to IRIX, and convincing some vendors to port to IRIX at all was impossible. Linux solves this problem because every company on the planet seems to be coming out with a Linux port of their software.

    You can't seriously argue that Linux is better than IRIX or AIX on big computers. Linux was made with single-processor systems in mind, and when a design decision in the kernel will slow down every computer with less than a hundred processors but speed up an Origin 2000, guess what's gonna happen? But it makes sense for SGI to give its customers the option of running an OS that will run lots of Unix software. If they don't, and the app they want is only certified on Solaris and Linux.......
  • Not to mention that BSD has only very very recently added SMP support whereas Linux's has been maturing for years.
    SMP support has been in FreeBSD (RELEASE) for about two years now. This is according to the release notes for 3.0.

    Are you thinking about the advanced SMP code that is coming from BSDi?
  • Yes, HURD needs quite a lot of work to become stable,

    Fact is, Linux is already stable. It is usable, there are device drivers for it, and it is already quite mature for an OS that is almost 10 years old. Hey, Mac and Windows are both older than Linux (including Win 3.1). Maybe in a few years HURD will be a contender. It has taken these developers less time to port to the MF and get bigger hardware support than it would take to get HURD into stable shape.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • You wrote: Linux is more popular and therefore better?

    In a democracy perhaps - but democracy sucks.

    But by that token windows is better. Why not run NT on the BigIron machines then? I'd personally sooner see that than linux.

  • Actually, the HURD is not a kernel. Mach is the kernel; HURD is a collection of servers that runs on Mach to present a Unix environment.
  • by Myddrin ( 54596 ) on Wednesday October 18, 2000 @03:48AM (#697488) Homepage
    WHAT?????

    The whole _POINT_ of Crusoe is that you _DON'T_ run native code. EVER. That ruins the whole beauty of the architecture. That would be like taking the Eiffel Tower and removing two of the legs...

    The whole point of the code morphing stuff is that they can change the native instruction set whenever they want w/o having to worry about it affecting in-production proggy's/OS's. Thus they can move to the latest, greatest ideas in microchip design w/o losing customers. This is clearly stated on their website and in numerous articles about Transmeta.

    Linus isn't working on this because the idea goes counter to everything Transmeta is working on.

    According to one article I read... maybe it was on linuxtoday.com.... I disremember. These patches are not being accepted simply because the memory management patches to support these large machines don't scale back down to desktops, laptops or handhelds. In other words, making the necessary changes in the kernel to roll this stuff in would affect performance on your server, desktop, or PDA. Considering that 99%+ of Linux users are running on these platforms, it doesn't make sense to apply these patches.
    ---
    RobK
  • Linux succeeded because there was a desire for an open source OS. Nobody has yet convinced me that HURD is going to make my life better. That it is going to do a better job.


  • Yeah, but for some Hardware RAID you need drivers. The Highpoint 370 IDE controller on the Abit KT7-RAID specifically.

    http://www.linux-ide.org/ [linux-ide.org] has the drivers for the Highpoint controller, and many other supplementary IDE drivers.


    --
  • Ever see Conspiracy Theory....?

    I swear to God they're after me. Linus is trying to destroy his life's work for his own financial gain by rewriting all of Linux in Crusoe assembler....

    There goes the neighborhood....

    Just because you're paranoid doesn't mean I'm not out to get you!!!!

    :)
  • Hurd was started in the 80s. It was already considered woefully late when Linus started his OS in 91. Read this:

    http://www.kde.org/food/linux_is_obsolete.html [kde.org]

    It gives a good outline of the status of open source and the small *nix community in the early 90s. It also shows Linus didn't become a bastard overnight :)

  • can one linux kernel fit all?

    You're right, of course it can't; it already doesn't. Take a look in the linux/arch directory of the kernel source tree and you'll see the different Linux kernels for all the different architectures. If I understand it correctly, each architecture has its own core kernel (though most are probably strikingly similar to the i386 one from which they were all ported). Each 'core' kernel provides the same interfaces as the others so the rest of Linux can be shared among them, but at the very basic level they are all different kernels. I don't foresee it being a problem if we have an entire 'big iron' tree with a specialized kernel core and special modules which meet the requirements of 'big iron'.


    The linux kernel can be the best solution on any hardware, you just have to pick the right one.

  • ... Linux is more popular and therefore better? ... But by that token windows is better. Why not run NT on the BigIron machines then? I'd personally sooner see that than linux.

    Note that to some limited extent this has been tried and failed, namely NT for Alpha, PPC and ..er.. one other I can't recall. The problem is that the OS is in the hands of one group with one set of goals, which may not match the goals of the platform implementors, and these implementors cannot have the freedom they get from an Open OS to make whatever changes are necessary (by definition: to implement a non-Open software package on another platform you can bet there are licence agreements to be signed complete with a whole bunch of restrictions and lists of what can/can't be done. Likely including the requirement to pass back any innovations made to the original owner who now gets R&D done free by a 3rd party...)

    Essentially the implementors will always be scrabbling behind the OS's owners, trying to keep up, instead of being part of the development process.

    Also note that the definition of "popular" you're using may not match that of those who make these decisions. Certainly there is more public discussion of the benefits/shortcomings of the Linux environment at the moment since A) NT's been done to death over the last n years B) since we can all get at the code we've got more to talk about.

  • What exactly do they need to run on 'big iron'? I am curious. I can't think of anything offhand...but I don't know much about the hardware architecture of mainframes, and I don't know the OS philosophy used in the past.
  • "There are the L4Linux, L4KALinux and MkLinux microkernels."

    Well, there is actually no such thing as L4KaLinux. There is L4Ka [l4ka.org] though, which is another implementation of the L4 µ-kernel, and which is able to host the L4Linux kernel (leaves you with one less code fork ;-).

  • > I can fully understand him not accepting patches
    > that are going to end up being a _lot_ of work
    > to satisfy less than 1% of his "customers."

    I agree completely.

    I didn't mean it in the context of "Couldn't linus make it such that..." I meant more of the "I doubt he would reject it if the patch was submitted as an optional replacement"

    It wouldn't have to be a lot of work for him and others - just for the people writing mainframe code. Let them maintain the code that they submit. Even if it's completely broken, it's their own problem. I don't mind the kernel source getting a tiny bit bigger for their convenience.

    Certainly it already contains many, many device drivers that I will never use - and that's true for most people. The only downside to including it - IF it is submitted as a proper OPTION so it won't degrade performance on "miniscule iron" - is making the code larger: adding a feature that most people will never use.

    Really, only a few core kernel options can be said to be used by "most people". Take the NE2000 ethernet driver. I use it on a few machines. "Most people" don't. The same is true for every other ethernet card driver. There is no one single ethernet card that "most people" use (there may be one that is used more frequently than the rest - but I doubt it's enough that more than 50% of Linux users have it).

    My point simply being: I would bet, from the discussion about how it would "affect small systems", that the patch was rejected because it replaced the original code rather than simply being an optional drop-in - a problem which the original authors should fix and resubmit rather than complaining about.

    > Also, it could just be that he thought it
    > was too late in the 2.4 development cycle

    While very true, it's not something that most kernel hackers could test or work on anyway, since it only affects the small subset of people who are on big iron - so it doesn't matter, does it... That would be a perfectly valid reason for rejection. However, the reason being talked about is the effect on smaller machines - which is a completely different issue and can be made insignificant, as I said above.

    -Steve
  • > microkernel == mental masturbation
    >
    > Microkernel is technically more "correct". Macrokernel WORKS.

    Great. You read the original Andrew/Linus flame. Now, you may want to use your brain a bit.

    First, with this kind of mentality, you should still run DOS. When Linux started, DOS was the No. 1 operating system (Windows was DOS-based), and actually worked. There were games, productivity applications, dev environments, etc, etc. Or you should still run MacOS. Anyone could have said (and many did, at the time):

    preemptive multitasking == mental masturbation

    preemptive multitasking is more correct; cooperative multitasking WORKS.

    You may think that it is not comparable, but it unfortunately is.

    A microkernel enables you to run multiple OSes at the same time. It enables user-space drivers. It enables personal OSes for each user at the same time. It enables hacks and modifications beyond what is possible with Int21 (Ooops, I meant modules), without crashing the OS. It enables much better network transparency too.

    Maybe it'll work one day. Maybe it won't. But having something that WORKS NOW doesn't mean that it'll work forever.

    Cheers,

    --fred
  • Did your group consider Plan 9? They even designed a new protocol (Internet Layer) for efficient RPC. Maybe at the time your group got started it wasn't "open source". It still isn't completely open, but it's getting closer. Plus, Lucent (Bell Labs) isn't exactly unknown, like GNU/HURD.
  • Can any kernel Guru's help me out? I'm under the impression that even if you use Linux modules, you still need a hardcoded stub in the kernel to access it. For example you can't compile a newly released module without recompiling the kernel.

    This is false as far as I've seen. I've been able to select additional modules, make dep, then make modules && make modules_install && depmod -a for some time now (all thru 2.4 at least, and probably for 2.2, though it's been a while since I've built a 2.2 kernel). There may be some static dependencies for classes of functionality which then allow for particular drivers/functions to be modularized, which would require a kernel rebuild and reboot, but I haven't come across them.

    Your Working Boy,
  • If you call another OS a duplicate effort, then that would be the only instance in which you would be correct.

    Duplicate efforts are fine in open source, because they mean someone thinks something needs to be changed. Linux itself is a duplicate effort of other Unixes. Sorry, but the Anonymous Coward above clearly has no idea what he/she is talking about.

  • by jguthrie ( 57467 ) on Wednesday October 18, 2000 @04:10AM (#697505)
    AC Wrote:
    The HURD is really pushing the envelope for "piece of software least likely to ever do anything". It's been in development now for donkey's years and yet it is still only in a state where the only people that can actually set it up and program it are the half a dozen people who actually code for it.

    As it happens, I got Debian GNU/Hurd up and running just yesterday. It took about two hours, counting the time it took to install Debian GNU/Linux and the time it took to download the installation .DEB files over a 115K ISDN line.

    Of course, I can't actually do anything with it because the kernel is only about 1/3 done and there are essentially no applications available for it. Perhaps if the FSF decided to put their efforts into the Hurd instead of yet another major version of GCC, GNU EMACS, and the GNU LIBC, they would be able to actually finish it to the point where Linux was at kernel V0.10, the version I first used. Some additional stability would be nice, too.

    I think I'll reformat the disk and see if I can't get OpenBSD on it.

    AC also wrote:

    Sorry, but the HURD is nothing more than another piece of Stallmann ego-stroking designed to get back at Linux for not being called GNU/Linux (or "Lignux" or other abominations Stalmann wanted). Just because it isn't ideologically sound enough he's pushing the idea of a new kernel, something which we really don't need.

    Ummm, that's not factually accurate. I don't know if the code for the Hurd (the name for the collection of services running under Mach) antedates the arrival of the Linux kernel or not, but the project as a whole has been going on since the early 80's and the overall design of the system has been frozen for at least that long. In other words, the Hurd was not created in response to the success of Linux. It couldn't possibly have been.

    However, I do share your doubts as to whether or not they're ever going to finish. They definitely bit off more than they could chew with the Hurd.

  • The xxxBSD guys are really underestimated as far as I can understand the situation.

    Take FreeBSD as an example. They have joined forces with BSDi and got fine-grained SMP into their project. The SMP work is developing at a fast pace, and I think we can expect FreeBSD to become superior to Linux on computers with many CPUs during the next year.

    Trusted BSD is also a FreeBSD project that aims to be portable to the other xxxBSD distributions. As you perhaps know, Trusted BSD is aiming at 'B1' security (except for the validation part).

    All in all, during the next year FreeBSD will be a very interesting OS for computers with lots of CPUs and where security matters.

    There is really no reason to avoid using FreeBSD and its cousins for 'big iron' computers.

    //Pingo
  • by firewort ( 180062 ) on Wednesday October 18, 2000 @04:52AM (#697508)
    Here's a short answer to why IBM doesn't fork the Kernel.

    1) Public Opinion / Community perspective

    Linux is built on popularity. It gets marketing's attention because it's a popular choice among Admins. Therefore, IBM has to jump on the bandwagon and support the thing in its many incarnations. IF IBM FORKED the Kernel, it would be a marketing nightmare as Slashdot and others would do a 180 from "IBM is doing Linux solely for PR" (untrue) to "IBM wants to take our baby and twist it for its own evil purposes!" (also untrue.)

    Don't believe Slashdot would be this unkind? Just look in the archives for RedHat 7.0 and see how supportive the community was at that time.

    2) IBM has a huge investment in Linux.

    IBM has a Linux Dev Center newly established in India. IBM has a Linux Compatibility Org to test and ensure that every IBM application for Linux will work with the 8 standard NLS languages, doesn't break with standard libraries, and will function on most all recent distributions.
    IBM has invested huge amounts of resources in the IBM Journaled File System for Linux. IBM took part in developing Linux for the S/390. It also runs on the AS/400, and there's work on the RS/6000 distribution as well.

    The investment is too great to move to HURD.
    The HURD community and installed user-base is tiny. Linux has name recognition.

    Currently, most new IBM Web Application Server / WebSphere type products support NT/2000/AIX/Solaris/Linux.

    Moving to BSD would be smarter, but BSD hasn't got name recognition, even if it is more widely installed as a server OS.

    Moving to HURD would be suicidal.

    Yes, there's AIX for Big Iron, but AIX is a commercial server OS and doesn't get the community's collective hearts pumping.

    There's mindshare, which is what we logically know. AIX exists for big iron, why do we need a linux for it?

    and then there's *HEARTSHARE*, which is the emotion-based decision. We know in our hearts nothing makes us growl with manly pride like having Linux, the Free-Little-Operating-System-That-Could-TM, running on the biggest Iron made.

    Heartshare will keep IBM focused on Linux as long as there's a community of people running Linux.

    AFAIK, Debian is the only big-name that's bothered to propose a HURD distro. http://www.debian.org/ports/hurd/
    I'll believe it when I can install it.
    (Tho, it uses Mach and linux hung around it... if it was Mach and BSD, it'd be a relative of Darwin-x86! hmmmm....)

    In Summary

    IBM has 20 top-level links to Linux related dev sites on its Intranet. These snowball into deeper levels. IBM has a huge investment in Linux, and in having it run on their boxen, big iron or not.

    And you want them to drop it all and move to HURD?


    A host is a host from coast to coast, but no one uses a host that's close
  • The last story about Big Iron and Linux said that there were problems on machines with 256MB of RAM. Damn! I have that much in my desktop, and I've been thinking of upgrading it. In fact, I know quite a few software developers with 512MB in their desktop machines. So what is Big Iron? It's been a few years since 256MB of RAM was considered a lot.

    Please tell me that it means SMP (something like 64 procs as I have 2 in my desktop). Please tell me that it means fiber channel (or whatever it's called) RAIDs. Please tell me it means GBs of memory. Please tell me it means multiple 1000BaseT connections. That's what I would imagine is meant by Big Iron.
  • by tytso ( 63275 ) on Wednesday October 18, 2000 @05:04AM (#697517) Homepage

    Linux grew from humble beginnings, small i386 machines with little memory and scant resources. unfortunately, it's kept a lot of baggage from those days even though it's come a long way and been ported to many architectures.

    That's simply not true. Take a look at the latest kernel. It can support up to 64 gigabytes of memory, and it can scale quite well up to 4- and 8-way SMP boxes. People have booted Linux on 32-way ccNUMA machines. Yes, it's not optimized for ccNUMA yet, and it probably doesn't scale all that well for >8-way SMP yet, at least for many workloads, but it's currently a very long way from "small i386 machines". A "small i386 machine" today is orders of magnitude bigger than the original i386 back in 1991, and is comparable to the vast quantity of Unix machines which are used as servers today.

    A lot of the "we scale to 64 nodes" is I think more machismo more than anything else. Sure, for system vendors they have better margins than the smaller machines. But you don't sell that many of them. One of the reasons why Cray was getting killed was that they only focused on the high-end, and they got their lunch eaten by the "low-end" machines moving upwards and killing off all of their market, so that they only had one or two customers. (Or should I say "One Agency". :-)

    So personally, yes, people are interested in making Linux scale to the bigger machines, but some of this I suspect is either (a) because of the technical challenge (for the Linux kernel developers) or (b) a marketing exercise (for the non-technical folks who are interested).

    There are many things in the kernel which do things the x86 way and force the other architectures to munge the way the native system does things so that they look like the x86 way. When I last looked at the SPARC port, the memory management system had to jump through hoops to change the way the SPARC processors do VM so that it looked to the rest of the kernel like the way the x86 architecture does it.. it was very inefficient. No doubt these problems haunt the other architectures too.

    That's simply not true. There is an awful lot of that "complexity" which is optimized out by the compiler. So you should take a look at the assembly language before you make these sorts of complaints. Secondly, on the UltraSparc particularly, all of the virtual memory translations have to be done in software, since the hardware basically only provides a TLB which is manually programmed by the OS. This may be the source of some of the complexity which you saw, and which is required by all operating systems for that platform. This is actually a good thing, though, since the OS can do a much better job of managing the TLB than a typical hardware platform, and there are hooks in Linux that are especially designed for the capabilities of the UltraSparc VM architecture. So to say that Linux is only optimized for the x86 architecture is very much overstating the case.
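
    Conceptually, "managing the TLB in software" means the miss path lands in the OS, which is exactly where those hooks come in. A rough sketch (every helper name here is invented for illustration; this is not the real sparc64 code):

        /* On a TLB miss the hardware traps to the OS, which decides
           which translation to load -- so the OS, not the MMU, owns
           the lookup and replacement policy. */
        struct tlb_entry { unsigned long vpn, pfn, flags; };

        extern struct tlb_entry *page_table_lookup(unsigned long vaddr); /* invented */
        extern void tlb_load(const struct tlb_entry *e);                 /* invented */
        extern void do_page_fault(unsigned long vaddr);                  /* invented */

        void tlb_miss_handler(unsigned long vaddr)
        {
            struct tlb_entry *e = page_table_lookup(vaddr);
            if (e)
                tlb_load(e);          /* OS picks what to load and evict */
            else
                do_page_fault(vaddr); /* no mapping yet: take the slow path */
        }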

  • . . . another bloody operating system. When will it penetrate the hacker mind that the vast majority of computer users really don't care about the OS? 90%+ of the great unwashed are using Windows and are likely to be still doing so in 10 years' time. The real success stories of the Internet/big computer scene are Solaris and FreeBSD. We have servers that stay up year-on-year running those systems.

    3 replies beneath my current contempt
  • by Valdrax ( 32670 ) on Wednesday October 18, 2000 @10:05AM (#697523)
    One of the problems is that for many mainframe systems, massively parallel processors are a common feature. Linux's SMP support isn't really that great. There are some massive improvements in the 2.4 kernel, but some fundamental design decisions get in the way.

    For example, Linus is a big proponent of a non-preemptable kernel. A preemptable kernel is one that can allow tasks within the kernel to be preempted by other tasks. The Linux kernel does not allow tasks to be preempted by anything other than interrupts.

    As an aside, interrupts in the Linux kernel typically run a small bit of code to set up some state information about the interrupt, flag a "bottom half" to run later, and then get the hell out of Dodge. The bulk of the work is in the "bottom half," which is a bit of code enqueued to run when the kernel gets the time to get around to it. For example, the timer interrupt increments the jiffies counter (an internal measure of sub-second time) and the count of lost timer ticks (ticks that haven't yet been handled by the bottom half), and flags the bottom half for later execution. The bottom half will later update time-related statistics and service kernel timers.
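
    In 2.4 terms, the split looks roughly like this (a minimal sketch using the tasklet flavour of bottom halves; the device and every name in it are made up):

        #include <linux/interrupt.h>
        #include <linux/sched.h>

        /* Hypothetical 2.4-era driver state. */
        static unsigned long mydev_events;

        /* Bottom half: the bulk of the work, run whenever the kernel
           gets around to it. */
        static void mydev_bh(unsigned long data)
        {
            /* drain mydev_events, update statistics,
               wake up waiting processes... */
        }

        static DECLARE_TASKLET(mydev_tasklet, mydev_bh, 0);

        /* Top half: record minimal state and get out of Dodge. */
        static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
        {
            mydev_events++;                   /* a small bit of state */
            tasklet_schedule(&mydev_tasklet); /* flag the bottom half */
        }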

    Basically, though, once the "top half" of an interrupt is handled, the kernel must immediately go back to what it was doing. It is guaranteed that kernel code will not be preempted. This makes the kernel cleaner, easier to understand and maintain, and faster on uniprocessor systems or systems with few processors. However, there's a reason that the Solaris kernel is preemptable. Preemptability is a serious performance enhancer for massively parallel systems, but Linus is not budging on this. There are a number of alternative solutions that the Linux kernel is following, such as its system of "bottom halves," some of which can now be run on other processors in the 2.4 kernel (if I understand correctly). Essentially, a preemptable kernel requires that fine-grained kernel locks be scattered all over the kernel, and that more code in the kernel be considered "critical sections." This is somewhat of a hassle.
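
    To make the locking point concrete, here's an illustrative fragment (not real kernel code; "struct work" and these names are invented):

        #include <linux/list.h>
        #include <linux/spinlock.h>

        struct work { struct list_head list; /* ... */ };

        static LIST_HEAD(pending_list);
        static spinlock_t pending_lock = SPIN_LOCK_UNLOCKED;

        void add_pending(struct work *w)
        {
            /* On a non-preemptible uniprocessor kernel, no other task can
               run kernel code until we return (only interrupts could
               intrude), so this list update is safe by construction.
               Make the kernel preemptible, or go SMP, and every such
               touch of shared state becomes a critical section needing
               a lock -- multiplied across the whole kernel. */
            spin_lock(&pending_lock);
            list_add(&w->list, &pending_list);
            spin_unlock(&pending_lock);
        }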

    Solaris scales extremely well partially due to its kernel's preemptability and heavy use of threads. Solaris's kernel can be preempted by another task and can dispatch kernel threads to work on other processors. This keeps system call hungry processes on other processors happy and well fed. However, it does incur some overhead which isn't justifiable on desktop machines. I'm reminded of a funny quote from the 2.2 kernel mailing list I found while looking for info on this: "MVS spends a lot of time running OS algorithms to allow full preemption that Linux wastes on running user applications."

    Basically, this is just one of a variety of wide-sweeping changes to the kernel that go many levels beyond a simple patch that "Big Iron" vendors are pushing for. Linus is against it happening in the first place. While Linus and Co. are coming up with a lot of innovative alternative solutions to the problems, "Big Iron" wants to go with proven solutions that they know will work for their systems. In addition, the changes would impact performance on desktop uniprocessor and SMP machines. In essence, these are political issues.

    RTLinux has already forked the code to handle this. RTLinux runs a hard-RT-capable microkernel that runs the Linux kernel as its idle process: when an RT process needs something at exactly a certain time, it will get it, and all normal processes will be run under Linux when the system isn't busy doing more important things. There's also the uClinux fork, an attempt to make something Linux-like on systems without hardware memory-management units, like the 286. Code forks already exist, often for good reasons.

    There's a great article on kernel design and its impact on real-time systems here:
    TradeSpeak.com - White Paper Library: Linux for Real-Time Systems: Strategies [tradespeak.com]
  • Instead of the Hurd, why not start with Darwin? Both the Hurd and Darwin derive from the same open CMU project (Mach). Darwin has probably seen a bit more use and bug fixing than the Hurd.

    The design of the Mach kernel should also make it easier for companies to put in their favorite pet "enterprise" features (LVM, JFS, whatever) without messing up the system for everybody else.

  • Added to this, it's a microkernel system. Big Iron doesn't need microkernels. They're a little bit slower by nature, but you get the advantage of a small, configurable "mostly user-space" kernel. That's pretty cool for palmtops and everyday PCs but not Big Iron, I'd say. That's why QNX doesn't run on mainframes.

    Actually, a microkernel would be excellent for a huge massively parallel mainframe. The more elements of the system you spin off into separate userland processes, the more you can run at once. Most of what a good microkernel does is pass messages between processes and processors and schedule things. This minimal overhead means less kernel blocking and more ability to spawn off tasks on separate processors. There is less "critical code" to worry about. The overhead in a microkernel that everyone speaks about is mainly an issue for desktop, not mainframe, systems. Oh, and QNX can run on SMP systems. Read about it here [qnx.com]. It only seems to go up to 8 procs currently (wheee...), but it sounds as if it could potentially do more.
  • Linux has never had a Code Fork in all of its long existence.

    "What about the Alan Cox series?"

    Ok. Apart from the Alan Cox series, Linux has never had a Code Fork in all of its long existence.

    "There are the L4Linux, L4KALinux and MkLinux microkernels."

    Ok, ok! Apart from the L4Linux, L4KALinux and MkLinux microkernels, and the Alan Cox series, Linux has never had a code fork in all of its long existence.

    "There are patches from SGI (XFS, kernel profiling), IBM (JFS), and various other developers (timing patches, nanosecond clock patches, reiserfs, ext3fs, gfs, the international patches, freeswan, etc)"

    OK!! So, apart from the patches, the microkernels, and the Alan Cox series, Linux has never had a code fork in all of its long existence.

    "What about the UserLand Kernel?"

    SHUT UP!

    (Apologies to the Monty Python crew for this horrible bastardization of their excellent Roman sketch in Life of Brian)

  • Folks - IBM already has a stable OS for mainframes - the only reason they are bothering with Linux is to hop on the bandwagon and capture some good PR.

    That being the case, why would they adopt a non-linux OS?? This wouldn't make any sense at all. If not for the PR value of adopting linux, they could simply stick with what they already have.

  • >_ Because none of them could use my NE2000?

    That's news to me, having run FreeBSD with an NE2000 for years . . .
  • :) imagine big vendors trying to get along with RMS...
  • Part of the reason why some of the big server types mentioned above want Linux is that it already has a large mindshare and installed base (a lot bigger than BSD and Hurd, anyway).

    These guys already have their own Unix OSs with the high-end features that they want to see in Linux, and they aren't going to be dropping their current commercial Unix OSs quickly - that'd annoy a *lot* of their customers, many of whom would then go straight to Sun, which is the last thing they want.

    Though there will be many different reasons and objectives etc., I think the higher-ups will want the mindshare particularly...

  • Ain't the new gigahertz Pentium IIIs pretty close to alpha-level (microcode fixes every few weeks)? Actually, though, I think the definition of beta is that you give it to other people.
  • Alright. Normally I don't respond to posts like this that simply don't understand what GNU is really about, but the fact that this gets a (4, Funny) shows that those doing moderation don't understand it either.


    RMS (and the FSF) do not forbid making a profit. They encourage it. Check out this philosophy page [gnu.org] for more information.


    Two other issues here: "pirate" is really the wrong word to use in almost every instance. Try "unauthorized sharing". And having source but only being allowed to write a patch for it severely stunts software development. That's why Minix didn't fly, but Linux (as a kernel) did.


    One of the few things here that was almost correct: nobody has a right to make a profit by restricting others. In any way.


    There's "funny 'cause it's true" but this post is only funny if you believe a (rather common) misconception.

  • by MROD ( 101561 ) on Wednesday October 18, 2000 @04:39AM (#697546) Homepage
    What's wrong with the HURD

    The HURD is based upon the microkernel technology which was in vogue in the Computer Science community about 10 years ago. Time has moved on: people have taken the ideas from this technology and incorporated the best bits into the semi-monolithic kernels of today, and microkernels are now looked upon as a relic.

    Microkernels are theoretically so much cleaner and more efficient. They are built on the idea that every part of the system has its own thread of execution and that the "kernel" contains nothing more than a facility to pass messages between the threads. In some extremes of the idea even the scheduler is just another thread. This design means that everything is compartmentalised, clean, organised.

    The problem with this approach is that in the real world this design has a massive performance hit. Every thread context switch, every message passed needs CPU and/or memory overhead. If a processor were designed with internal massively parallel instruction streams and internal message passing this wouldn't be a problem; with current processors it is.
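
    To illustrate where that overhead lives, here is a conceptual sketch of a microkernel server loop (the port/message API below is invented for illustration, not Mach's real one):

        /* Every request and reply is a message through the kernel:
           a trap, a copy, and usually a context switch per hop. */
        struct message {
            int  reply_port;   /* where the answer goes */
            int  op;           /* requested operation */
            char payload[56];  /* small inline data */
        };

        extern void msg_receive(int port, struct message *m);    /* invented */
        extern void msg_send(int port, const struct message *m); /* invented */
        extern void handle(struct message *m);                   /* invented */

        void server_loop(int my_port)
        {
            struct message m;
            for (;;) {
                msg_receive(my_port, &m);   /* block until a client calls */
                handle(&m);                 /* e.g. service a filesystem read */
                msg_send(m.reply_port, &m); /* reply: another trap and copy */
            }
        }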

    What's wrong with Linux

    Linux grew from humble beginnings, small i386 machines with little memory and scant resources. Unfortunately, it's kept a lot of baggage from those days even though it's come a long way and been ported to many architectures.

    There are many things in the kernel which do things the x86 way and force the other architectures to munge the way the native system does things so that they look like the x86 way. When I last looked at the SPARC port, the memory management system had to jump through hoops to change the way the SPARC processors do VM so that it looked to the rest of the kernel like the way the x86 architecture does it... it was very inefficient. No doubt these problems haunt the other architectures too.

    There is also the problem of the "one size fits all" mentality.. it doesn't work!

    Sure, you can have a family of kernels, each aimed at a different niche with what common code they can share shared, but don't try to force the same kernel onto everything, otherwise everyone will lose out.

    Before anyone tries to label me as a supporter of some other system, saying that I'm only bashing their favourite because I have an axe to grind: I don't see any of the free or commercial operating systems today being able to be all things to all men... and I don't see any being so in the future either.

  • probable emerging standard (don't make me laugh - linux is anarchy - BSD is more standard since it is committee based)

    IBM seems to think so, according to recent articles [linux-mag.com] ("We think that Linux could do for applications what the Internet did for networking. That is, become the standard of choice for developing applications."). Why do you think all the Unix vendors are working on Linux binary compatibility? Sometimes a standard is just where everyone is, and everyone's going to Linux.

    existing high-quality implementation (have you ever looked at linux code? And it changes every two minutes)

    So some of it's not pretty, but in the end it works damn well for a lot of purposes. Remember, this is an industry that puts up with Windows NT.

    existing widespread hardware support (DVD? Hardware RAID even? ....)

    Hardware RAID is coming along ok. DVD is a special case. The important points are that Linux supports more hardware than anything but Windows now, and that support for new stuff will almost certainly get better.

    non-techies are comfortable with it (please ...)

    If you thought I meant comfortable using it, I was unclear. I meant comfortable about it being used in their organizations.

    Liberal licencing (GPL!!!)

    Look, BSD isn't in the picture here. If BSD advocates know what's good for them, they'll ride Linux's coattails. The alternatives to Linux in this context are Windows, Monterey, and other proprietary stuff.

  • by Malc ( 1751 ) on Wednesday October 18, 2000 @04:51AM (#697552)
    You have to admit RMS is rather wacky though. I subscribe to the NT Emacs mailing list. Not so long ago there was a discussion about the new version of the NT Emacs FAQ. It seems that RMS wanted the section about using Emacs with Opera pulled. Why? Because Opera isn't free. Fortunately there weren't too many upset people, as demand for the information is low, and it can be inferred by reading the work-arounds for other products. Most of the information in that FAQ concerns using Emacs with non-free products. In fact, it's all about using Emacs under *NT*, so perhaps there shouldn't be an FAQ at all. RMS is cracked.
  • There seems to be a widespread misconception. Linus has NOT to my knowledge refused to consider appropriate additions/changes to the kernel to better support big iron. He DID refuse to make any radical changes late in the 2.4.x development cycle so that it can hopefully be released this year. He also refused approaches that make half the code conditional based on defines (which would be very messy).

    The challenge is to find an elegant way to support big iron without sacrificing usability on small machines. Perhaps it can be managed by making the scheduler and VM more modular. That could also make the real time audio ('soft' real time) people happy.

    The HURD has an interesting architecture, and could be very promising on clustered machines. Cray loads Unicos/mk (microkernel) on T3E and probably other MPP machines already. Possibly, HURD could solve the crash on heavy I/O problem.

  • So what is Big Iron?

    Big iron is six-nines uptime (99.9999% availability, roughly 30 seconds of downtime a year). Big iron is fault tolerance and redundancy. Big iron is support. It has nothing to do with how much RAM or how big the disk is; those are secondary concerns.

    Really what mainframes provide is stability to the business, in areas where "tee hee, we need a reboot" isn't an option.

    Calum

  • by Anonymous Coward on Wednesday October 18, 2000 @02:42AM (#697560)
    imagine big vendors trying to get along with RMS...

    Vendor: "Hey Ricky darling! We're writing a killer App which will make everyone want to use Hurd.
    RMS: "Are you going to let everyone copy it for free?"
    Vendor: "No, but we're making the file format public domain. We're also explaining how the algorithm works in detail so that the GNU can make a free version if they want. We're supplying the source to anyone who wants to write a patch for it, or reverse engineer it. We just aren't allowing people to pirate copies of it."
    RMS: "You're all evil scum. You have no right to make a profit from your work by restricting others from making a profit"
    Vendor: "But they can make a profit. They just have to write their own version"
    RMS: "You should give it away for free"
    Vendor: "But we spent time and money developing this. We deserve a profit"
    RMS: "You don't deserve anything"
    Vendor: "Oh, sod this. We've changed our mind. We're going to make it secret closed source, with a restrictive license. Anyone who tries to compete we'll sue into oblivion"
  • by drfalken ( 43743 ) <drfalken&geekreader,com> on Wednesday October 18, 2000 @02:47AM (#697561) Homepage
    I think the real question is whether one Linux kernel can, or even should, fit all systems. Obviously the current scalability and flexibility of Linux to run on an enviably diverse range of platforms is impressive - but will it ever be the best OS for everyone?

    I guess you can please some of the people all of the time and all of the people some of the time, but no more - and frankly this makes sense.

    Big Iron is bound to have requirements that differ greatly from handheld computing. I'm pretty sure that this will continue. My fear is that code forks will reduce the impact that the Linux community is having in attracting applications developers. You want to make it as easy as possible for people to write once and run anywhere.

    And, fundamentally, isn't that really the point? After all, the end user won't know/care whether there's a special patch on the kernel powering their system, as long as it's possible (hopefully easy) to run the apps they like. That's when you get real flexibility and power out of having a common denominator.

    I think the stakeholders in this discussion (Linus, IBM etc) should be encouraged not to take their eyes off this goal.
  • by Dodger_ ( 51556 )
    HURD is nowhere near mature enough for what mainframe computers need.

    > But why don't they (IBM, SGI, et al) grab HURD and add to it all the things they find important for 'big iron' support?

    Because they've already written their own operating systems for these systems! These aren't companies looking for an OS for their system because they don't have one, they're looking, I assume, for a unix[-like] OS for the large number of programs already written. A free OS like Linux is perfect because they aren't paying per license and have access to all of the source code.

    What I don't understand is why IBM doesn't just take the Linux source code and adapt it to their computers. SGI already has their own Linux tree. Just because it isn't in Linus' tree doesn't mean it's not worth doing. How many people actually have these huge computers to use the changes IBM would make, anyway? That's one of the nice things about open source: Linus can take the patches he likes from IBM's tree and apply them to his own.

    There are already lots of kernel forks out there; however, everyone who HAS forked the kernel (SGI, embedded people, realtime people) is following the GPL and making their changes available. Get used to it already: forks are here and now.
  • by henley ( 29988 ) on Wednesday October 18, 2000 @02:50AM (#697564) Homepage

    The closest big-iron manufacturer's management gets to appropriate technical decision-making is in counting headlines.

    All these say "Linux good". *BSD is *possibly* just about on the bottom of their radar-screens, but probably not. You can guarantee that no-one in that chain has even heard of the HURD.

    Sadly, this applies to the techies too, although to a lesser extent. Never underestimate the herding mentality, especially within corporate organisational structures.

    The other side of this is that the technical decision makers - the architects if you will - have generally come up through the techie trenches but have left all that behind. They won't necessarily be current with the minutia of OS design or implementation, but will have a strong background and understand the basic facts.

    Key to this thinking is that industrial-strength OSes require years of development. Linux has had years of development (though not as many as big-iron OSes, by a factor of 3 or more), and has only recently passed this magic threshold where it can be treated seriously because it's been under live-use development for X years.

    These are all points in Linux's favour. HURD on the other hand, regardless of any architectural or technical merit, hasn't been discussed, hasn't been *in use* for X years, has a quiet/non-advocating user community (RMS notwithstanding), and is a break with everything these guys know about. HURD is therefore out of the running.

    Now, if this whole question had done s/HURD/*BSD/ (for some value of net|free|open), then it would be more appropriate. I still think the answer would be the same (rightly or wrongly, Linux is more popular and therefore better).

    Never underestimate the impact of inertia...

  • by Pflipp ( 130638 ) on Wednesday October 18, 2000 @02:50AM (#697565)
    The Hurd rocks, but it's not "rock solid". And it's not yet ported to anything but i386, according to what I've heard. I had trouble installing it: first my ne2000 card gave trouble, then something as yet unidentified went wrong.

    Added to this, it's a microkernel system. Big Iron doesn't need microkernels. They're a little bit slower by nature, but you get the advantage of a small, configurable "mostly user-space" kernel. That's pretty cool for palmtops and everyday PCs but not Big Iron, I'd say. That's why QNX doesn't run on mainframes.

    If Linux can't do the "big iron" thing, I think BSD would be a more serious alternative for now. NetBSD runs on almost every platform, so its kernel should be portable. And I hear rumours that BSD can do some things that Linux can't as of yet. (And vice versa, of course; that's not the issue. The issue is, maybe BSD will just be able to do the trick.)

    GNU's Not Unix: this sentence is more important than you may think.

    The HURD microkernel can be set up to listen to POSIX if you wish, but it can also listen to other stories. So here's a possibility to move away from ugly ol' POSIX in a decent manner, without having to completely abandon the ship like e.g. BeOS does, providing only POSIX as a portability layer.

    The HURD microkernel allows you to mount filesystems in a "new, radical" way which takes away the requirement of separate "/usr" dirs, etc. (The mounted fs will be mapped on top of the current one, IIRC.) The HURD microkernel allows for user-space device drivers and kernel modules, which I think can in the end make the "UNIX experience" a lot less hard.

    So I think that when it's finished, the HURD will be there for the common people and will supply more ease of use and programming freedom than Linux or BSD do now, because it's very flexible and can be set up very friendly towards the user (in my imagination, anyway). It will slowly move to liberate us from the less nice UNIX inheritance, as well.

    But I simply do not think that Big Iron is a target of the HURD.

    But I'm not very afraid of Linux losing Big Iron. All we need is an open source kernel (isn't Caldera opening UnixWare?) for it. I don't care if it's a fork, has a different name or implementation. That's what standards are good for. All the tools and gears are here already, so it would be "easy" for e.g. Debian to make a Big Iron port, just as they did a Hurd and FreeBSD port once.

    It's... It's...
  • by The Pim ( 140414 ) on Wednesday October 18, 2000 @02:52AM (#697567)
    Think about what an IBM sees in Linux.

    • massive momentum
    • probable emerging standard
    • huge (unpaid by them) developer community
    • existing high-quality implementation
    • existing widespread hardware support
    • easy to find techies who know it
    • non-techies are comfortable with it
    • liberal licensing

    Now which of these does HURD offer?

  • The HURD is based upon the microkernel technology which was in vogue in the Computer Science community about 10 years ago. Time has moved on: people have taken the ideas from this technology and incorporated the best bits into the semi-monolithic kernels of today, and microkernels are now looked upon as a relic.

    Au contraire. Pretty much every new OS of the past ten years uses a microkernel. Look at BeOS, QNX, JavaOS (built on top of Chorus), Windows NT when it was still relatively stable (i.e. pre-version 4.0), Windows CE and Palm OS.

    You're thinking of the microkernels of ten years ago which weren't very "micro" (e.g. Windows NT and Mach).

    The problem with this approach is that in the real world this design has a massive performance hit.

    It's not nearly as massive as you might think, and you can win it back in other ways.

    There are a lot of things in a modern OS which count as a "massive performance hit". Device drivers, for example, are there to abstract away hardware details. It would be faster if we didn't use device drivers at all and just talked to the raw hardware, because we'd eliminate a layer of glue code. But we think this is worth the price because the layers above it can be coded more simply and thus with fewer bugs. You lose in one place and win it back in another place.

    Virtual memory incurs a massive performance hit, too. But you win it back because user programs are freed from the responsibility of managing what is in core and what isn't. Simpler algorithms, more efficient algorithms, fewer bugs. You win the performance hit back.

    Incidentally, the same argument comes up occasionally about garbage collection, but that's another thread.

    There is also the problem of the "one size fits all" mentality.. it doesn't work!

    Absolutely. That's why modern microkernels work. You can mix and match different bits to make the OS that you want.

  • LaBola wrote:

    HURD is THE future, I mean that this technology is the future; at some point Linux itself will be microkernelized or die in the "big-iron" scene.

    perlmonky wrote:

    Microkernel is technically more "correct". Macrokernel WORKS.

    If you calm down a bit, you'll see that these statements are entirely compatible. Linux technology is the present. I don't think anyone here seriously doubts that. I also don't think that anyone seriously doubts that the open source movement should be producing good quality software for the present, like Linux. But it's ridiculous to think that Linux in its present form will serve the industry forever, any more than the great OSes of the past did (e.g. VMS, which is finally being phased out as we speak).

    We (and I mean both the computer industry and the open source movement) should be looking to the future as well as producing good code for the present. There is a more than twenty-year gap between Unix and Linux. If we want to avoid a similar gap in the future, we need to be thinking about the future now as well as about the present. That's why statements like "this technology is the future" need to be said.

    Think of Linux as the Empire and Hurd as the Foundation, if that helps. :-)

  • The problem with this approach is that in the real world this design has a massive performance hit.

    This may be true in theory. But have you ever played with BeOS? It's the most ridiculously responsive, zippiest OS I've EVER worked with! I guess the theoretical performance hit can be worked around, given enough ingenuity.

  • I'm so sorry that all these pseudo-geeks post answers without an idea of what the HURD is.

    HURD is not another kernel like Linux or BSD.
    HURD is not the reaction of RMS to the Linux kernel.

    HURD IS a microkernel-based architecture; did you say 64-proc machines? (I hope you all know what a microkernel is.)
    HURD IS GPLd; I'm not so idealist as to be a BSD geek.
    HURD IS older than Linux (ok, it's also harder at this moment, but if this is a problem you can always reinstall Windoze).
    Moreover, HURD is THE future, I mean that this technology is the future; at some point Linux itself will be microkernelized or die in the "big-iron" scene.

  • can one linux kernel fit all?

    Of course not. Who cares what kernel is running on the system? It's the applications that matter. Applications depend on APIs, not on kernel implementations. I don't expect my mobile phone to run the same kernel as a mainframe. If they do run the same kernel, one of them is running less than optimally, because these two types of computers have very different, most likely conflicting requirements (think of real-time vs. multiprocessing requirements, for instance).
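
    For instance, a trivial POSIX program neither knows nor cares which kernel sits underneath, so long as the API holds (a minimal sketch; the path is just an example):

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[64];
            int fd = open("/etc/hostname", O_RDONLY); /* illustrative path */
            if (fd < 0) { perror("open"); return 1; }
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
            close(fd);
            return 0;
        }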

    The Linux community needs to learn to deal with this fact of life. X Windows is not the best solution on all platforms. The Linux kernel is not the best possible solution on all possible platforms.

Kleeneness is next to Godelness.
