Could Linux Become A Microkernel?

Posted by Cliff
from the can-it-be-done dept.
Kris Warkentin writes: "This question is not entirely intended to start a debate about the pros and cons of microkernels vs. monolithic ones. What I would really like to know, however, is how _feasible_ it would be to convert the Linux kernel to a microkernel. I was looking at how the QNX kernel offers only core services like threading, IPC, process creation, memory management, initial interrupt handling, etc. Everything else functions as a process within its own memory space. Linux can be configured so that it is much like this, with other things (filesystems, etc.) compiled as modules. The key difference is that all the modules are operating in kernel space. So, the question is, how difficult do you think it would be to devise a communication protocol to let modules function outside of kernel space and merely talk to the kernel? What would be the costs and benefits? Would it be possible to have both types in the same source tree? (say, as a compile option)"
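The communication protocol the submitter asks about can be sketched in miniature: instead of the kernel calling a module's functions directly in kernel space, each request becomes a message to a server process, and the reply comes back the same way. The sketch below is a toy simulation in Python with an invented message format and invented names (QNX's real primitives are MsgSend/MsgReceive); the per-request round trip is exactly the cost being asked about.

```python
# Toy sketch of a user-space "filesystem server" talked to via messages,
# rather than a module linked into kernel space. Message format and all
# names here are invented for illustration.
import multiprocessing as mp

def fs_server(requests, replies):
    """A 'filesystem' running as its own process, outside the 'kernel'."""
    files = {"/etc/motd": b"hello from user space\n"}
    while True:
        op, arg = requests.get()      # block waiting for the next message
        if op == "read":
            replies.put(files.get(arg, b""))
        elif op == "shutdown":
            break

def read_via_ipc(path):
    """The 'kernel' side: a read() is now an IPC round trip."""
    requests, replies = mp.Queue(), mp.Queue()
    server = mp.Process(target=fs_server, args=(requests, replies))
    server.start()
    requests.put(("read", path))      # send request message
    data = replies.get()              # wait for the reply message
    requests.put(("shutdown", None))
    server.join()
    return data

if __name__ == "__main__":
    print(read_via_ipc("/etc/motd").decode(), end="")
```

The two queue operations per request stand in for the context switches and copies a real microkernel pays on every call, which is the performance argument against this design; the payoff is that a crash in fs_server cannot corrupt the "kernel".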
  • If I recall correctly, the macro vs. micro kernel decision was one of the first steps in the design of Linux ... to re-implement it would be to restart from step 1.

    Now the HURD is a micro-kernel architecture, but is unfortunately still in alpha development. This may be more what you're looking for.

    ( and yes, NT was originally designed as a micro-kernel, but has since become the 60-foot-microkernel from hell ! )

  • by sohp (22984)
    Hey, go back and brush up on your History of Linux. Linus specifically argued the whys and hows of microkernels with Tanenbaum (here [dartmouth.edu]), and he's repeated in various interviews (like this one with Yamagata [tlug.gr.jp]) his reasons for not going with a microkernel.
  • I think it would be far better to turn the monolithic kernel into a microwave. I'm quite sure that doing so would dramatically increase Linux's popularity. I mean, who wouldn't want their computer to have the ability to microwave food or AOL CDs without additional hardware?

    Skip the desktop, put Linux in the kitchen!
  • a microkernel may not be what Linux was planned to be, but considering the "bloat" that is accumulating in the kernel, and the inherent limitations of monolithic kernels, it may be wise to create a Linux microkernel which would be developed concurrently with the present kernel.

    A microkernel could be invaluable in the emerging world of embedded Linux distros and competition over consumer information appliances.
    --
    Finish the project. We'll buy you a new family.
  • Also...

    NT 4.0 introduced the video system into kernel space. This helped speed the video system -- albeit at the cost of "stability."

    You may recall the "use of a third-party driver may cause problems" message when installing that shiny new video card in NT...

    -sid
  • Think about it this way: how does the QNX kernel load any other module (fs included) in the first place?

    My guess: a microkernel has the most basic ability to mount the root fs itself.

    Oh, and you just damn well make sure that the vm server doesn't crash :-P

    Seriously though, how often does the Linux MM code fail? Core kernel code like this is fun: it gets a lot of attention, and many eyes make bugs shallow.

  • In Linux, at the moment (and hopefully not forever), just one CPU can be in the (micro-)kernel at a time.

    Are you sure? - I don't think so.

    Access to sensitive areas of code is protected by locking the kernel (shockingly, using calls to the functions 'lock_kernel();' and 'unlock_kernel();' :-)

    As I understand it, there are many areas of the kernel that are not brilliantly written for SMP (can you say IP stacks :-). Kernel locks are held when they are not needed, so in practice what you say may often be true. But this is changing, and is not the nature of the way the kernel works.
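The locking pattern this comment describes can be sketched as a toy: one global "big kernel lock" serializes every thread that enters "kernel" code, which keeps shared state correct but makes extra CPUs wait instead of work. Python threads stand in for CPUs here; the real lock_kernel()/unlock_kernel() of the 2.2/2.4 era was recursive and released when a process slept, which this sketch ignores.

```python
# Toy simulation of the "big kernel lock": one global lock that only one
# thread ("CPU") may hold while running kernel-side code.
import threading

big_kernel_lock = threading.Lock()
counter = 0                         # stands in for shared kernel state

def syscall_body(n):
    global counter
    for _ in range(n):
        with big_kernel_lock:       # lock_kernel()
            counter += 1            # touch shared kernel state
        # lock released on exit     # unlock_kernel()

threads = [threading.Thread(target=syscall_body, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # correct total, but all four "CPUs" were serialized
```

The count comes out right precisely because the lock serializes everyone, which is also why SMP scaling suffers until locks are made finer-grained, as the comment notes is happening.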

    cheers
    G

  • Not that size is the only concern (so my wife keeps telling me),

    It's not how big your OS kernel is, it's what you do with it that matters ;-)

    Hurd [is] now obsolete too.

    This is really bad news, since it hasn't even hit a 1.0 release yet :-)
    Why do you say this? Not being argumentative - just interested.

    I just want to highlight this link [mega-tokyo.com] from the previous post. I hadn't seen this site before, and I really wish I had found it a year ago when I started getting into OS coding.
    If you are at all interested in OS coding, check it out.

    cheers,
    G

  • by roryi (84742)
    ...but if that's sufficient to make NT a microkernel, then, well, err, umm, Linux - or {Free,Net,Open}BSD, or Solaris, or HP-UX, or AIX, or Digital UNIX, or... - are also microkernels if they're running X; in systems running X, the rendering code runs in "a process within its own memory space", i.e. the X server, in user mode.

    Um, Digital UNIX (now Tru64, formerly OSF/1) is a true microkernel-based OS. Just about everything within the "kernel" can be reconfigured on the fly, and each sub-system is protected from the others.

    I know there's talk of Compaq opening (or, better still, *freeing*) the Tru64 base source. Even without the "crown jewels" (LSM/AdvFS/TruCluster etc.), the advanced microkernel architecture would be a very valuable contribution to the community - Tru64 is probably the most "comfortable" proper UNIX, and is certainly one of the most advanced in terms of features.

  • NT is.
    Not in the sense of (from the original article)
    I was looking at how the QNX kernel offers only core services like threading, IPC, process creation, memory management, initial interrupt handling, etc. Everything else functions as a process within its own memory space.
    it's not - in NT, file systems, drivers for devices such as disk and network controllers, and network protocol implementations run in kernel mode in a fashion similar to the way they function in various UNIX systems.

    I believe NT *was* a microkernel (3.51), even the GUI ran in user space (and networking would continue even if the GUI crashed)

    /*
    *Not a Sermon, Just a Thought
    */
  • Check out SawMill Linux [ibm.com], which is a multi-server version of Linux on the L4 microkernel.
  • Bzzt. MkLinux is also available for HP PA-RISC and can also be jammed into x86 systems (replacing the kernel in a RedHat install, for example). David
  • NT is.

    Not in the sense of (from the original article)

    I was looking at how the QNX kernel offers only core services like threading, IPC, process creation, memory management, initial interrupt handling, etc. Everything else functions as a process within its own memory space.

    it's not - in NT, file systems, drivers for devices such as disk and network controllers, and network protocol implementations run in kernel mode in a fashion similar to the way they function in various UNIX systems.

    Some of the Win32 semantics are implemented in the user-mode Win32 subsystem process, but some Win32 calls just get mapped into native NT system calls by the Win32 library.

  • I'm interested in microkernels for high reliability systems. One of the problems with most operating systems is that a device driver or major subsystem, such as networking or graphics, can crash the kernel. What if each device driver and major subsystem ran in its own address space? The address space would be restricted to the module's code, data and the address space associated with the I/O device. If the module crashed, the microkernel could recover by reloading and restarting the module.
  • First off, let's face it guys:

    • 95% of applications can be served by the monolithic design
    • Linux is built on a KISS principle -- microkernel conflicts with that
    • As Linus has shown, microkernels can have their own weaknesses and performance issues (and this argument dates back to 1992!)
    • Re-inventing the wheel -- other things are more important
    • Issues, issues, issues -- new design, new drivers, etc...
    • If it ain't broke, don't fix it, and ...
    • Very important, you can run a microkernel OS underneath a monolithic one!

    This is where RT/Linux, and other Linux pre-emptive microkernels come in. Advantages:

    • It addresses hard real-time
    • No re-inventing the wheel, you only implement what you need in real-time in the microkernel (e.g., drivers, etc...)
    • You still have full-blown Linux, which runs as a non-RT task in the microkernel
    • You can address, change and do all the little things you need, without having to touch the whole kernel and compatibility with modules you couldn't care less about.

    You get the best of both worlds. Minimal redesign, maximum reuse. The whole microkernel argument is old, very old. Linus has gotten Linux about as good as soft real-time can get in 2.4. RT/Linux is the microkernel that addresses hard real-time and other size and response-time issues. And it is a microkernel, running the main Linux kernel as a regular process. A perfect solution.

    In a nutshell, it's impossible to get Linux to do everything without major modifications. There will have to be non-direct kernel implementations to do those unique applications. I really don't see any other way to do it. And besides, I don't see QNX, VxWorks, nor any other RTOS being as flexible as Linux is at doing many other things.

    -- Bryan "TheBS" Smith

  • QNX is realtime and the Be kernel lacks 80% of the function and features of a Linux kernel. It's not exactly a fair comparison or even a valid one.

    The open-source L4 is called Fiasco. I don't have the link handy, but you can find it on Google, and I think it's on freshmeat too.

  • Are both "microkernel based Linux."

    I'm not sure if both are still worked on. MkLinux was only ever supported on the Mac, but supposedly you could compile it and run it on x86. I've never done it though.

    Taking MkLinux and putting GnuMach under it (I have no idea how involved that is - probably very) seems like it could be a quick way to get a Hurd-lite or something similar running. It might be an interesting experiment.

  • QNX is realtime [...]

    ...as is BeOS, and as should every desktop and server operating system, but that's another rant...

    [...] and the Be kernel lacks 80% of the function and features of a Linux kernel. It's not exactly a fair comparison or even a valid one.

    But that's the point! You don't need all that bloat in the kernel. You might argue that unlike some OSes we won't mention, you don't need your GUI in the kernel, and I'd say you were right. But you don't need USB support, file system drivers, device drivers, networking or swapping in there either. That 80% (in the case of QNX it's probably more like 95%) can be implemented in user land.

    Or, to turn the argument around: Do you really want to have to reboot to install a new networking protocol? Is it any different from having to reboot to install an application?

  • by jas79 (196511)
    there is a version of Linux running on a microkernel, but it is for the Apple.
    mklinux [mklinux.org]
  • Linux really should become a microkernel, since it is easily outperformed on SMP systems. In Linux, at the moment (and hopefully not forever), just one CPU can be in the (micro-)kernel at a time. And besides interrupts, the kernel also handles the other, less essential stuff like networking, filesystems, etc., which could all be done in user space. For example, even networking and I/O could be done fully from ring 3 via IOPL, which would then allow (rather than disallow) the task access to all currently unused I/O ports. The CPU that serves the task would then do the I/O itself, which is why CPU0, which handles the IRQs, would branch to the other CPU serving the I/O task (say, IDE PIO) whenever an IRQ occurs that has to be handled by it.

    Microkernel or not, the current kernel image should stay: I _hate_ those large directories of "device drivers" (which, strictly speaking, don't exist for Linux), but I _do like_ being able to choose between building something into the kernel image or as a module. I SAY that "built into the bzImage" does NOT automatically mean "included in the (micro-)kernel"!!!

    Tnx, Greetings
    Mirabilos(TM) openprojects:#icewm
  • by Guy Harris (3803) <guy@alum.mit.edu> on Sunday August 06, 2000 @11:51AM (#876870)
    I believe NT *was* a microkernel (3.51)

    No. In NT 3.5.x, just as in 4.0 and, as far as I know, 5.0^H^H^HW2K, drivers for devices such as disk and network controllers, and network protocol implementations run in kernel mode in a fashion similar to the way they function in various UNIX systems.

    even the GUI ran in user space (and networking would continue even if the GUI crashed)

    Yes, rendering was done in 3.x by sending messages to the Win32 subsystem process...

    ...but if that's sufficient to make NT a microkernel, then, well, err, umm, Linux - or {Free,Net,Open}BSD, or Solaris, or HP-UX, or AIX, or Digital UNIX, or... - are also microkernels if they're running X; in systems running X, the rendering code runs in "a process within its own memory space", i.e. the X server, in user mode.

    BTW, "the GUI" runs, in part, in user mode even on NT 4 - the low-level rendering is done in kernel drivers, but the toolkit - the equivalent of Motif or GTK+ or Qt or... - lives, as far as I know, in user32.dll, which is a library that calls routines in gdi32.dll to get stuff rendered.

    user32.dll is, as far as I know, just user-mode library code, as is gdi32.dll; on 3.x, gdi32.dll sent messages to the Win32 subsystem process, and, in 4.x and later, it goes through the kernel driver in at least some cases. (The fact that it's a shared library means that binaries built for 3.x should just continue to work - the ABI for drawing stuff on the screen is, in effect, a bunch of "call routine XXX in this library, with these arguments" items, and the way routine XXX accomplishes that can change from release to release without affecting programs that don't go around the back of the library.)

    (user32.dll probably roughly corresponds to your toolkit library or libraries in X, and gdi32.dll probably roughly corresponds to Xlib, although there may be differences.)

  • by Pseudonym (62607) on Sunday August 06, 2000 @05:54PM (#876871)

    Others have mentioned MkLinux [mklinux.org], which is a version of Linux which runs on top of the Mach microkernel. By modern standards, Mach isn't so "micro". On my Hurd [hurd.org] partition, the gnumach executable weighs in at 726KB compressed, and about 1.6MB uncompressed. Compare with ntoskrnl.exe, which is 907KB on NT 4.0 Enterprise Server. Both of these are comparable with the size of an average Linux or BSD monolithic kernel, which sits around the megabyte mark uncompressed.

    The QNX kernel, on the other hand, is something like 8KB in size, which fits in the cache of a 486. Even the BeOS kernel is only something like 78KB compressed. Not that size is the only concern (so my wife keeps telling me), but in general, the less code that runs in the kernel, the easier it is to say something about how secure it is - and the easier it is to change things while the system is running.

    I hate to sound like Andrew Tanenbaum [cs.vu.nl], but MkLinux and the Hurd are now obsolete too. Mach belongs to the old school of microkernels which were popular 10-15 years ago, but with the benefit of hindsight, we know better. Nowadays, for example, we know that you don't even need to do VM swapping inside the kernel.

    There are some projects of note which may result in a product which is cleaner and better designed than Linux. Here are some suggestions:

    • chaos [chaosdev.org], which has a very clean, pragmatic design without sacrificing its microkernel philosophy
    • VSTa [vsta.org], which is loosely based on Plan9 and QNX
    • There's one out there somewhere which is an Open Source re-implementation of L4. Can anyone provide a link?
    • Or you could always roll your own [mega-tokyo.com]...
