Distributed Operating Systems? 204
ayejay asks: "Are there any models/designs for a totally distributed operating system, possibly utilizing AI to learn patterns of use, resource need, and anything else that might be relevant? What -would- be relevant to such a thing? Given Napster and all the load balancing kernel enhancements and SETI@home type programs out there, it seems the idea is ready to be developed into a feasible paradigm. What do you think some of the major concerns/design issues are? I'm talking about nuts and bolts..." Now I'm all for distributed applications, but applying such paradigms to something as critical as the operating system seems to be taking the issue a bit too far. Would creating a 'distributed' operating system gain us any advantage over what we are currently familiar with?
There are lots.... (Score:2)
NeXTStep? (Score:1)
Suns... Plan 9 (Score:4)
Plan 9 was designed with distribution in mind from the start. Check it out!
check out mosix (Score:1)
Plan 9? (Score:1)
Plan 9 [bell-labs.com]
Chris Williams
Uhm... (Score:1)
But that wouldn't be so much different from a cluster I guess.
Don't quote me on that tho
Problem with current programs (Score:1)
Mosix (Score:5)
It provides preemptive process migration among cluster members. If you log into your "home node" and start a process, it will get migrated around the cluster according to its memory and CPU needs. Take a look at their remote monitor [huji.ac.il].
Currently it's Intel-only, but a mixed-architecture version would be sweet. Imagine a cluster of Intel, Alpha, PPC and SPARC CPUs such that you log into any of them, run any Linux binary, and the loader cranks it up on the appropriate machine for you, transparently...
From the website:
---- ----
building the real 'net' (Score:1)
I foresee intelligent programs wandering the net, seeking out watering holes of processing power and memory resources. I see people setting aside a percentage of their computer for a distributed community effort.
Eventually, the use of the new net's processing power would be transparent to the user. People will barter processing power for processing power, give away to friends, sell to others, etc.
- coyo
Inevitable? (Score:1)
But then again, what does distributed really mean? Does it mean working across multiple workstations? Or processors? I guess what is meant is a dynamic distributed OS?
Maybe it depends on the degree of distribution and what parts are actually distributed?
We should ask Jane!
Jane? Oops, forgot to put in my earplug...
-Sleen
Google is your friend. (Score:4)
A listing of the major OS research projects involving distributed operating systems [arizona.edu]
IMHO (Score:1)
Interesting idea, possible dilemma (Score:1)
However, with the recent (ongoing) problem of script kiddies and general morons, the whole project could be foiled, statistics-wise. Yet this is a problem for everything, so perhaps it's not even worth considering.
AI (Score:2)
Be wary.... (Score:1)
1.5 processors... (Score:1)
How I kinda envision it (Score:1)
My thoughts on the way forward would be to use something like Beowulf to build a huge 120-node distributed system and then have each of the nodes also run as an X terminal.
Perhaps this is too distributed and we should have a slightly thicker client, but conceptually I'd love to have it so that when I want to compress a short MPEG clip, I suddenly harness 100 times more CPU power than is in my box.
However implementing it is more of a nightmare and potentially not very worthwhile since comparatively few end user tasks lend themselves to distributed processing.
Re:There are lots.... (Score:1)
Google, anyone? (Score:1)
Some good links:
Why not (Score:1)
Java (Score:1)
He managed to make a Java applet hosted off of server A such that when computer B connected to it (through the browser), A would assign it a computation task. A would leave a listener open for connections and would shell out workloads to be computed and returned.
Now I know it's not at all efficient, but since he could take a PowerMac 6100, an Intergraph NT box, a 486 Debian box, and a Solaris workstation, and have them all compute, this was pretty damn impressive. That's where Java's platform independence really pays off.
As for what he did with it, I believe his test run involved some Pi calculations, and it apparently ran quite smoothly.
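The pattern described is, at heart, a central work queue: the server hands out independent work units and collects results, so clients on any platform only need to speak one tiny protocol. A minimal sketch (in Python rather than Java; all names are invented for illustration):

```python
# Hypothetical sketch of the coordinator pattern described above.
# A central server hands out work units and collects the answers.

class WorkServer:
    """Holds pending work units and gathers computed results."""

    def __init__(self, tasks):
        self.pending = list(tasks)   # work not yet handed out
        self.results = {}            # task -> computed result

    def get_task(self):
        """Called by a connecting client; returns a work unit or None."""
        return self.pending.pop() if self.pending else None

    def submit_result(self, task, result):
        self.results[task] = result


def run_client(server, compute):
    """The loop a client would run: fetch, compute, report, repeat."""
    while (task := server.get_task()) is not None:
        server.submit_result(task, compute(task))


# Four "machines" (simulated sequentially here) computing squares.
server = WorkServer(tasks=range(8))
for _ in range(4):
    run_client(server, compute=lambda n: n * n)
```

In the real applet version, `get_task` and `submit_result` would of course be network calls rather than method calls, but the division of labor is the same.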
Just throwing my two euros in the pot.
AI and opaque user interfaces (Score:1)
What makes an OS easy to use is simplicity and obviousness. It is far, far better for an OS to be stupid in an obvious way than clever in a non-obvious way. DOS, despite its clunky user interface, was often easier to use than today's Windows boxes, even for the novice users I observed, because when you told it to do something, it did it. And yes, figuring out how to make it do something was often a pain, but once you did, it was all rather simple. Too often today, the OS makes things difficult because it thinks it knows what you want to do. A classic example on Windows is its behavior when copying a hard drive: it tends to move the shortcuts because it thinks you are moving the OS. Mucho pain when you are really just backing up or moving hard drives.
An "AI"-controlled OS just sounds to me like more of this, and even worse. Suddenly there's this entity out there, deciding how resource usage should go. Yeah, perhaps it will often get it right without me intervening. But when it gets it wrong, I get lots of frustrating hours trying to get the damn thing to see it my way. Better to have something simple, stupid and clear that I can easily direct to do what is correct. Having an AI that can do quasi-intelligent things in average cases is not enough. Until the AI is smarter than me, I don't want it controlling my OS.
Of course there would be an advantage (Score:3)
Seti@Home has to be able to route all its necessary functions and information around its network. Why is that necessary? A distributed operating system should be able to handle the tasks of distribution for the applications. It's almost as if every distributed app developer has to re-invent the wheel every time he/she wants to create such an app. Why do you think there aren't many distributed apps out there? They're too bloody hard to code. Joe Schmoe VB developer cannot create distributed apps because like as not, he knows very little about networking. Most developers know squat about networking (keep in mind that most developers don't read
Soon, every appliance in your abode is going to have a processor in it. That processor may be much more powerful than what is really necessary to operate the appliance, especially if a web browser is built into your fridge. The processor has to be able to run the browser, so let's say it's Pentium class. Do you really need a Pentium to measure the temperature of the fridge and turn on the compressor? No. So whenever the browser is not being used, clock cycles are wasted.
I see no reason why future homes would need the standard PC. They could use the collective power of all the processors in all of the appliances in the home to present a PC-type interface to the user. It would also lend a certain amount of fault tolerance: many functions would be duplicated on the home network, and data loss and downtime would be minimal, if any.
QNX (Score:1)
Distributed Computing (Score:1)
Useless (Score:1)
I happen to be in the other 1%. If I could write multi-threaded applications that automatically distributed across a network of computers, I'd be very happy. Zero-effort solutions like this are the way of the future. Now, it doesn't necessarily have to be the OS that has all of these smarts - the compiler could take a big portion of that.
Anyway - good idea for us people who manage 20 machines and batch-process hours of work on them. But, for your average John Q. Citizen, totally useless.
Please define (Score:1)
I can think of two possible interpretations.
Network management: You want to check disk space on ALL your fileservers, or create a user that can log in from anywhere, or something else network-wide. There are products that handle all this stuff, although it's arguable whether a "distributed operating system" could do it better.
Seamless multiprocessing: You want to submit jobs to "the system" so that unrelated jobs can go fast and related jobs can go fast AND communicate easily. That's a worthy goal, but isn't a "distributed operating system" overkill? Wouldn't a job control system work just as well? Or even, if you don't mind spending the money, an SMP machine?
I suspect none of this is what the poster had in mind. Probably s/he (or Cliff) is just playing buzzword bingo with us. Watch for upcoming Ask
--
Re:AI (Score:1)
If the Internet were alive and intelligent, it would commit suicide [stileproject.com].
Linux? (Score:1)
Right now we already have processing distributed over multiple processors. I guess what you're asking about is complete distribution between separate machines. Is this not what Beowulf tries to do?
Either way, multiple shareloading over the internet should come around eventually (maybe this is more on subject). The only problem with this at this point is that sending bits and bytes back and forth over the net takes more time and CPU power than just doing it locally. It's kinda sci-fi right now. You know, Star Trek ships have two computer cores co-operating full time and a backup core. Hell, even the twenty year old space shuttle has two computers effectively working together.
So what's the question? =P
Re:Problem with current programs (Score:1)
Network speed must be an order of magnitude slower than bus speed.
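That gap is exactly why distribution only pays off for big jobs. A back-of-envelope check (the numbers below are illustrative assumptions, not measurements): shipping work to a remote CPU wins only when the compute time saved exceeds the time spent moving the data over the much slower network.

```python
# Break-even arithmetic for distributing a job (illustrative numbers only).

def worth_distributing(data_bytes, local_secs, remote_secs, net_bytes_per_sec):
    """True if remote compute time plus transfer time beats local time."""
    transfer = data_bytes / net_bytes_per_sec
    return remote_secs + transfer < local_secs

# A 100 MB job taking 60 s locally, 30 s remotely, over ~10 MB/s Ethernet:
# 10 s of transfer still leaves a clear win.
print(worth_distributing(100e6, 60.0, 30.0, 10e6))   # True

# The same data for a job that only takes 12 s locally: transfer eats the gain.
print(worth_distributing(100e6, 12.0, 6.0, 10e6))    # False
```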
Re:building the real 'net' (Score:1)
--
Amoeba (Score:2)
Unclear (Score:2)
If you just want better clustering, shared drives, that sort of stuff, check out Mosix or LinuxNOW, as many other people have already pointed out.
If you want the kernel or other fundamental, low-level parts of the operating system to be distributed, then you have a fundamentally bad idea and don't have a clue what you're talking about. The kernel is designed to be low-level and small. It can't be distributed because it is inherently specific to the machine, and it is small enough that the performance loss from distributing it would be crippling for time-critical kernel-space functions. If you want system commands like the shell and things in
--
Hello? Amoeba anyone? (Score:1)
Amoeba is to distributed OS as Mach is to microkernel OS.
another good project (Score:1)
Adam Beberg of distributed.net [distributed.net] fame has been working hard on a distributed, encrypted system named Cosm.
Check it out here:
http://cosm.mithral.com/ [mithral.com]
Excuse me, distributed? (Score:2)
Exactly what do you mean by "distributed"? What about the OS will make it "distributed"? I don't understand what you're asking... any multi-CPU system is already "distributed" -- even more so in cases where the CPUs are in different geographic locations (i.e. a transputer, or "cluster"). [And Solaris has had this ability in its "HA" versions for several years. I've seen it in use linking two E4500s 12 miles apart.]
What do you distribute? (Score:1)
Now, this isn't really applicable for an operating system. If you are dealing with a UMP/SMP-type system, then yes, since resources are shared, you can actually distribute parts of the actual operating system. However, with a true distributed system, resources are not shared (in the real sense of the word).
So, it doesn't even make sense to distribute a process whose very purpose is resource management. What do you distribute? Memory management? I/O? It just isn't practical.
Now, you might instead think of a "distributed" OS as one which features "plug and play" distribution. So, it might have infrastructure in place to handle distribution (fault tolerance, networking, etc.). However, this really comes back to the application level. Napster etc. are really no more than an infrastructure layer on top of the OS.
Now, you could probably start bundling these tools with the OS; however, can it really be said that the OS is "distributed"? For example, is emacs part of the *nix operating system (good god, every emacs user everywhere hates me now, including myself)? It's just an incredibly useful "tool".
hype (Score:1)
if it ain't broke, then fix it 'till it is!
tao-group "elite" os (Score:1)
-Daniel
Need to look at where this would be useful (Score:1)
What we would need is a Seti@home type distribution scheme but with a scheduling scheme like a supercomputer.
Everyone would submit jobs to be executed on the distributed machine, and the central server would coordinate and pass out calculations to all the machines. This would be useful to everyone from universities with idling (but powerful) workstations sitting in their labs to even places like NASA, which occasionally needs more computing power than it has (searching for a signal from Mars).
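The submit-and-schedule scheme described above could be sketched as a central priority queue that idle machines pull from (a toy illustration; the class and job names are invented):

```python
# Hypothetical central scheduler: jobs are submitted with a priority,
# and idle machines pull the most urgent job, supercomputer-queue style.
import heapq
import itertools

class CentralScheduler:
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # stable order for equal priorities

    def submit(self, job, priority=0):
        """Lower priority number = more urgent."""
        heapq.heappush(self._heap, (priority, next(self._tie), job))

    def next_job(self):
        """Called by an idle workstation; returns the most urgent job."""
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = CentralScheduler()
sched.submit("render-frame", priority=5)
sched.submit("mars-signal-search", priority=0)  # urgent work jumps the queue
print(sched.next_job())  # mars-signal-search
```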
And btw, fortran SMP works rather well for beowulf type systems. Just code like normal, compile and run. -Kashent
I don't see the usage. (Score:1)
However when the computational need is large, ie rc5/seti/etc, then distribution becomes important. It looks like a user space issue, not an operating system issue.
A single distributed client could handle multiple tasks and be directed to work on different problems at different times in a distributed/cooperative manner. It looks like a user space problem.
How do you handle people who don't want to share their disk space or memory?
Maybe I misunderstand what a distributed operating system is.
Distributed Operating Systems (Score:1)
It talks about Amoeba [cs.vu.nl], the V-System [uni-bielefeld.de] (I couldn't find a good web page), and Chorus [berkeley.edu].
It's a textbook so it also has a lot of theory about general distributed OS's.
Distributed? (Score:1)
SkyNet anyone?
Anxiously awaiting those HK's
jream
GnuSpace (Score:2)
Before we go rewiring the whole frikin' OS, let's try it in applications first!
http://sourceforge.net/project/?group_id=7829
From the Link:
"GnuSpace" is an advanced Gnutella client that lets users share both files and computation time. Unlike Gnutella, GnuSpace combines thousands of PCs' unused CPU power into one coherent power source to fuel super services that benefit all.
Linux NOW? (Score:1)
Re:AI (Score:1)
Re:At least give a good reason. (Score:1)
Yeah, the real reason that C sucks is things like:
if( i = 4 )
    DoFoobar();
else
    ReportError();
which compiles and runs perfectly.
Re:There are lots.... (Score:1)
Re:Inevitable? (Score:1)
>Jane? Oops, forgot to put in my earplug...
She doesn't want to talk to you now... she feels neglected...
--
Re:Excuse me, distributed? (Score:1)
Kinda stretching the definition of "Client/server" there.
Programming for distributed systems. (Score:3)
Seti@Home has to be able to route all its necessary functions and information around its network. Why is that necessary? A distributed operating system should be able to handle the tasks of distribution for the applications. It's almost as if every distributed app developer has to re-invent the wheel every time he/she wants to create such an app.
You are already running a distributed application whenever you run a threaded application on an SMP box. Writing applications for a distributed operating system is no easier and no harder than this.
You _will_ have some programming overhead no matter what - by nature, a distributed application needs to have multiple pieces running concurrently, and so has to manage synchronization and communication between these parts.
The good news is that everyone already understands multiple processes and threads, so we already have a well-established programming model for it.
Now, in the real world, client/server computing will always tend to have an advantage for wide deployment, as you can run it on heterogeneous platforms (a la SETI@home). For small deployments you're looking at either a high-processor-count SMP machine or a cluster, depending on the degree of coupling, and those are already well understood.
So, I'm a bit puzzled as to what you think needs to be developed. It looks like we have distributed computing already.
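The "same model as threads on an SMP box" point can be made concrete with a tiny fork/join example (plain Python threads standing in for the SMP case; a distributed OS would simply run these workers on other machines instead):

```python
# Fork/join over independent chunks: the programming model is identical
# whether the workers are threads on one box or processes on many.
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    """An independent piece of work with no shared mutable state."""
    return sum(x * x for x in chunk)

chunks = [range(i, i + 1000) for i in range(0, 4000, 1000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(crunch, chunks))  # synchronization handled for us

total = sum(partial)
print(total == sum(x * x for x in range(4000)))  # True
```

The only thing that changes in the distributed case is the cost of communication, which is exactly why the coupling question in the paragraph above matters.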
Re:At least give a good reason. (Score:1)
already seen it:
if( 4 = i )
will not compile but has the same semantics as
what you have. Get in the habit of making
the LHS something non-assignable.
Re:At least give a good reason. (Score:1)
microsoft.net (Score:1)
microsoft.net - online OS and apps services.
subscription/rental use.
all stored on a centralized set of hardware (centralized meaning that your account is in one location... there will be many hosting facilities holding many accounts)
your environment will sit on the microsoft.net and you will just pay an additional $5/mo for office, $2 for frontpage etc....
then if you want to buy games - you buy/rent them and when you click - it will schedule an auto install onto your system.
then the future is that Microsoft has no middlemen; they get all the money for their products. They get to market this as the greatest INNOVATION in computing history - even better than the invention of computing itself. And all the stores and computer dealers can no longer sell MS products. (Good and bad - many will go out of business - hopefully most will just sell Linux on their boxes instead.)
so - it will have a big impact on the high-tech economy, and it seems that it is M$' way of giving DOJ the finger with a message "See? See? See what happens when you screw with a company that is big enough to be able to change the course of the computing industry with one fell swoop."
we will see what will come of this - but be ready. Gather your alternative OSes and get ready to fight... I sure as hell will not let MS hold my OS, force me to rent it, and turn off, corrupt or lose my data at any time (Hotmail, anyone?)
rant rant rant...
p.s. - it's real - it's in beta now!
Re:QNX (Score:1)
however (Score:1)
-coyo
Distributed OS (Score:1)
Example #1, you know how some users seem to royally screw up with no effort? Try those users on that system...
Example #2, those goofy problems that you have that don't seem important, and you can't track down, so you just format and forget? Try formatting an entire company and see how that flies. "Sir, we have to format your network" hehe
Example #3, instead of one user being down and possibly working at another station, EVERYONE is down. Not a pretty picture. Unless there was some way to hot-swap your Distributed OS servers, I don't think it's feasible.
Actually, on the other hand, if you COULD hot-swap your DistOS, formatting and problem users wouldn't be a problem. You'd just have 2 ready backup hot-swap servers, and hot-swap away.
Sounds like it might work, but also like it might not.
Luv,
Brady
OS info, including distributed ones (Score:4)
I find all the "pure" distributed OS stuff (systems built from the ground up to do distributed processing and not much else) relatively uninteresting on its own, but a lot of good ideas from those projects can filter into general-purpose operating systems, especially when you start talking about clustering or even NUMA. You might want to see MOSIX [huji.ac.il] for a cool, distributed/clustered Linux version.
--JRZ
Several Options... (Score:5)
Each has some somewhat different insights to bring to the table; there is no unambiguous way of saying "this is all vastly superior."
Re:Mosix (Score:3)
-AP
Success Depends on Application (Score:4)
The true success of a distributed OS will be in the applications in which it is applied. Obviously, if you don't have need for the advantages that a distOS brings to your computing, then you don't need a distOS, however cool it might be. My mother (who finally checks her email every night, bless her technologically-crippled heart) does not need the problems associated with attempting a distOS. What she does would not benefit from the extra resources.
Of course, supporters of this idea (and I'm not saying I'm not one) would state that you don't think you need the distOS because we haven't actually made a reason yet to need it. Kind of like how everyone didn't NEED the Internet until, of course, we had it. Now there are sites like
This is true, I think, in many ways. However, I think when implementing such an OS, consideration needs to be given to exactly what is accomplished by its being distributed. I can see mainframe-like systems benefiting enormously from such a system. A game system could really benefit from the extra horsepower, provided the connections were strong enough. Playing music, DVDs, etc. - all very CPU- and memory-intensive applications - could see some interesting benefits.
How about stability and redundancy? How would you like an OS that ran even if a bomb knocked out part of its system? Rewrote and/or re-routed itself to account for the damage and still get the job done? Wow! What a disaster-safe way to compute! Of course, you have one of these OSes inside your head right now......
The end fact is: good idea, but it needs lots of consideration of the practical application of such a thing so that we aren't just playing solitaire on a distOS.
Distributed OS (Score:2)
Re:How I kinda envision it (Score:2)
I guess an important thing is to emphasize what it is that should be "distributed". Already, most operating systems function "distributed", i.e. they have the ability to access remote file systems and remote printers, support execution of a given process on a given remote "machine", etc. This is one form of "distributed operating system", which has proven to function well in many settings. This is IMHO basically "distributed resource sharing". A little more advanced is the case of one process, executing on one CPU in one "machine", utilizing memory in another "machine". Still, this is not far out (see e.g. Berkeley NOW or some such project). This is basically a matter of providing some services and presenting them in a way such that they appear as if they were "local" services.
Another, different "wish" is the ability to "execute any one process on 4½ processors". This basically amounts to two issues. One is writing applications such that they can take advantage of an arbitrary number of underlying processors (it's not gonna do much good to take a strictly sequential program and execute it on any "multiprocessor-like" platform). The other issue involves automatic parallelization of programs by the operating system - something which is not a trivial matter, and often hardly worth it in "real applications". This basically amounts to providing a set of "handles", useful to the programmer when writing a process and used by the operating system when scheduling and executing the process. Such handles exist already, both in academia (The Actor Foundry [uiuc.edu] or Emerald are examples) and in "real life" with MPI et al.
But the "dream" of having an operating system which is just "undefined distributed" and which is able to execute "just any" process distributed is not realistic - for many reasons, including those above... Unfortunately it is also a common "wish", conjured out of the blue...
How distributed? (Score:2)
For a lot of applications, many of today's OSs can be considered distributed. Both CORBA and DCOM (or is it DNA nowadays?) provide mechanisms to abstract the location of a particular service, which in the end is what "distributed" really is all about, right? A lot of enterprise apps nowadays are quite highly distributed and often use OS capabilities to achieve that (certainly in the case of Windows).
In the end, the question is how highly you want to distribute the OS, and what the benefits and tradeoffs are. If you want to achieve smaller unit sizes, eventually the unit might be not powerful enough to do much useful work--like a Beowulf cluster of 386 machines. If you just want to make it fault tolerant, it might be worth it anyway. And so on...
Uwe Wolfgang Radu
Check out medusa (Score:2)
Some Reasons for a Distributed OS (Score:3)
2) Performance Benefits from Parallelism: distribute threads of execution across the global computational grid.
3) Share Resources Efficiently: don't waste those idle CPU cycles. Don't waste that extra main memory. This may be the least valid reason, as CPU cycles and memory have a big head start over bandwidth on the value-vs-time scale. Moore's law has all of them getting exponentially cheaper over time, but right now bandwidth is the most valuable of the three.
4) Support a New Generation of Applications: distributed operating systems can offer unique support for things like shared virtual environments or widely distributed databases. It is a classic point of contention whether distributed system services should be implemented at the application layer or at some lower layer. However, in terms of ease of application development, it is often very nice to have a really good abstraction available on which to base your app.
"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." -- Leslie Lamport
Re:AI (Score:2)
Unless, as an open source or independent project, someone creates one of these entities and does not engineer such safeguards. What happens from there depends entirely on what the creator engineered into it. If the creator decided to engineer in a "survival instinct", or a hatred towards humanity, or even a random element so that the entity would "decide for itself", the danger exists that it would fight for its rights and its survival.
Doesn't this negate your assertion that it couldn't happen because people(tm) will program it, and apparently humans can do no wrong?
Re:AI (Score:2)
Re:Excuse me, distributed? (Score:2)
You're intermixing hardware and application terms. The thing you go download from a web page and run is the SETI@home client application. It downloads work and reports results to a SETI@home server application. For the purposes of discussion, both applications could be running on the same hardware "server". The SETI@home application (client) running on your machine doesn't talk to the SETI@home application (client) on any other machines; it can only communicate with a SETI@home server application. This is the definition of Client/Server Computing.
In contrast, look at Gnutella. The application serves both as an information/processing client and server (i.e. a node). Your Gnutella node connects to N other Gnutella nodes who in turn connect to N other Gnutella nodes, and so forth, forming a complex web. You can remove any number of nodes and the web will heal in short order. USENET is built in much the same fashion (albeit much slower and less interconnected.)
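The contrast between the star (client/server) and the web (peer nodes) can be sketched with a toy reachability model (purely illustrative; the topologies are made up):

```python
# In a star, the hub is a single point of failure; in a peer web,
# reachability survives the loss of any one node.

def reachable(edges, start, dead=frozenset()):
    """Nodes reachable from `start` over undirected `edges`, skipping `dead`."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, todo = set(), [start]
    while todo:
        n = todo.pop()
        if n in dead or n in seen:
            continue
        seen.add(n)
        todo.extend(adj.get(n, ()))
    return seen

star = [("server", c) for c in "ABCD"]            # SETI@home-style
web = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]  # Gnutella-style ring

print(len(reachable(star, "A", dead={"server"})))  # 1 -- clients stranded
print(len(reachable(web, "A", dead={"C"})))        # 3 -- web routes around C
```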
Recalling Wrong (Score:2)
Both NeXTStep and Digital Unix were monolithic OSes, despite the association with Mach.
What you may be thinking of is that NeXTStep included a "distributed objects" scheme, lately being "cloned" as GDO (GNU Distributed Objects). [embl-heidelberg.de]
What *kind* of OS design? (Score:2)
At a larger scale, and as others have rightly mentioned, Plan 9 is one of the first major rethinks of fundamental OS design policies and goals. Unix has at its roots assumptions buried in a single large timesharing/batch system, with networking and thus distributed behavior stapled on afterwards. To whet your appetite, the X Window System is fundamentally irrelevant in the Plan 9 environment, except for legacy code. It is safe to say that the Plan 9 papers are required reading for your goals. Note that this really doesn't get into kernel level design -- the Bell Labs team freely admits that the kernel (at least pre-Brazil) was fairly conventional in design.
Last but not least, don't fall into the trap of a Solution looking for a Problem. Don't try to use "AI" (no offense, but whatever the heck you mean by that -- it's so overbroad as to be like saying "I'll solve it with Science!") when you don't even have a specific problem in the domain of distributed computing identified. Understand the real problems, which I'm guessing in your case are large-scale systems design and usability issues... THEN look for appropriate solutions.
Good luck!
Re:How I kinda envision it (Score:2)
That way when I ask photoshop to rotate a 4096x4096 image by 37.241 degrees it checks the lan for free machines and splits the task up and deals it out.
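A sketch of just the "split it up and deal it out" step, under the assumption that each output row of the rotation can be computed independently from the source image (the helper name is invented):

```python
# Divide the output rows of an image operation into per-machine chunks,
# then merge results in order. Each chunk could go to a free LAN machine.

def split_rows(n_rows, n_machines):
    """Divide rows [0, n_rows) into nearly equal contiguous chunks."""
    base, extra = divmod(n_rows, n_machines)
    chunks, start = [], 0
    for i in range(n_machines):
        size = base + (1 if i < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

chunks = split_rows(4096, 100)  # the 100-machine LAN from the post above
print(len(chunks))                   # 100
print(sum(len(c) for c in chunks))   # 4096 -- every row assigned exactly once
```

The hard parts the sketch leaves out (shipping the source image to each machine, detecting stragglers, merging out of order) are precisely the overheads that make small jobs not worth dealing out.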
Mozart (Score:3)
Many of the important theoretic issues [geocities.com] have been addressed at the nuts-and-bolts level by the Mozart Programming System [mozart-oz.org]. Specifically, if you read Distributed Programming in Mozart - A Tutorial Introduction [mozart-oz.org] you'll have an idea of the kind of distributed programming power provided by a network of Mozart systems.
The key to Mozart's power is its use of ultra-lightweight threads that can share single-assignment distributed variables within hierarchical computation spaces. What this means is you can have unlimited "processes" that are waiting on all sorts of things all over the network -- and failures are easily confined to the minimum logical spaces.
By "ultra-lightweight threads" I mean a virtual unification of process structure with data structure.
"divide processor time for a single task"? (Score:3)
How exactly do you propose that the operating system do this?
Unless the programmer or compiler parallelizes the code, you're out of luck for running it on more than one processor at a time. What is the OS supposed to do? Recompile it on the fly, adding all of the MT-safing, rebuild it, and hope that it's faster?
Unless an application is designed from the start to be parallel, it can't be run as a parallel program.
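Two tiny loops make the distinction concrete (illustrative only): the first has independent iterations an OS or compiler could farm out, the second carries a dependency from each step to the next, so extra processors cannot help.

```python
# Parallelizable: each element depends only on its own input, so the
# iterations can run in any order, on any number of processors.
data = list(range(10))
squares = [x * x for x in data]

# Inherently sequential: step i needs the result of step i-1, so the loop
# must run in order no matter how many CPUs are available.
state = 1
history = []
for x in data:
    state = (state * 31 + x) % 1_000_003  # loop-carried dependency
    history.append(state)

print(squares[:3])  # [0, 1, 4]
```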
Re:Please define (Score:2)
A CPU in a box that sits under your desk, manipulating the bits that you tell it to, is able to make certain assumptions that make writing the operating system easier. The challenge of writing an operating system that can operate across platforms--where, perhaps, not all machines are equally trustworthy, or where some processors may disappear completely (how do you handle lost data efficiently?)--is still the same question ("how do you use these resources to get work done?"), but the answer isn't the same.
You are correct that being distributed doesn't help manage resources--in fact, it's a pain. The advantage being distributed offers is in having the cycles available to get more/bigger stuff done.
Now, to answer the original question:
An AI would probably find use in such a system. It could conceivably be trained and/or learn to recognize, for instance, unreliable nodes in the system, and perhaps only distribute less important work to those nodes. Where the AI itself would run would be an interesting problem, and is really an extension of the question "is the distributed OS symmetric?" (Note that things like SETI@home are /not/ symmetric, as they have a central "OS node" that doles out work to other nodes, which then respond with answers. This is the same thing as a current-day consumer OS that runs the OS on, say, just one CPU, and never runs any part of itself on any other CPU, even if they are idle.)
An AI could be used in any number of other jobs that such an operating system might need to do (e.g. allocating memory, scheduling jobs, etc.), but really an AI--as I usually think of them, anyway--is probably overkill. The simple algorithms currently employed in traditional OS's are probably sufficient...but you never know. That's why it's an interesting question.
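A sketch of the "simple algorithm" baseline suggested above, with no AI at all (all names invented): track each node's observed success rate and route important work only to reliable nodes.

```python
# Non-AI stand-in for unreliable-node handling: per-node success tracking
# plus a trivial routing rule.

class NodeStats:
    def __init__(self):
        self.ok = 0
        self.failed = 0

    def record(self, success):
        if success:
            self.ok += 1
        else:
            self.failed += 1

    @property
    def reliability(self):
        total = self.ok + self.failed
        return self.ok / total if total else 0.5  # unknown nodes: neutral prior

def pick_node(stats, important):
    """Important work goes to the most reliable node; unimportant work to the
    least reliable one, which also gathers more evidence about it."""
    ranked = sorted(stats, key=lambda n: stats[n].reliability)
    return ranked[-1] if important else ranked[0]

stats = {"alpha": NodeStats(), "beta": NodeStats()}
for _ in range(9):
    stats["alpha"].record(True)
stats["alpha"].record(False)   # alpha: 90% reliable so far
stats["beta"].record(False)    # beta: flaky so far

print(pick_node(stats, important=True))   # alpha
```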
Distributed OSes are here (Score:4)
There are several real, full-featured distributed operating systems out there. One good example is Legion [slashdot.org]. It gives you the illusion of running programs on your desktop, while they are actually running lord-knows-where. Yes, you often need a lot of network bandwidth to get good results. Depending on the exact details, you can run programs on other machines with either no or small modifications.
Lest you think this has nothing to do with today's operating systems, the Linux desktop folks have started using Corba quite a bit to link things together. Well, Legion provides much more powerful, secure, and reliable ways to do the same thing, in a much more consistent fashion.
Distributed, but not too connected (Score:4)
Re:Several Options... (Score:2)
Mach is the granddaddy of distributed OS work? Heck, Mach wasn't even the first distributed OS developed at CMU. Hydra pre-dates it by more than a decade. Bill Wulf did quite a bit of work on it. The successor to Hydra is Legion, at the University of Virginia.
Re:Distributed OSes are here (Score:2)
Harder than we would wish (Score:2)
Load balancing? Easy to write, hard to make work well. You need to compare the cost of migration to the benefits of balancing, and you need to make decisions based on partial and outdated information. Many early systems thrashed because everybody would migrate to the idle processor, which then became overloaded, so everybody migrated somewhere else, etc.
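The thrashing failure mode described above has a classic damper: only migrate when the load gap is big enough to pay for the move. A toy sketch (the cost constant and load numbers are made up for illustration):

```python
# Only migrate when the load gap exceeds the estimated migration cost, so
# every process doesn't stampede to the one idle processor at once.

MIGRATION_COST = 2.0   # assumed cost, in the same units as "load"

def should_migrate(local_load, remote_load):
    # The naive policy ("remote is idler, go!") thrashes; this one demands
    # that the move pay for itself with room to spare.
    return local_load - remote_load > MIGRATION_COST

# A nearly balanced pair stays put...
print(should_migrate(5.0, 4.0))   # False: a gap of 1 doesn't cover the cost
# ...but a truly idle remote node still attracts work.
print(should_migrate(5.0, 1.0))   # True
```

Real balancers also randomize or stagger these decisions, since every node is deciding on partial, outdated load figures; a deterministic rule applied simultaneously everywhere just reproduces the stampede with a threshold.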
Speaking of migration, it's a mess. The only system I know of that implemented migration fully was Locus, out of UCLA. The trouble is that whenever a process has a dependency on or a hook into its environment, that connection must be migrated too. Open files, working directory, sockets, controlling tty, signals, process parent/child relationships, and many more details must be handled. Not fun, and the benefits turned out to be mostly minor (though I do recall writing a cool version of "find" that migrated itself to the machine that stored the current subtree as it ran).
The issue of supporting distributed applications is generally considered to be separate from writing a truly distributed OS. Most of what a distributed application needs can be provided by a good communications library. To some extent, we're still learning exactly what such a library should have. What about SETI@home is specialized to it, and what's universal? I don't think we've completely figured it out.
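For what the universal core of such a communications library looks like, here's a toy of the SETI@home-style pattern mentioned in this thread: a master holds a queue of work units and collects answers keyed by unit id. This is a single-process stand-in; a real library would ship the units over the network and add retries, authentication, and duplicate-result cross-checking.

```python
# Toy of the "central node doles out work" pattern: a queue of work units
# and a result collector keyed by unit id.

from queue import Queue

def worker(data):
    return sum(data)                     # stand-in for the real computation

def master(work_units):
    pending = Queue()
    for uid, data in work_units:
        pending.put((uid, data))
    results = {}
    while not pending.empty():
        uid, data = pending.get()
        results[uid] = worker(data)      # in real life: sent to a remote node
    return results

print(master([(0, [1, 2]), (1, [3, 4])]))   # {0: 3, 1: 7}
```

Which of those extras are SETI@home-specific and which belong in every such library is exactly the open question raised above.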
The following is a non-exhaustive list of major concerns and design issues that must be addressed in a distributed OS. We have fairly good solutions to some, but most have not yet been solved:
Finally, I should note that the list of projects [arizona.edu] at U of Arizona [arizona.edu] might appear to be complete, but it omits a lot of important projects. Four that jump to my mind are Locus and Ficus [ucla.edu] from UCLA [ucla.edu] (though the latter is more of a distributed filesystem than an OS), Coda [cmu.edu] from CMU [cmu.edu] (again a DFS, rather well-known to Linux folks), and of course the extremely important Network of Workstations [berkeley.edu] work out of UC Berkeley [berkeley.edu], which led to Inktomi [inktomi.com] and Hotbot [hotbot.com].
Re:At least give a good reason. (Score:2)
>Since then I have promised myself never to do any serious development in C if I can help it.
That is why you modularize your code and perform unit testing. This sort of error will crop up in any language. For a given language, there will always be problems that have complex solutions. At that point, you have to apply good programming practices and a bit of software engineering.
That a language such as Java or Pascal alleviates many types of programming errors is good, but there are just as many minuses to these languages. It's an engineering decision as to which language is best suited for a given set of problems and developers.
Personally I use Perl, but that's even more error-prone than C (with the exception of core dumps). Good coding practices are essential for this. (The benefit, of course, is rapid development time.)
Re: (Score:2)
Parallelizing during compilation. (Score:2)
You can do this fairly easily for certain types of loop. It would be a straightforward extension of loop unrolling. Now, I don't think anyone's been insane enough to _do_ this to date, as the thread creation overhead would eat the speed gain for anything except a very long-running loop.
Something like TransMeta's code morphing that profiles on the fly could in principle figure out where it's sensible to do this, but speed gains would be questionable except in very special cases.
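To show what "parallelize the loop" amounts to in the simplest case: split the iteration space into contiguous chunks, one per thread. The sketch below does this by hand; for a loop this short the thread start-up overhead swamps any gain, which is exactly the point made above.

```python
# Splitting a reduction loop across threads, the way an auto-parallelizing
# compiler might for a loop with no cross-iteration dependencies.

import threading

def parallel_sum(xs, nthreads=2):
    partials = [0] * nthreads
    chunk = (len(xs) + nthreads - 1) // nthreads   # ceiling division

    def run(i):
        # Each thread reduces its own contiguous slice into its own slot,
        # so no locking is needed until the final combine.
        partials[i] = sum(xs[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=run, args=(i,)) for i in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

print(parallel_sum(list(range(100))))   # 4950, same as the serial loop
```

The dependency analysis (is each iteration independent?) is the hard part a compiler must do; the chunking itself, as shown, is mechanical.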
Re:"divide processor time for a single task"? (Score:2)
Of course, you would probably need a very fast, low-latency network between the processors to see any speed gain, unless most of your processing was on very large vectors.
Just what I think, as outlandish as it may be. But wouldn't it be cool?
-AP
Re:"divide processor time for a single task"? (Score:2)
At _compile_ time, it's possible, though not always practical or beneficial, as I'd already stated.
You were talking about doing it at _run_ time on binaries that weren't designed for multithreading/multiprocessing.
There is a big difference between these cases.
It's not impossible, but it's *very* difficult, and of questionable use in almost all cases (overhead for threading is high, for multiprocessing is higher, and for running on processors separated by substantial latency is prohibitive).
As another poster pointed out, some compilers already do this at build-time, but that's about it. If you want your application to be easily parallelized, then write it to be multithreaded to begin with.
Re:"divide processor time for a single task"? (Score:2)
Should I keep singing "Dream a little dream", and wait for a parallel MP3 encoder?
That would be overkill in most cases anyway. It is quite simple to set up a Beowulf as an MP3 encoder farm at the file level. Ripping a CD can be very fast that way. There would be almost no point in the extra work to do more fine-grained parallel processing.
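The file-level approach really is that simple: no fine-grained work splitting, just deal whole tracks out to nodes round-robin and let each node encode its pile. A minimal sketch (node names and filenames are invented):

```python
# File-level parallelism: assign whole tracks to encoder nodes round-robin.
# Each node would then run its encoder over its own list independently.

def deal_tracks(tracks, nodes):
    jobs = {n: [] for n in nodes}
    for i, track in enumerate(tracks):
        jobs[nodes[i % len(nodes)]].append(track)
    return jobs

tracks = ["01.wav", "02.wav", "03.wav", "04.wav", "05.wav"]
print(deal_tracks(tracks, ["node1", "node2"]))
# node1 gets tracks 1, 3, 5; node2 gets tracks 2, 4
```

Since one track's encoding never depends on another's, this embarrassingly parallel split gets nearly linear speedup, which is why finer-grained parallelism buys so little here.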
For other tasks, automatic parallelization would be a big plus, but is a hard problem. It's made even harder because depending on the speed of the interconnect (anywhere from a local torus to a bunch of 28.8 dialups) the basic approach would be entirely different.
That doesn't mean it won't happen, just that I'm not holding my breath waiting.
Re:Know your buzzwords (Score:2)
Just because your video card/hard drive/printer/whatever has a CPU and/or RAM inside it doesn't mean that the Operating System is running on it. These are just instances of a Standalone Operating System interfacing with peripherals containing processing power. Inter-process communication does not a Distributed Operating System make!
As you said, know your buzzwords.
--
Re:Problem with current programs (Score:2)
Re:"divide processor time for a single task"? (Score:2)
It is just a matter of time (how long, who knows!) before we hit the wall on processor speed. It will have to be an intelligent solution that would reach the above aims. It may not be mine, but one thing is for sure: it probably won't be me implementing it!
-AP
Re:Of course there would be an advantage (Score:2)
...and dump it on the OS developers, who already have plenty to worry about thankyouverymuch.
Re:Of course there would be an advantage (Score:2)
Author of the comment concerning the Beowulf FAQ, please disregard this rant as you have the only enlightened reply to my original post.
RANT ON:
As to all of you bitching that SMP already takes care of a distributed architecture:
Does SMP handle the latencies encountered when routing messages through Ethernet cards? No.
Does SMP handle the reordering of packets when they come back at wildly different times (I'm talking several seconds, not microseconds here) and in different orders? No.
How can you compare a 100 MHz bus to a distributed architecture? You can't. They are completely different animals with different needs. 100 MHz buses have caches and low latencies. Distributed architectures work on scales that are completely different from the inside of a microcomputer. Beowulf is perhaps the closest thing we have to a valid distributed architecture (for Linux at least), and as far as I know it is not set up to work through routers/firewalls/shared media hubs/etc.
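The reordering problem raised here can be shown in miniature: results come back tagged with sequence numbers, possibly seconds apart and out of order, and a reassembly buffer releases them in order. This is a purely illustrative sketch, not any particular protocol.

```python
# Reassembly buffer: hold out-of-order results and release each in-order
# run as soon as the next expected sequence number arrives.

def reassemble(tagged_results):
    buffer = {}
    next_seq = 0
    ordered = []
    for seq, payload in tagged_results:
        buffer[seq] = payload
        while next_seq in buffer:            # flush every in-order run
            ordered.append(buffer.pop(next_seq))
            next_seq += 1
    return ordered

# Arrivals out of order (unit 2 lands before 0 and 1):
print(reassemble([(2, "c"), (0, "a"), (1, "b")]))   # ['a', 'b', 'c']
```

SMP hardware never needs this machinery because the cache-coherent bus delivers everything in well under a microsecond; on a WAN-scale system, buffering like this (plus timeouts for results that never arrive) is the baseline.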
Do any of you app developers have any concept of what a good sysadmin/hardware engineer has to deal with on a daily basis? It certainly doesn't seem so.
RANT OFF:
Please moderate this to hell to your hearts content. The intended victims of my rant will still see it in the thread replies...
Fair 'nuff... (Score:3)
The interesting part is that Legion provides tools that resemble some parts of CORBA, whilst Spring provided tools that grew into CORBA, whilst Sprite provided journalling and cache tools that are essentially what journalling and cache servers provide today.
In a sense, what has happened is that an OS of the 1970s, Unix, has been shown sufficiently malleable that it could integrate in concepts from the research projects of the 1970s and 1980s.
Unfortunately, the 1990s were not a terribly good time for OS research; sort of like The Very Long Night of Londo Mollari [midwinter.com] of the OS world. There was this minor problem of Microsoft "buying away" whatever serious OS researchers that they could...
Re:Mosix (Score:2)
The OS is known as MUNGI, and is a single address-space operating system with persistent memory. This means that:
(a) There's no such thing as 'devices': everything is mapped into the one 64-bit address space (including memory on different machines).
(b) If you want to 'save' something, you stick it in memory and tag it as 'persistent'; hence, there's no such thing as files.
If you want to read more about MUNGI, check out
http://www.cse.unsw.edu.au/~disy/Mungi/index.ht
and particularly
http://www.cse.unsw.edu.au/~disy/Mungi/manifest
-Shane Stephens
Re:QNX (Score:2)
+ All network filesystems available with
+ So, it is quite acceptable to echo "Hello World" >
+ Send/Receive/Reply interprocess messaging is network transparent
+ I have run computers with 4 network cards with no problem. QNX load balances over all available links. It will also intelligently bridge packets between LANs.
+ Load balances between different media too (Ethernet, Token ring, FDDI, etc)
+ Memory protected microkernel architecture! 1.95us context switch on a P133
I recommend checking out http://www.qnx.com/products/networking/
No, I do not work for QNX, but I think the world would be a better place if more people used it
The new QNX RTP will be open source except for the microkernel itself (12K of code), I believe.
Re:AI (Score:2)
Corporations == Rule Based System Gone Berzerk (Score:2)
Who needs AI research when you have Harvard Business School?
Yes, it's true, folks. We already have the Sci-Fi scenario at hand. Corporations are organic beings that operate on a very simple set of rules. The only problem is that we can't turn them off -- they'll just keep going until they've consumed all the planet's resources. Then they'll use people as a power source. We'll all be "coppertops".
I would suggest that we seriously look at eradicating these beasts before they kill us all.
__________________________________________________
Re:Of course there would be an advantage (Score:2)
Erlang (Score:2)
Erlang [erlang.org] (developed by the Swedish telecom company Ericsson [ericsson.se]) is an Open Source distributed operating system that runs on top of a host OS such as Unix or MS Windows. Erlang is based on high-level language paradigms, which makes it refreshingly different from all these C-based OSes. I think it deserves to be better known.
For a rather comprehensive list of operating systems, check out the OS review subproject [tunes.org] of the Tunes [tunes.org] project. Of course, since Tunes is The Ultimate OS, it is distributed also (its only disadvantage is that it (currently?) doesn't exist).
Re:How I kinda envision it (Score:2)
And to turn your analogy round, if one man can dig 10 holes in 1hr40, it does mean that 10 men can dig ten holes in ten minutes.
Depends on the application
Re:Know your buzzwords (Score:2)
--