Will Pervasive Multithreading Make a Comeback?
exigentsky writes "Having looked at BeOS technology, it is clear that, like NeXTSTEP, it was ahead of its time. Most remarkable to me is the incredible responsiveness of the whole OS. On relatively slow hardware, BeOS could run eight movies simultaneously while still being responsive in all of its GUI controls, and launching programs almost instantaneously. Today, more than ten years after BeOS's introduction, its legendary responsiveness is still unmatched. There is simply no other major OS that has pervasive multithreading from the lowest level up (requiring no programmer tricks). Is it likely, or at least possible, that future versions of Windows or OS X could become pervasively multithreaded without creating an entirely new OS?"
Multithreaded won't be optional any more. (Score:5, Insightful)
It's not entirely the operating system's fault. The biggest advance of BeOS wasn't necessarily just that the kernel was designed to multithread nicely; Be also did their best to force you to write multithreaded code when you wrote a Be application.
I suspect that the first thing to become a clear performance bottleneck is going to be the applications. And that's not going to be fun, because there are a lot of applications out there, and you can't just magically recompile them with threads turned on and see much difference. You need to synchronize the data structures that multiple threads touch at the same time, and split things up so that you can actually keep a decent number of cores busy. This is not trivial when you are talking about an app that somebody wrote single-threaded in the mid-90s without any notion that threads might be useful later.
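The synchronization problem described above can be sketched in a few lines of Python (illustrative only; the lock is exactly what that mid-90s single-threaded code never needed):

```python
import threading

# A mid-90s-style global structure, now touched by several threads at once.
# Without the lock, the read-modify-write below is a classic lost update.
counts = {}
lock = threading.Lock()

def tally(words):
    for w in words:
        with lock:                       # serialize access to the shared dict
            counts[w] = counts.get(w, 0) + 1

threads = [threading.Thread(target=tally, args=(["a", "b"] * 1000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counts["a"], counts["b"])          # 4000 4000
```

And note that this only makes the code correct, not fast: a single lock around everything is the other half of the retrofitting problem, because it stops the cores from actually running concurrently.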
No Maybe Yes (Score:1, Insightful)
Re:Microsoft's plan is to keep adding cores... (Score:2, Insightful)
Nice try with the /.-friendly, but ultimately meaningless and ignorant, tirade. CPU cores don't get deadlocked; threads in a cyclic wait pattern get deadlocked. It doesn't matter which core they run on. You could have a million cores, but if two threads are deadlocked, you're still screwed as far as the program goes. And the article was about BeOS, not Microsoft!
Re:Multithreaded won't be optional any more. (Score:4, Insightful)
As for applications: if you're running five applications, multiple cores will help without recompiling, assuming the kernel's scheduler is reasonably sane (and kernel writers are getting smarter at writing schedulers). If you are running one single-threaded app, multiple cores aren't going to help you much at all. Of course, the other advantage of multithreaded apps (even on a single core) is that if the app is blocking on one thing (I/O being the most common cause of blocking), the other threads can carry on doing work.
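That blocking-I/O point is easy to demonstrate even on one core; a minimal Python sketch, with time.sleep standing in for a blocking read or network call:

```python
import threading, time

def slow_io(results, i):
    time.sleep(0.2)          # stand-in for a blocking read or network call
    results[i] = i * i

results = [None] * 4
start = time.monotonic()
threads = [threading.Thread(target=slow_io, args=(results, i)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.monotonic() - start

# The four 0.2 s waits overlap, so this finishes in ~0.2 s, not 0.8 s.
print(results)               # [0, 1, 4, 9]
```

No extra cores required: while one thread is parked in the kernel waiting, the others run.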
I don't get it (Score:5, Insightful)
Overall, I really don't see anything in BeOS that you don't get as well or better in a modern Linux system. BeOS has some efficiency gains from having been developed from the ground up with little need for backwards compatibility, but that's probably also why it wasn't successful in the market. And threading and scheduling in particular are highly efficient and mature in Linux.
(Note that OS X is basically a hacked NeXTStep; the NeXTStep kernel is Mach, the same kernel that is the basis of the GNU Hurd.)
Proof MS set computer industry back (Score:4, Insightful)
Same with BeOS. It had many qualities, including stability, ease of use, and responsiveness, that MS-Windows can't seem to find today. Granted, neither can GNU/Linux or Mac OS X, but since they are hardly the predominant OS, I can't really fault them to the same extent.
Anyway, it's an old rant. Never mind the ravings of an oldster who never got over the sopranoing Microsoft gave DR-DOS. Those like me are just bitter our careers turned from fun and interesting to tedious and dull because of Microsoft. Y'all go on and play with your shiny new toys. No, really, don't mind me. I'm just gonna sit up here on my porch and get rip-roaring drunk and talk about the old days, whether anybody's listening or not.
Re:Question... (Score:5, Insightful)
Rewriting things from the ground up, without acceptable justification, has never been an effective strategy.
I hope so (Score:2, Insightful)
Re:We had different programmers 10 years ago (Score:5, Insightful)
So yes, if you mean "developers of business applications aren't generally hardcore down to the metal programmers," then I'd agree with you. John Carmack and Michael Abrash would be bored out of their skulls working on UI issues for Quicken 2008. And, given their aesthetic sensibilities, they wouldn't necessarily be the best choices (just *try* to balance your checkbook).
But if you mean that great programmers are no longer among us, then I'd say that you should change jobs, because it's more likely that they're simply not around *you*.
Re:Tried (for Windows) and killed (Score:5, Insightful)
Thank Gawd Linux isn't using any relic of an OS [wikipedia.org] that started in the 1970s as its base! No, no, it's a 100% clean, 21st-century, legacy-free implementation.
On a more serious note, I used BeOS myself back in the day. It was definitely more responsive than Win98, but not everything was perfect either. The networking implementation absolutely sucked. Oh, it had lots of threads; it's just that the threads were not all that beneficial to actual performance. The networking stack, and other parts of the system that handle streams of many relatively similar tasks, would probably parallelize better via a pipeline scheme, where parallelism is achieved by having independent stages of the pipeline run in parallel (much as CPUs break up the task of executing instructions into a pipeline). The type of parallelism that works best can depend on the application, and a one-size-fits-all philosophy is not usually correct no matter what the solution is.
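The pipeline scheme described here can be sketched with queues connecting stage threads (Python, illustrative only; the stage functions are toy stand-ins for things like checksumming and reassembly in a network stack):

```python
import threading, queue

def stage(fn, inbox, outbox):
    # Each pipeline stage runs in its own thread, like independent stages
    # of a CPU pipeline; parallelism comes from stage overlap, not from
    # splitting a single task.
    while True:
        item = inbox.get()
        if item is None:           # sentinel: shut down and propagate
            outbox.put(None)
            return
        outbox.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x * 2, q2, q3)).start()

for i in range(5):                 # feed packets/work items into stage 1
    q1.put(i)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
print(results)                     # [2, 4, 6, 8, 10]
```

While stage two is processing item N, stage one is already working on item N+1, which is the overlap the parent comment is describing.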
Time to load applications (Score:2, Insightful)
Old systems didn't have bloat because characters were bytes and graphical entities were flat bitmaps.
Nowadays we have JPEG-encoded resources and double-byte strings and all sorts of other magical crap.
Programs were (mostly) written for one language and didn't need to adapt themselves to multiple systems.
I bet if you tried to work inside the restrictions of older systems, programs would fly along now: startup times would be low, response times would be low.
Just because we have faster systems does not mean we can afford to add more bloat.
Re:Amiga beat them all (Score:5, Insightful)
Ummm... (Score:4, Insightful)
BeOS was a single-user system, if I recall correctly, so that partially reduces the need for the security isolation that separate processes provide.
But beyond that, modern OS's seem to offer a lot more flexibility. They have processes if you want separation of address space, shared memory if you need better performance for communication between threads, threading if you want a shared address space, and user-level threading libraries for the ultimate in performance if you're willing to spend the time to code it properly.
Being able to watch eight movies at a time is a neat trick, but it's not particularly useful, especially when we'll soon have processors with a ridiculous number of cores on them. With a large number of cores, the overhead of a process context switch is hardly more than that of a thread switch, since a CPU-intensive process can run on its own core.
I think the future of OSes is more likely to be in microkernel architectures that can move processes around efficiently to balance the processing load between many CPUs. Or a hybrid microkernel/monolithic architecture that runs the big kernel on one CPU for tasks that require responsiveness, with the rest of the kernel processes balanced across the remaining CPUs for throughput.
Re:We had different programmers 10 years ago (Score:5, Insightful)
While C++, assembly, and C might no longer be "cool", they definitely teach people how to write efficient code, how to debug effectively, and how to understand a wide variety of computing concepts.
The same college today is too busy teaching C# and Java. Those languages are nice and all, but not teaching low-level C, C++, and assembly IMO leads to sloppy coders: people who don't understand the bytecode generated, people who don't mind wasting system resources because, hey...
I was nearly crucified when I suggested to my boss that we recode a piece of an application in C so it would scale better than the current shitty VB COM version. He just looked through me and said: add another server! A lot of today's code is written by people who don't even understand how the code is getting executed.
Re:Uh, IRIX anyone? (Score:4, Insightful)
What I'm getting at here is that perhaps we could look to the past for some ideas about multi-threading, and IRIX is not a bad choice at all, particularly since it was Unix-derived, like the Linux we use now, whereas BeOS is not.
Re:It makes sense with multi-core cpus (Score:2, Insightful)
v2:
Haiku from BeOS
Multitasking all programs no delay
Open source for the win
(5-7-5 syllables)
Re:It makes sense with multi-core cpus (Score:1, Insightful)
Is High Performance Computing Really the Goal? (Score:4, Insightful)
Re:Amiga beat them all (Score:1, Insightful)
Re:Amiga beat them all (Score:4, Insightful)
And I was an Amiga fanatic. And while I held out hope that Commodore would get their act together and provide the features that were rumored and needed (DSPs, retargetable graphics, etc.), I always knew it would never happen. If only Dave Haynie had been allowed to do what he wanted, but then again that probably would have made it too expensive for people to buy.
Change programming language instead of OS (Score:4, Insightful)
Well, that's it, read up and then maybe we can get some more interesting Slashdot postings about new computers:)
And it is quite amazing that Sun hasn't picked up on this. Their little Java thingie doesn't scale that well after all:)
Re:We had different programmers 10 years ago (Score:5, Insightful)
His reaction likely had little to do with code and a lot to do with business. To management's ears you said, "This part is done, but I want to take time and money and re-do it really shiny." Now, if craftsmanship meant anything in terms of the sales of software, you might have been listened to. But since the hardware companies are all too quick to step up and offer a new gizmo that will have your computer running "blazing fast", the consumer thinks that sluggish performance is a hardware problem. The end result is that the management of software companies sees little to no reason to take any more time or money than necessary to make a program clean and efficient.
Re:It makes sense with multi-core cpus (Score:3, Insightful)
BeOS had a lot of problems as well, for instance the OS was written in C++, which meant that when you wrote drivers, they had to be in C++. The software loaded fast, because it wasn't very mature. It was like loading notepad or kedit. Simple and easy, but once big apps were running on it, they wouldn't be quite so snappy as they loaded in the dozens or maybe even hundreds of components.
Re:Multithreaded won't be optional any more. (Score:4, Insightful)
As an aside, I would think that a true micro-kernel based OS would work the best using multi-core. Putting every possible function in a different process would seem to be a better use of a multi-core architecture than to have larger kernels.
Re:Is High Performance Computing Really the Goal? (Score:4, Insightful)
Re:Yes (Score:3, Insightful)
Yes, it will help the programmer masses not shoot themselves in the foot, but the overhead in STM is phenomenal, and you're relying on Moore's Law to save you.
If you want a responsive system (running on thread-unfriendly OSs like Windows) there's no substitute for knowing what you're doing.
We currently have some offshore developers who are peppering their code with Thread.Sleep() statements, with sleep values selected so that the code kinda-sorta works on their machines. They might as well be doing Thread.Sleep(Rnd()) for all the good it's doing them.
What's needed is some education in how to write multithreaded programs -- and most universities are not able to provide that education or experience in the time available for a bachelor's degree.
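For the record, the fix for sleep-and-pray code is to block on an explicit signal instead of guessing at durations. A minimal sketch in Python (the developers above were using .NET's Thread.Sleep, but the idea is identical):

```python
import threading

result = []
ready = threading.Event()

def worker():
    result.append(21 * 2)   # produce the result
    ready.set()             # signal completion; no guessing at sleep values

t = threading.Thread(target=worker)
t.start()

# Instead of time.sleep(0.5) and hoping the worker finished in time,
# block until the worker actually signals.
ready.wait()
print(result[0])            # 42
t.join()
```

This is correct on a fast machine, a slow machine, and a loaded machine, which is exactly what the sleep-value-tuning approach can never be.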
I wish that Palm had open-sourced the BeOS source when they acquired the company. Or at least the parts that weren't encumbered by other people's IP. If it had been placed on SourceForge, it would have been a good starting point for people to learn how to do it correctly, and gotten some eyeballs on it to fix some of its warts.
Chip H.
Re:Microsoft's plan is to keep adding cores... (Score:2, Insightful)
Re:Question... (Score:5, Insightful)
The Intel vs. Motorola Decision (Score:3, Insightful)
IBM made a decision in 1980 to go with the Intel 8088 (8/16 bit) processor instead of the Motorola 68000 (16/32 bit) processor. At the time, the Motorola processor was designed to be the processor of the future. On the other hand, the 8088 was intended to be almost compatible with 8080A assembly code. This created the need for the 8088 segmented architecture, and segments suck.
The use of segment registers set PC development back over a decade. Essentially, all the 80's was spent fighting segmentation wars. The IBM PC didn't get proper 32-bit computing until the widespread popularity of 386 PC hardware in about 1992, and the subsequent introduction of Windows 95. Windows XP was the first unified 32-bit Microsoft O/S in 2001. DOS mode was finally eliminated in Windows Vista in 2007!!! I think you had to live through segmentation and far pointers to understand how incredibly awful they were.
Interestingly, the Apple Mac had a 68000 processor in 1984. If that had caught on, it would have saved the PC industry a decade of pain. Apple made the decision not to license the operating system for the Apple Mac. The result was that Apple hardware was expensive, so few purchased it. The problem was so severe that Apple almost went bankrupt before Steve Jobs returned.
IBM built Microsoft and Intel when they created the IBM PC. Apple rested on the laurels of a better architecture and almost went bankrupt. These two decisions defined at least 15 years of software history. It isn't until now that we see a few different pure 32-bit operating systems fighting it out on a common hardware platform. The Windows XP, Vista, OS X, Linux battles will be interesting. The 32/64-bit battles will take place on PC hardware that is still almost completely assembly-language compatible with the first widely used 8-bit Intel microprocessor: the 8080A!
Re:I don't get it (Score:3, Insightful)
However, from the user's perspective, it's a very big deal. Having used BeOS a few years ago on what was very modest hardware (even at the time), I can easily say that it felt like it was the fastest and most responsive operating system that I've ever used.
Even Linux on modern hardware doesn't come close to the snappiness of BeOS. You also can't beat the fact that it could boot from BIOS to the desktop in under 10 seconds (again, on a *very* modest PC).
Be should have been the future of operating systems, and it's an absolute shame that the code is left to rot under Palm's guidance. Windows, Linux, and MacOS simply can't touch the simple elegance and efficiency that BeOS mastered almost 10 years ago. (Remember that BeOS was released alongside Mac OS 9 and Windows 98.)
Re:Multithreaded won't be optional any more. (Score:4, Insightful)
Are YOU serious? Not one of those applications/services you mention requires much CPU. A single CPU with a good scheduler can easily handle all of that with good responsiveness and little or no loss in overall performance. Well, in the case of Windows XP, it would also help to have a sane virtual memory system. A lot of the responsiveness problems you see on Win XP machines (Vista may have addressed this) stem from Windows liking to swap apps out to disk when you minimize them. It has very little to do with available CPU power.
-matthew
Re:Question... (Score:4, Insightful)
I lost a bit of change on Be stock. It still pisses me off, because Be had the nucleus of a great idea, but failed to follow through.
Re:Question... (Score:4, Insightful)
Re:It makes sense with multi-core cpus (Score:5, Insightful)
An operating system's job is to mediate access to hardware and software resources. The fact that every modern OS is madly bloated is just proof that the world's OS developers are ADHD suburban twits getting lazy and gratuitous with fluffy GUI features, when really they should be focusing on two core things: device drivers and the almighty scheduler.
Just think about it: Windows Vista is, on average, 10% slower than XP for generic tasks and gaming. Why the hell is that? Someone fucked with the kernel and stuck things in it that don't belong there, like that ever-annoying popup security model.
It's like any other optimization job: you tighten the hell out of the most frequently-called code snippets like the scheduler and memory manager. If your scheduler is so contorted and polluted that it can't even fit in the L1 Cache anymore, you should be beaten with your keyboard!
The BeOS guys probably had a plan, along with some good brains and coding skill, and they stuck to that plan. If a feature isn't in the plan, it doesn't get coded; the system stays lean and fast, and you let the application developers handle all the shiny stuff. That's how it used to be, and still is in some circles... but not in Windows or Linux. That's where we went wrong.
Predict the future? Look at the past. (Score:3, Insightful)
Wow. You seem awfully sure of the future. Can I borrow your crystal ball? I'd like to look up next week's lottery numbers...
Looking at history, the computer industry seems to show a remarkable propensity to not learn from experience, and instead keep making the same mistakes over and over again (with different names). What evidence do you have that suggests that is going to change?
Re:We had different programmers 10 years ago (Score:3, Insightful)
To be blunt: writing VB business apps in C is usually a stupid idea. Business app requirements change often and usually for nontechnical reasons. C is a low level language. But for business apps you'd only need to manipulate stuff at the "Lego level", not the "molecular level". So why use it?
It's nearly impossible for a normal company to hire programmers who can _rapidly_ write reasonably bug-free C and maintain it AND _WILL_ do so.
And it usually takes a lot longer to get a new C program to decent standards than say a Python/Perl/Ruby program.
Getting a faster machine to run app = $$.
Weeks of programmer coding, testing and debugging = $$$$
Weeks of programmer NOT being able to do other stuff because busy rewriting old stuff = $$$$$$ - $$$$$$$$
Assuming a reasonable programmer ability, if you used a higher level language, changes would usually be done faster and with a better chance of correctness.
So my suggestion for most stuff nowadays is:
1) Use a high-level language. Usually the performance bottlenecks for a business app are not due to the language but to the architecture and design, or just plain I/O.
2) Spend a lot of time designing it right with the future in mind - getting time and resources to rewrite in a business environment is rare - so if you do it right you maximize the lifespan of your software before cruft builds up to extremely annoying or even dangerous levels.
3) Leave the low level stuff to the John Carmacks.
So what if those high-level languages are 20x slower than C? Unless totally braindead, they are a _CONSTANT_FACTOR_ slower, so if they are fast enough NOW, that's good enough. In 3-5 years, even if the performance requirements go up, new hardware is likely to run the programs at least 2-3x faster, and it's probably about time to replace the hardware anyway as a preventative measure. If you are lucky and wise and your architecture can scale OUT instead of UP, then that's a good situation to be in.
Java 5 (Score:0, Insightful)
Re:It makes sense with multi-core cpus (Score:3, Insightful)
Most of my experience comes from a roommate I had in college who used it. He liked showing off how he could play 4 mp3s simultaneously (the OS I was using, FreeBSD, had difficulty playing any MP3s at the time), but other than that he was always lacking for application support. I wanted to try it out once, but the hardware support was too picky for my meager budget--I would have had to buy a bunch of new hardware just to get stuff that was supported, which I couldn't afford.
Re:OMG Pervasive Multithreading like NT/2K/XP/Vist (Score:2, Insightful)
Am I completely wrong in this?
Re:It makes sense with multi-core cpus (Score:4, Insightful)
Oh bullshit. Perfect timing, too: not five minutes ago my work desktop locked up for 45-60 seconds opening a simple HTML e-mail in Outlook on XP. As has been depressingly common with Windows for ages, when it had difficulty reaching a remote source it simply ignored user input to concentrate on a network task presumably requiring well under 1% of the hardware's capabilities. Every Outlook window became unresponsive, as did server-hosted toolbars, etc. These are architectural design decisions, not "features", unless 32-bit colour is now an extreme Windows desktop feature.
Threads are not magic (Score:3, Insightful)
There is nothing magic about threads. Sometimes a multi-threaded process is the right approach, sometimes a multi-PROCESS application is better. Sometimes a process is intrinsically serial in nature and any gain from threading will be more than swamped by the overhead.
While sometimes obscured by terminology, threading isn't a single entity. For example, if a process mmaps in blocks of memory with MAP_SHARED, then creates pipe pairs and forks, it IS a multi-threaded application in some sense, but isn't what many think of as threaded. For that matter, the mmap step can be skipped for some server applications and still be multi-threaded.
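That fork-plus-shared-mmap flavor of "threading" can be sketched in a few lines of Python (POSIX only, illustrative; Python's anonymous mmap uses MAP_SHARED under the hood):

```python
import mmap, os, struct

# Anonymous mapping; mmap(-1, n) maps with MAP_SHARED, so the page
# stays shared with the child after fork() (POSIX only).
shared = mmap.mmap(-1, 8)
r, w = os.pipe()                       # the "pipe pair" for signalling

pid = os.fork()
if pid == 0:
    # Child "thread": write a result into the shared region, then signal.
    shared[:8] = struct.pack("q", 42)
    os.write(w, b"go")
    os._exit(0)

os.read(r, 2)                          # parent blocks until the child signals
value, = struct.unpack("q", shared[:8])
os.waitpid(pid, 0)
print(value)                           # 42
```

Two processes, a shared address-space region, and blocking communication: multithreaded in every sense that matters, without a single call to a thread library.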
A single process with a single thread in it MIGHT be somewhat multithreaded if it does its I/O through various asynchronous calls.
At the same time, an application that explicitly calls thread-creation functions might be effectively single-threaded if resource (lock) contention is such that no more than one thread ever runs at the same time. Another case of being effectively single-threaded is when an application is event-driven and, even though events are dispatched to worker threads, the thread typically completes its handling before the next event comes in. This will often be true of interactive applications, where the user won't likely keep issuing commands until they see the results of the previous command.
OTOH, things like image processing in GIMP could stand to be multi-threaded where the work is tiled and dispatched to worker threads.
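The tiled-dispatch idea is simple to sketch with a thread pool (Python, with a toy "image" of pixel values; the tile size and filter are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "image": a flat list of pixel values, split into tiles that
# worker threads can process independently (no shared writes within a tile).
pixels = list(range(16))
TILE = 4

def brighten(tile):
    return [p + 100 for p in tile]       # stand-in for a per-tile filter

tiles = [pixels[i:i + TILE] for i in range(0, len(pixels), TILE)]
with ThreadPoolExecutor(max_workers=4) as pool:
    done = pool.map(brighten, tiles)     # map() preserves tile order

result = [p for tile in done for p in tile]
print(result[:4])                        # [100, 101, 102, 103]
```

Because each tile is independent, no locking is needed inside the filter, which is why tiled image operations parallelize so cleanly.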
Until recently, multithreading has only been beneficial to a small percentage of users anyway; most people have had single-processor systems.
BeOS's legendary media handling was more the result of carefully designed and tuned media subsystems than pervasive threading.
Re:Java 5 (Score:0, Insightful)
No, not at all. Maybe my original comment had insufficient information - I teach them the same theory (concurrency is concurrency, after all, and an earlier exercise in the curriculum is just that - the development of a generic resource pool (which could host threads, connections, whatever)), but in many respects you have to do a great deal more work using the traditional Java (and most other languages') threading model to accomplish the same thing. This is not because some "shitty implementation" hides details, but rather because of elegant design decisions, such as the separation of business logic from execution/concurrency logic (which I guess you like to be all intertwined in your code).
With Java 5, you can write your tasks (logic) first, and then decide, as an architectural decision, whether you'd like them executed sequentially, concurrently, with various types of thread pools, etc.
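The same separation of task logic from execution policy exists outside Java; a rough Python analogue using concurrent.futures (illustrative, not the Java 5 API itself):

```python
from concurrent.futures import ThreadPoolExecutor

# Tasks (the business logic) are written first, with no knowledge of
# how, or on which threads, they will be run.
tasks = [lambda n=n: n * n for n in range(5)]

# The execution policy is chosen separately, as an architectural decision.
def run_sequentially(tasks):
    return [t() for t in tasks]

def run_in_pool(tasks, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]

print(run_sequentially(tasks))   # [0, 1, 4, 9, 16]
print(run_in_pool(tasks))        # [0, 1, 4, 9, 16]
```

Swapping the policy never touches the task code, which is the design point being made here.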
Also, the traditional limitations, such as always blocking when you try to acquire an object's lock and it isn't available, are significantly complex to work around, whereas the Java 5 concurrency utilities offer several different types of locks, and several ways of acquiring them.
And don't get me started on Conditions - the traditional model of Object.wait() and Object.notify() is simultaneously too simplistic and introduces far too much complexity when developing complex things (think of the controller for an aircraft's controls that has to deal with inputs from two pilots and an autopilot, all of which affect shared resources). Again, Java 5 Conditions provide an elegant platform within which to solve such a problem.
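For reference, the classic wait/notify pattern being criticized looks like this as a minimal producer/consumer sketch (Python's threading.Condition, used here only to illustrate the shape of the pattern):

```python
import threading

buf = []
cond = threading.Condition()

def producer():
    with cond:
        buf.append("input")
        cond.notify()            # wake a waiter on this condition

def consumer(out):
    with cond:
        while not buf:           # re-check the predicate: guards against
            cond.wait()          # spurious or stale wakeups
        out.append(buf.pop())

out = []
c = threading.Thread(target=consumer, args=(out,))
p = threading.Thread(target=producer)
c.start(); p.start()
c.join(); p.join()
print(out)                       # ['input']
```

Fine for one buffer and one predicate; the complaint above is that once several parties and several predicates share one monitor (two pilots plus an autopilot), this single wait/notify channel becomes the wrong tool.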
Using something that "does the work for you" is not "shitty" - it prevents you from having to sit all day and debug multithreaded code. In either case, you have to understand what you are doing, but let's face it: multithreading is actually very simple on its own - it's the implementation, and how you use it to accomplish difficult things, that are exceedingly hard. I'm glad you think you can write a better thread pool than the maintainers of the Java programming language. You should join up at http://dev.java.net/ [java.net], they can use you! (It's open source now through the GPL, remember?)