
Will Pervasive Multithreading Make a Comeback? 657

Posted by kdawson
from the let-it-be dept.
exigentsky writes "Having looked at BeOS technology, it is clear that, like NeXTSTEP, it was ahead of its time. Most remarkable to me is the incredible responsiveness of the whole OS. On relatively slow hardware, BeOS could run eight movies simultaneously while still being responsive in all of its GUI controls, and launching programs almost instantaneously. Today, more than ten years after BeOS's introduction, its legendary responsiveness is still unmatched. There is simply no other major OS that has pervasive multithreading from the lowest level up (requiring no programmer tricks). Is it likely, or at least possible, that future versions of Windows or OS X could become pervasively multithreaded without creating an entirely new OS?"
  • by Thaidog (235587) <slashdot753@nym.hus h . c om> on Sunday July 15, 2007 @03:06PM (#19869977)
    OSes like BeOS and Zeta are ahead of their time. With 8-core CPUs coming out soon, it just makes sense with this technology... no programming tricks are needed.
    • Re: (Score:3, Insightful)

      by man_of_mr_e (217855)
      The important thing to keep in mind is that BeOS was not a mature OS. In many ways, they had done just enough to get it going. My guess is that once they had the resources, it would have fattened up to the size of OSX or Windows easily, and all that performance you saw when it was young and new would go out the window.

      BeOS had a lot of problems as well, for instance the OS was written in C++, which meant that when you wrote drivers, they had to be in C++. The software loaded fast, because it wasn't very
      • Re: (Score:3, Informative)

        by dlockamy (597001)
        huh? what BeOS are you talking about?

        libroot was in C, the API was all C++, but there was still a nearly-POSIX subsystem in place via libroot.
      • Re: (Score:3, Informative)

        by Jeremi (14640)
        BeOS had a lot of problems as well, for instance the OS was written in C++, which meant that when you wrote drivers, they had to be in C++.


        This is incorrect -- kernel code under BeOS was written in C. It was possible to use C++ in BeOS kernel mode code if you knew what you were doing and what to avoid (e.g. exceptions) but it wasn't recommended.


        If you wanted to write a decent userland app in BeOS, OTOH, C++ was your only real choice.

      • Re: (Score:3, Insightful)

        by jandrese (485)
        IMHO, the primary reason BeOS died is that they never got a real web browser working on it, at least not until it was too late. The web was exploding right around that time and the lack of a web browser was the kiss of death. Worse, it was a pain in the rear to compile Open Source apps on BeOS, the library support was incomplete and apparently there was some weirdness with the socket layer (which you need when you write an internet application). There were efforts to port open source projects to BeOS, bu
  • by Anonymous Coward on Sunday July 15, 2007 @03:06PM (#19869983)
    Back in the OS/2 days, we could format 72 floppies simultaneously with no slowdown to our 14.4 connections!
    • Amiga beat them all (Score:4, Informative)

      by Anonymous Coward on Sunday July 15, 2007 @03:37PM (#19870265)
      Seriously, back in the mid-1980s I used to love putting PC and Mac owners to shame by showing them literally dozens of open, active graphics applications displaying animations, while formatting a floppy disk and downloading a file online, and still having a normal, responsive system with no hiccups, all on a computer with only 128MB RAM.

      Amiga was a multi-tasking, multi-threaded OS, with multiple processors (graphics and I/O were separate co-processors operating on opposite clock cycles from the CPU, and the graphics co-processor could be dynamically loaded with special executable code).

      It was so far ahead of its time that people today still don't believe it existed in the '80s when I tell them about it.

      But just because it was better than everything else did not assure its success. A concept the BeOS fanbois might be familiar with.
      • by nogginthenog (582552) on Sunday July 15, 2007 @03:54PM (#19870397)
        128MB? In the mid 80s? Maybe you mean 4Mb :-)
        • by wall0159 (881759) on Sunday July 15, 2007 @10:31PM (#19872881)
          "128MB? In the mid 80s? Maybe you mean 4Mb :-)"

          Actually, I think the GP probably meant 128KB.

          My parents bought a 486SX33 in 1994, and that had 4MB, but in 1985....
          • Re: (Score:3, Informative)

            by CarpetShark (865376)
            Wrong. The very first Amiga was 256KB; the most popular varieties were 512KB-16MB (averaging 1-2MB, probably).
      • by GreggBz (777373) on Sunday July 15, 2007 @04:11PM (#19870523) Homepage
        Hey, I'm all for Amigas, but in the mid-Eighties, if you had 128MB of RAM and were downloading a file online, you must have been from the future.
        What the heck are you talking about?

        Just to be a little more correct here, I'm no hardware engineer but will try to be far more accurate.

        The Amiga had a great messaging system in its OS: you could easily pass messages to other windows and programs in Intuition. Further, you had all that ARexx stuff, and you could script programs to interact very easily with it. Basically, every program could listen on its own ARexx port for commands from other programs. Of course, there was the poor (read: no) memory protection, which made things very unstable if you did not know what you were doing. Despite all this cool stuff, the OS was actually the weakest link. It was rushed. I remember reading specs on the original intended, but non-implemented, file system, and it was about as robust as a single-user file system could possibly get.
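        The ARexx-style scripting described above (every program listening for commands from other programs) can be approximated with per-program command queues. This is an illustrative Python analogy, not the actual ARexx API; the names are made up:

```python
import queue
import threading

# Each "program" owns a command port, roughly like an ARexx port.
ports = {"player": queue.Queue()}
log = []

def player():
    """A toy 'application' that services scripted commands from its port."""
    while True:
        cmd = ports["player"].get()   # block until another program sends a command
        if cmd == "QUIT":
            break
        log.append("player ran: " + cmd)

t = threading.Thread(target=player)
t.start()
ports["player"].put("PLAY track1")    # any other program can script the player
ports["player"].put("QUIT")
t.join()
```

The point of the pattern is that the sender never touches the receiver's state directly; everything goes through the message port, which is what made cross-program scripting safe even without memory protection between cooperating tasks.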

        You also had preemptive multitasking (true preemptive, not co-operative) and a fantastic unified memory architecture with a very fast blitter. Another nice thing was that the kernel was contained in ROM, so it booted more quickly than any other platform of its day, and still faster than most today. And all those chips played nice and were synced to an internal clock that ran on NTSC (or PAL) timings. This, of course, meant that interrupts worked seamlessly, and the chipset was handily compatible with video signals from television equipment. That last thing turned into an incredible boon for the entire film and television industry.

        The strength of the Amiga was its bus and its architecture. They absolutely nailed so many things in its design; it really was a thing of beauty.
        • Re: (Score:3, Informative)

          by Anonymous Coward
          Corrected the memory size in another reply. The base system had 256 kilobytes of RAM. Sorry for the mix-up; I'm so used to putting MB after memory sizes. ;-)

          As for downloading files online, back then "online" meant downloading from BBS systems. The closest thing to the internet back then for the average consumer was FidoNet.

          http://en.wikipedia.org/wiki/Fidonet [wikipedia.org]

          And yes, the lack of an MMU, as well as a lack of FPU, in the CPUs used in the early models was a shame. But it did keep the price of the system wit
      • by man_of_mr_e (217855) on Sunday July 15, 2007 @04:47PM (#19870783)
        Apart from the corrections already brought up, the Amiga was rife with limitations and problems of its own. It worked great in the narrow range it was designed for, but had all kinds of other issues. For example, upgrading the video was a hack job that usually required patching the ROM libraries with ones that knew about the new video hardware. It was tightly integrated, which meant doing anything outside what it was designed for was often difficult and expensive.

        And I was an Amiga fanatic. And while I held out hope that Commodore would get their act together and provide the features that were rumored and needed (DSPs, retargetable graphics, etc.), I always knew it would never happen. If only Dave Haynie had been allowed to do what he wanted; but then again, that probably would have made it too expensive for people to buy.
  • by Joce640k (829181) on Sunday July 15, 2007 @03:08PM (#19869997) Homepage
    Microsoft's plan is for us to keep adding CPU cores in the hope that at least one of them won't be deadlocked at any given moment in time.

  • by cmowire (254489) on Sunday July 15, 2007 @03:11PM (#19870021) Homepage
    Given that most machines are already starting to come default with 2 cores, and you can fit 8 cores (2 CPUs) in a nice desktop package, it's pretty clear that it's going to be a requirement.

    It's not entirely the operating system's fault. The biggest advance of BeOS wasn't necessarily just that the kernel was designed to multithread nicely, Be also did their best to force you to write multithreaded code when you wrote a Be application.

    I suspect that the first thing that's going to become clearly a performance bottleneck is the applications. And that's not going to be fun, because there's a lot of applications out there and you can't just magically recompile them with threads turned on and see much difference. You need to synchronize the data structures for multiple threads touching them at the same time and split things up so that you can actually keep a decent number of cores busy. This is not trivial when you are talking about an app that somebody wrote single threaded in the mid 90s without any notion that threads might be useful later.
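    The synchronization and work-splitting described above can be sketched in a few lines. A minimal Python illustration (the function and its structure are invented for the example): guard the shared result with a lock, and split the input so each thread has independent work:

```python
import threading

def parallel_sum(numbers, workers=4):
    """Split a list across threads, guarding the shared total with a lock."""
    total = 0
    lock = threading.Lock()

    def work(chunk):
        nonlocal total
        s = sum(chunk)   # the parallel part: touches no shared state
        with lock:       # the serial part: the shared total needs the lock
            total += s

    step = (len(numbers) + workers - 1) // workers
    threads = [threading.Thread(target=work, args=(numbers[i:i + step],))
               for i in range(0, len(numbers), step)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

Both steps, deciding where the lock goes and deciding how to carve up the data, are exactly the design work a mid-90s single-threaded app never had to do, which is why "just recompile with threads on" buys nothing.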
    • by larien (5608) on Sunday July 15, 2007 @03:20PM (#19870127) Homepage Journal
      Multithreaded CPUs are becoming more and more common, yes: look at Sun's Niagara with 8 cores and 4 threads per core (looks like 32 CPUs to the OS...). In the consumer desktop space, Intel and AMD both have 4-core CPUs either on the market or coming soon.

      As for applications: if you're running 5 applications, multiple cores will help without recompiling, assuming the kernel's scheduler is reasonably sane, and kernel writers are getting smarter at writing schedulers. If you are running one single-threaded app, multiple cores aren't going to help you much at all. Of course, the other advantage of multithreaded apps (even on a single core) is that if the app is blocking on one thing (I/O is the most common cause of blocking), the other threads can carry on doing work.
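      The blocking-I/O point is easy to demonstrate even on one core: one thread makes progress while another is parked on a slow operation. A small Python sketch, where a 0.2-second sleep stands in for real I/O:

```python
import threading
import time

results = []

def slow_io():
    """Stand-in for a blocking read: the OS parks this thread while it waits."""
    time.sleep(0.2)
    results.append("io done")

def compute():
    """CPU work that proceeds while the other thread is blocked."""
    results.append(sum(range(1000)))

io = threading.Thread(target=slow_io)
io.start()
compute()   # finishes long before the I/O thread wakes up
io.join()
```

The compute result lands first because blocking in one thread never stalls the other, which is the single-core payoff of threading quite apart from any multi-core speedup.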

    • Re: (Score:3, Informative)

      by kripkenstein (913150)

      Multithreaded won't be optional any more.[...] Given that most machines are already starting to come default with 2 cores, and you can fit 8 cores (2 CPUs) in a nice desktop package, it's pretty clear that it's going to be a requirement.

      Sure, the trend towards more cores does imply that an inherently multithreaded OS makes more sense. But on the other hand, the main advantage heard about such pervasive multithreading is 'better responsiveness', and I am not sure that modern OSes are 'unresponsive' - curre

      • Re: (Score:3, Informative)

        by nomadic (141991)
        current Linux desktops seem very responsive even when running multiple apps

        I'm guessing you never used BeOS; by comparison Linux looks weak in terms of responsiveness.
    • Re: (Score:3, Interesting)

      by TheRaven64 (641858)
      How many applications do you run that peg a single core at 100%? I use three:
      • GCC.
      • Final Cut Express.

      iTunes used to be on that list when ripping CDs, but since my last upgrade the CD drive has become the bottleneck. GCC doesn't need to be multithreaded, because I can always add a -j option to my make command and run one instance of it on each core (and a floating spare one or two for when one CPU is waiting for I/O). Final Cut can consume pretty much as much CPU power as it's possible to throw at it,

  • I hope so (Score:4, Interesting)

    by datapharmer (1099455) on Sunday July 15, 2007 @03:11PM (#19870033) Homepage
    I still hate that BeOS went belly up. It was a great operating system but was crushed before it ever got very far. The hardware support was also amazing: it would run winmodems and other Windows-only hardware. I've never tried writing an operating system, but I hope some of the features from BeOS make it into Linux/OS X. One interesting thing to note is that Be was originally a Mac alternative and was only later moved to x86.

    Another cool operating system to check out is MenuetOS [menuetos.net]... it is written entirely in assembly! Very fast boot times, and the GUI and everything fits easily on a floppy!
    • Re: (Score:3, Interesting)

      by Bryan Ischo (893) *
      Well, just for another perspective to balance this out, I found BeOS's hardware support to be pretty poor, and the operating system pretty much left you high and dry if your hardware wasn't perfectly supported. To wit, I tried to get a modem to work with BeOS back in the day (1999 or so), and if I recall correctly (it's been a long time), I was getting very generic error dialogs ("Error 0xFFFFFFFF occurred") with no other useful diagnostics whatsoever. I vaguely remember playing with some settings and getti
  • by mwadams (520080) on Sunday July 15, 2007 @03:13PM (#19870053)
    It isn't really the pervasive multithreading that does the job on responsiveness for BeOS, and nor does having the "two threads per window" thing (which I think is what the poster is referring to in terms of "pervasive multithreading") avoid "programmer's tricks"; in fact, you had to be just as careful as if you were developing on Windows and spun up a background thread. One issue for BeOS developers was the amount of hard thinking you had to do to perform simple tasks in a pervasively multithreaded environment, when you're still having to deal with all the pitfalls of lock-based programming.
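    Those lock-based pitfalls are easy to hit. The classic one is two threads taking two locks in opposite orders, which deadlocks; the standard discipline is to acquire locks in one fixed global order everywhere. A minimal Python sketch of the safe pattern (names invented for the example):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
hits = []

def safe_update(name):
    # Deadlock avoidance: every thread acquires lock_a before lock_b.
    # (If one thread took a-then-b while another took b-then-a, each could
    # end up holding one lock and waiting forever on the other.)
    with lock_a:
        with lock_b:
            hits.append(name)

threads = [threading.Thread(target=safe_update, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The discipline is trivial in ten lines and brutally hard to enforce across a large app with locks scattered through dozens of modules, which is the "hard thinking" being described.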

    However, taking only a few cycles to spin up or kill a thread (rather than the 10,000-plus it takes Windows), or to perform a context switch, is a significant help. (There used to be an interesting article benchmarking those things on the Be website, but I can't find it any more.)
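    The spin-up cost is easy to measure on whatever OS is handy. The numbers vary wildly by platform and say nothing about BeOS's actual figures; this only shows the measurement:

```python
import threading
import time

def spawn_cost(n=100):
    """Average wall-clock cost of creating, starting, and joining one no-op thread."""
    start = time.perf_counter()
    for _ in range(n):
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()
    return (time.perf_counter() - start) / n

cost = spawn_cost()   # seconds per thread; typically tens of microseconds today
```

When that per-thread cost is low enough, "spawn a thread per window, per sound, per request" stops being an extravagance and becomes an ordinary design choice, which is the design bet BeOS made.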

    MS have also added some more interesting stuff to the scheduler in Vista, which helps with uninterrupted sound or movie playback, so at least some of that stuff is possible without a complete redesign.
    • Re: (Score:3, Informative)

      by PCM2 (4486)

      MS have also added some more interesting stuff to the scheduler in Vista, which helps with uninterrupted sound or movie playback, so at least some of that stuff is possible without a complete redesign.

      Really? Man, tell that to my box running Vista Media Center. Media Center has a helpful (cough) habit of capturing the mouse cursor to the screen running the Media Center app. Hit the Windows key to break out of it and your video playback is interrupted for as much as 20 seconds while Windows struggles to

  • BeOS rocked! (Score:4, Interesting)

    by Anonymous Coward on Sunday July 15, 2007 @03:16PM (#19870081)
    A few years ago, on a dual Celeron 366MHz with 256MB of RAM, I went out of my way to attempt to crash it. I opened about 120 OpenGL demos with only a minor decrease in performance. After inheriting that mainboard, processors, and RAM from my uncle and then increasing it to 512MB, the same test ground both FreeBSD and Linux to a halt.
  • Better than xubuntu (Score:4, Informative)

    by fishthegeek (943099) on Sunday July 15, 2007 @03:18PM (#19870105) Journal
    for older (P2 & P3) laptops. I have the opportunity several times a year to receive old laptops to use to teach my students. Whenever I need to, I use BeOS Max on the machines, and it is just amazing to watch how efficient and responsive BeOS really is.

    Check out BeOS Max [beosmax.org]

    BeOS is still a lot of fun on older hardware.
  • I don't get it (Score:5, Insightful)

    by nanosquid (1074949) on Sunday July 15, 2007 @03:26PM (#19870167)
    The ability to play eight movies simultaneously is a bad way of determining OS thread performance. Most modern operating systems have efficient, low-overhead threads. How well they play multiple videos depends much more on the display pipeline, the codec, and how the players adapt to load. To say anything about system performance, you'd need to know frame rate, resolution, codec, postprocessing options, etc.

    Overall, I really don't see anything in BeOS that you don't get as well or better in a modern Linux system. BeOS has some efficiency gains from having been developed from the ground up with little need for backwards compatibility, but that's probably also why it wasn't successful in the market. And threading and scheduling in particular are highly efficient and mature in Linux.

    (Note that OS X is basically a hacked NeXTStep; the NeXTStep kernel is Mach, the same kernel that is the basis of the GNU Hurd.)
    • Re: (Score:3, Insightful)

      by moosesocks (264553)
      No, it's not a good rubric for a computer scientist to compare the schedulers of two different operating systems.

      However, from the user's perspective, it's a very big deal. Having used BeOS a few years ago on what was very modest hardware (even at the time), I can easily say that it felt like it was the fastest and most responsive operating system that I've ever used.

      Even Linux on modern hardware doesn't come close to the snappiness of BeOS. You also can't beat the fact that it could boot from BIOS to t
    • Hacked? Mac OS X is Nextstep, except Nextstep 4.0 was called Mac OS X 10.0 for marketing purposes. All that was changed was the UI and graphics engine. And it was ported to a different processor. And another API was added. And a VM was added for Mac OS 9. Other than that it was exactly the same OS--but really, I'm sure you can find two Linuxes that are more different from each other than Nextstep and Mac OS X are.
  • Haiku (Score:5, Funny)

    by Keruo (771880) on Sunday July 15, 2007 @03:26PM (#19870169)
    Is not haiku(beos) open source?
    Take the features and port to linux.
    New scheduler rules them all.
    Speed improvements would increase the desktop performance.
    As they would increase performance with services.
  • by gnetwerker (526997) on Sunday July 15, 2007 @03:29PM (#19870185) Journal
    Recall that this was the effect of Intel's NSP (the ill-named "Native Signal Processing"), a real-time multi-thread scheduler inserted at the device-driver level of Windows. Combined with something called VDI (Video Direct Interface), which allowed applications to bypass the Microsoft GDI graphics layer in certain ways, this allowed multiple video, graphics, and audio streams, mixed and synchronized, on circa-1993 computers, something largely not even possible today. While NSP was intended primarily for media streams, its technology was broadly applicable to more responsive and vivid interfaces.

    The result was Microsoft's threat to cut off Intel from future Windows development, and specifically to withhold 64-bit support from Itanium, to more publicly support AMD (which they did, for a while), and to threaten any OEMs using the code with withdrawal of Microsoft software support. Much of this was detailed in the Microsoft antitrust trial and the accompanying discovery documents. Under this pressure, Intel abandoned the software, transferring VDI to Microsoft (it formed the core of what was later called DirectX) and outright killing NSP. Andy Grove admitted to Fortune magazine: "We caved." (http://cyber.law.harvard.edu/msdoj/transcript/summaries2.html)

    This is not to suggest that this was the best or only way to do this, or that others haven't done it and done it well. But despite the best efforts of Linus and friends, Windows remains the dominant desktop OS, and Windows continues to be built on a base of 1970s-era operating system principles. Microsoft has built, and continues to build, substantial barriers to anyone trying to substantially modify the behaviour of Windows at the HAL/device layer. Whether VMware and equivalent virtualization technologies are finally a camel's nose under the tent edge remains to be seen.

    But as long as Windows remains the dominant desktop OS, you can expect the desktop to lag 10-15 years (at best) behind the state of the art in OS, GUI, and real-time developments.
    • by CajunArson (465943) on Sunday July 15, 2007 @03:47PM (#19870331) Journal

      Windows continues to be built on a base of 1970s-era operating system principles.


      Thank Gawd Linux isn't using any relic of an OS [wikipedia.org] that started in the 1970s as its base! No, no, a 100% clean, legacy-free 21st-century implementation there.

      On a more serious note, I used BeOS myself back in the day. It was definitely more responsive than Win98, but not everything was perfect either. The networking implementation absolutely sucked. Oh, it had lots of threads; it's just that the threads were not all that beneficial to actual performance. The networking stack, and some other parts of the system that handle streams of many relatively similar tasks, would probably parallelize better via a pipeline scheme, where parallelism is achieved by having independent stages of the pipeline run in parallel (much as CPUs break up the task of executing instructions into a pipeline). The type of parallelism that works best can depend on the application, and a one-size-fits-all philosophy is not usually correct no matter what the solution is.

  • Yes (Score:5, Interesting)

    by MarkPNeyer (729607) on Sunday July 15, 2007 @03:31PM (#19870205)

    I'm a CS grad student at the University of North Carolina. I've never used BeOS, but I'm confident that responsiveness will increase, because the work I'm doing right now is intended to address this very issue.

    The thing that makes multithreaded programming so difficult is concurrency control: it's extremely easy for programmers to screw up lock-based methods, deadlocking the entire system. There are newer methods of concurrency control that have been proposed, and the most promising (in my opinion) is 'Software Transactional Memory', which makes it almost trivial to convert correct sequential code to code that is thread-safe. Currently, there are several 'high performance computing languages' in development, and to my knowledge, they all include transactional memory.
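    The retry-on-conflict idea behind STM can be sketched in a toy form: run the transaction against a snapshot, and commit only if nobody else committed in the meantime. This is an illustrative single-variable version, not a real STM; real implementations track whole read/write sets per transaction:

```python
import threading

class TVar:
    """A toy 'transactional' cell: commits succeed only if the version seen is still current."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

    def atomically(self, update):
        while True:                        # retry loop: the heart of the STM idea
            with self._lock:
                snapshot, seen = self.value, self.version
            new = update(snapshot)         # run the transaction against the snapshot
            with self._lock:
                if self.version == seen:   # no one else committed in between
                    self.value = new
                    self.version = seen + 1
                    return new
            # else: conflict detected; loop and retry with a fresh snapshot

counter = TVar(0)

def bump():
    for _ in range(1000):
        counter.atomically(lambda v: v + 1)

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The programmer writes `lambda v: v + 1` as if the code were sequential; the retry machinery, not the programmer, handles the interleaving, which is exactly the "almost trivial conversion" claim.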

    The incredible difficulties involved in making chips faster are precipitating a shift to multicore machines. The widespread prevalence of these machines, coupled with newer concurrency control techniques will undoubtedly lead to an increase of responsiveness.

    • by kimanaw (795600) on Sunday July 15, 2007 @03:56PM (#19870409)
      Unfortunately, STM is very resource heavy and very slow. Yes, it abstracts away lots of issues, but that abstraction comes at a significant cost. In most instances, STM is slower than "classic" locking schemes until 10+ cores are available. (FYI: University of Rochester [rochester.edu] has a nice bibliography for STM info)

      If/when the CPU designers currently screaming "more threads, more threads!" at us coders get around to implementing efficient h/w transactional memory, painless fine grain parallelism may become a reality. Until then, STM may be fine for very large applications on systems with huge memories and lots of cores, but probably isn't an option for the average desktop.

      But STM does present some intriguing possibilities for distributed parallel environments (think STM + DSM).

    • Re: (Score:3, Insightful)

      by chiph (523845)
      Forgive me, but STM is a crutch.
      Yes, it will help the programmer masses not shoot themselves in the foot, but the overhead in STM is phenomenal, and you're relying on Moore's Law to save you.

      If you want a responsive system (running on thread-unfriendly OSs like Windows) there's no substitute for knowing what you're doing.

      We currently have some offshore developers who are peppering their code with Thread.Sleep() statements, with sleep values selected so that the code kinda-sorta works on their machines. The
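      The usual replacement for that sleep-value guesswork is an explicit signal: the consumer blocks until the producer says the data is ready, instead of sleeping for a hand-tuned interval. A minimal Python sketch (the offending code above is .NET's Thread.Sleep; the pattern is the same in either language):

```python
import threading

data = {}
ready = threading.Event()

def producer():
    data["result"] = 42
    ready.set()   # signal, instead of hoping the consumer slept long enough

def consumer():
    # Instead of time.sleep(0.5) and praying, block until signalled
    # (with a timeout as a safety net against a dead producer).
    ready.wait(timeout=5)
    return data["result"]

t = threading.Thread(target=producer)
t.start()
value = consumer()
t.join()
```

The signalled version is correct on any machine at any load, whereas a tuned sleep is only ever correct on the machine it was tuned on.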
  • by Tony (765) on Sunday July 15, 2007 @03:32PM (#19870219) Journal
    I think both NeXTStep and BeOS are living (dead) proof that Microsoft set the computer industry back over a decade. It wasn't until MS-Windows 2k that MS-Windows was even close to NeXTStep in features, and the cost was a lack of simplicity. (The only downside to the NeXT: Netware networking sucked. But Netware networking sucked on everything but DOS, so I guess it's no surprise.)

    Same with BeOS. It had many features, including stability, ease-of-use, and responsiveness that MS-Windows can't seem to find today. Granted, neither can GNU/Linux or Mac OSX, but since they are hardly the predominant OS, I can't really fault them to the same extent.

    Anyway, it's an old rant. Never mind the ravings of an oldster who never got over the sopranoing Microsoft gave DR-DOS. Those like me are just bitter our careers turned from fun and interesting to tedious and dull because of Microsoft. Y'all go on and play with your shiny new toys. No, really, don't mind me. I'm just gonna sit up here on my porch and get rip-roaring drunk and talk about the old days, whether anybody's listening or not.
    • IBM made a decision in 1980 to go with the Intel 8088 (8/16 bit) processor instead of the Motorola 68000 (16/32 bit) processor. At the time, the Motorola processor was designed to be the processor of the future. On the other hand, the 8088 was intended to be almost compatible with 8080A assembly code. This created the need for the 8088 segmented architecture, and segments suck.

      The use of segment registers set PC development back over a decade. Essentially, all the 80's was spent fighting segmentation

    • Re: (Score:3, Informative)

      by drsmithy (35869)

      It wasn't until MS-Windows 2k that MS-Windows was even close to NeXTStep in features, and the cost was a lack of simplicity.

      Depends on what you're measuring. NT had (still has, comparing to OS X) better internals - SMP, fine-grained locking, etc, etc.

  • by Tablizer (95088) on Sunday July 15, 2007 @03:43PM (#19870307) Journal

    [BSOD]

      . , . . , . . [BSOD]

      - . [BS0D]

    [BSOD]

      . . , . [BS0D]

      - . [BSOD]
  • Ummm... (Score:4, Insightful)

    by Bluesman (104513) on Sunday July 15, 2007 @03:58PM (#19870425) Homepage
    The big advantage with threads is that the TLB doesn't have to be flushed on a context switch, since they share the same address space. This has great performance advantages over processes, but you lose all of the advantages of protected virtual memory, hence the need for locks, mutexes, critical sections, etc. Threads are actually a step backward from a reliability/security standpoint.
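    The shared-address-space trade-off is easy to demonstrate: every thread sees the same objects with no copying and no IPC, which is both the performance win and the reason locks become necessary (a child process, by contrast, would get its own copy of the data). A minimal Python sketch:

```python
import threading

shared = []            # one address space: every thread sees this same list

def worker(i):
    shared.append(i)   # no copying, no IPC: this is the speed win...
                       # ...and the hazard: any thread can mutate it, hence
                       # the locks, mutexes, and critical sections above

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With processes, the same three writers would each append to a private copy, and getting the results back would require explicit shared memory or message passing, slower but with the kernel enforcing the isolation.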

    BeOS was a single-user system, if I recall, so that partially reduces the need for the security features that having multiple processes provide.

    But beyond that, modern OS's seem to offer a lot more flexibility. They have processes if you want separation of address space, shared memory if you need better performance for communication between threads, threading if you want a shared address space, and user-level threading libraries for the ultimate in performance if you're willing to spend the time to code it properly.

    Being able to watch eight movies at a time is a neat trick, but it's not particularly useful, especially when we'll soon have processors with a ridiculous amount of cores on them. With a large number of cores, the overhead of a process context switch is hardly more than that of a thread, since a CPU intensive process can run on its own core.

    I think the future of OS's is more likely to be in micro-kernel architectures that can move processes around efficiently to balance the processing load between many CPUs. Or a hybrid microkernel/monolithic architecture that could run the big kernel on one CPU for tasks that require responsiveness, and the rest of the kernel processes balanced between remaining CPU's for throughput.

  • by Proudrooster (580120) on Sunday July 15, 2007 @04:35PM (#19870709) Homepage
    Ask yourself this question: "Is high-performance computing really the goal?" Or is herding the consumer to newer, shinier hardware the goal? The amount of computing power found in a typical Pentium III computer sitting out on someone's curb far exceeds the needs of most users.
    • by blankaBrew (1000609) on Sunday July 15, 2007 @05:29PM (#19871073)
      I'm so sick of hearing that most users don't need anything greater than, say, a P3. That is bogus. Users today do more things with their computers than was done in the P3's day. Today, people retouch photos and import them into a library with thousands of photos, they render home movies taken from their camcorders, they run movies (QuickTime, Flash, etc.) at high resolutions and at full screen, they rip CDs, they sometimes rip DVDs, they do video teleconferencing, and so much more. Heck, you need a decent system to render most popular websites today. Here's my generalization: most Slashdotters don't give "Joe Six-Pack" enough credit. He may not know how it works, but he uses more features than you think. The fact is that the software has gotten easier and more powerful, thus allowing people to use more and more features. To say that most users don't need anything more than 6-year-old technology is insulting to software developers. It essentially is saying that these developers have been wasting their time for the past 6 years.
  • by forgoil (104808) on Sunday July 15, 2007 @04:54PM (#19870821) Homepage
    Programming languages like Haskell and Erlang have very few problems using massive numbers of CPUs/cores. Look them up and learn about them, and you'll see that they can, without any fuss, spread over many, many threads without any special code at all.
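    The "no special code" style being described is closest, in mainstream terms, to mapping a pure function over a worker pool: because the function touches no shared state, the runtime can spread the calls across threads without any locks in user code. A rough Python analogy (this is not Erlang or Haskell semantics, just the shape of the idea):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x   # pure function: no shared state, so no locks needed

# The pool, not the programmer, decides how the calls spread over threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(square, range(8)))
```

In Haskell and Erlang, purity or share-nothing processes make essentially all code look like `square` here, which is why those runtimes can parallelize it without the programmer writing any threading code at all.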

    Well, that's it, read up and then maybe we can get some more interesting Slashdot postings about new computers:)

    And it is quite amazing that Sun hasn't picked up on this. Their little Java thingie doesn't scale that well after all:)
  • by TheLink (130905) on Monday July 16, 2007 @02:44AM (#19874015) Journal
    Let me know how long it takes to start OpenOffice on BeOS.
  • by argent (18001) <peter@NOsPam.slashdot.2006.taronga.com> on Monday July 16, 2007 @09:00AM (#19875747) Homepage Journal
    Back when BeOS was still cool, and Rhapsody was hot, and NT was still counting by numbers instead of names, I installed BeOS, Rhapsody DR1, and NT 4 on the same hardware... a Pentium with 16MB of RAM... not exactly state of the art but not ridiculous for the time either.

    BeOS showed no exceptional capabilities. Both Rhapsody and NT were easily able to run multiple concurrent applications without slowdown, and BeOS was at least as often bottlenecked on I/O.

    BeOS was certainly a competent OS design, but the "remarkable" performance was only remarkable when it was compared with the classic Mac OS and mainstream Windows 9x. With those as the "competition", the legend of BeOS has grown over the years, but any contemporary preemptive multitasking OS could do as well.
