Who Will Benefit From Hyper-Threading?

qoncept asks: "I've read a number of reviews of Intel's new 3.06GHz Pentium 4 with Hyper-Threading, and it strikes me that the chip is being reviewed as an option for -- and perhaps even marketed to -- the wrong people. As a business move, Hyper-Threading may not have been worth Intel's investment. Most reviews show that in single-threaded benchmarks there is literally no benefit to HT, and in multithreaded workloads the results are moderate at best. Yet, of course, the reviews keep saying the machine feels better. There you go -- it won't increase your productivity by compiling your Java faster. But, price point permitting, it may be exactly what the casual home user wants: save money by getting, say, a 3.06GHz CPU with HT instead of a 3.6GHz CPU without it, yet have Internet Explorer, mIRC, AIM and Word all run just as 'comfortably.' The benchmarks don't say much for HT, but I'm at least slightly excited about it. What about everyone else?"
  • Developers (Score:2, Interesting)

    I really look forward to being able to run multi-threaded apps on the average user's desktop. There are a lot of advantages to having two lines of logic running concurrently. Although there are few performance benefits right now, I'm sure developers will appreciate the ubiquity of SMP and all of the nifty programming techniques that come with it.
    • Re:Developers (Score:3, Informative)

      by davincile0 ( 168775 )
      Neither SMP nor hyperthreading is a prerequisite for writing multi-threaded programs. They run just fine on single-CPU machines, so long as the thread library and the OS's scheduler do a decent job. It's nice when you can run multiple threads concurrently on multiple CPUs, or with hyperthreading, but it's a stretch to say "now we can finally start writing multithreaded programs."
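
      For instance, here's a minimal sketch using plain POSIX threads (assuming pthreads; nothing HT-specific). The same source is correct on a uniprocessor, an SMP box, or an HT chip -- all that changes is whether the scheduler time-slices the two workers or actually runs them at the same time:

        /* two CPU-bound workers; correct on one CPU, faster with two (or with HT) */
        #include <pthread.h>
        #include <stdio.h>

        static void *worker(void *arg)
        {
            long id = (long)arg;
            unsigned long sum = 0;
            for (unsigned long i = 0; i < 100000000UL; i++)   /* busy work */
                sum += i;
            printf("thread %ld done (sum=%lu)\n", id, sum);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[2];
            for (long i = 0; i < 2; i++)
                pthread_create(&t[i], NULL, worker, (void *)i);
            for (int i = 0; i < 2; i++)
                pthread_join(t[i], NULL);
            return 0;   /* build with: gcc -O2 threads.c -lpthread */
        }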
      • To quote: "Simultaneous multithreading is a processor design that combines hardware multithreading with superscalar processor technology to allow multiple threads to issue instructions each cycle. Unlike other hardware multithreaded architectures (such as the Tera MTA), in which only a single hardware context (i.e., thread) is active on any given cycle, SMT permits all thread contexts to simultaneously compete for and share processor resources. Unlike conventional superscalar processors, which suffer from a lack of per-thread instruction-level parallelism, simultaneous multithreading uses multiple threads to compensate for low single-thread ILP. The performance consequence is significantly higher instruction throughput and program speedups on a variety of workloads that include commercial databases, web servers and scientific applications in both multiprogrammed and parallel environments." http://www.cs.washington.edu/research/smt/

        Yeah, when I say multi-threaded, I mean multi-threaded, as opposed to an OS hack which makes it appear to be multi-threading. The Windows threading model is just peachy for running Excel and Word at the same time, but as the time window you're operating in decreases, it is no longer useful. Single-processor pseudo-multi-threading is more useful for UI and such, but true multi-threading is a new way of thinking about solving some of the more interesting problems in computer science today. Just because we had an OS hack to make it look like programs were multithreading does not diminish the impact of true multi-threading.

        • The use of the term "thread" in the statements you quote refers more to a "thread of execution" than specifically to a thread in a multi-threaded app. Their statements were intended to (and do) apply to multiple threads in one multithreaded app, or several separate multithreaded apps, or several separate processes. The things they say about true parallelism are basically the same things you could say of SMP vs. UP. They built a "half-SMP" inside a single processor. Another way to think of it is that they are allowing threads of execution that would otherwise be completely blocked to make opportunistic use of parts of the instruction pipeline that the main active process isn't using.

          In any case (UP, SMP, UP-HT, SMP-HT), one should code multi-threaded for apps that can benefit from parallelism, and not bother for those that don't. And in any case, the OS will do its best in the given hardware context to satisfy your needs.
          • Re:Developers (Score:2, Interesting)

            by PaulBu ( 473180 )
            Two points:
            They built a "half-smp" inside a single processor.

            It's more like SMP squared (or at least doubled... ;) ) -- threads in a multithreaded app share the instruction AND data caches (no cache-coherency problems, which are going to plague SMP implementations more and more), as well as the register files (fast data exchange which does not require access to main memory or even to the inter-processor bus).

            one should code multi-threaded for apps that can benefit from parallelism

            One? Or the compiler? You use s/w available in source code, right? ;-)

            • I can see the cache thing giving a performance boost versus SMP. I don't get the register-file sharing. How can two threads, even if they are in the same process, share register files? Each thread has its own set of registers, and they can't see each other's, as far as I'm aware. But SMP still wins in many cases because you can actually run two CPU-bound processes/threads full-on with two CPUs -- with hyperthreading, instead of getting 2x you're getting 1.5x, or 1.1x, or maybe 1.7x, depending on the nature of the two processes/threads. One is using the pipeline slack the other one leaves. If the tight loops in these threads/processes are hand-optimized asm that tries to make full use of the pipelines with careful instruction ordering, it would leave virtually zero free for another thread to use. For that matter, if the two threads are doing virtually the same thing, they're likely to get into lock-step with each other since they use the pipelines in a similar fashion.

              Yes, I use software available in source code form; what does that have to do with my statement? One should code parallelizable apps in a multi-threaded style as appropriate, regardless of hardware, and it will always be of benefit.
      • The optimization is transparent to application developers, but it takes extra effort in the compiler to generate code optimized for a particular architecture, like HT.

        I like to use this simple example in lectures. The original Pentium has two pipelines, the U-pipe and the V-pipe, which are fed instructions sequentially. Therefore, it's better if the compiler can arrange the code so that an instruction does not depend on the result of the previous one (see the toy sketch below).

        I'm not sure what it takes to optimize compiled code to take advantage of HT, but I'm sure it can be done.
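
        To make the idea concrete, here's a toy sketch in C (a hand-written illustration, not real compiler output). The first loop is one long dependency chain; the second keeps two independent accumulators, so adjacent adds don't wait on each other and can pair up in the U and V pipes:

          /* toy illustration of instruction-level parallelism, not compiler output */
          double sum_serial(const double *a, int n)
          {
              double s = 0.0;
              for (int i = 0; i < n; i++)
                  s += a[i];              /* every add waits on the previous one */
              return s;
          }

          double sum_paired(const double *a, int n)
          {
              double s0 = 0.0, s1 = 0.0;
              int i;
              for (i = 0; i + 1 < n; i += 2) {
                  s0 += a[i];             /* these two adds are independent, */
                  s1 += a[i + 1];         /* so they can issue side by side  */
              }
              if (i < n)
                  s0 += a[i];             /* leftover element when n is odd */
              return s0 + s1;
          }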
  • Everyone misses (Score:5, Insightful)

    by psavo ( 162634 ) <psavo@iki.fi> on Friday November 15, 2002 @09:18PM (#4682556) Homepage
    The point of 2+ CPU systems. It's not about making multithreaded apps faster, it's about making multiple programs run together better.
    With 2 CPUs (or HT), when one process takes all the juice there's still some left for everything else, and the system will appear more responsive.
    So there you go: you can encode some DivX and still browse the net comfortably, or listen to MP3s, or watch some DivX. (Of course I don't know how effective HT is, but my 2x Athlon lets me do just that.)
    • Not a problem on my 1x AthlonXP 1667 (2000+). Most modern operating systems can balance processes pretty well (I can burn a CD while playing a game, for example; my burner rarely has to use BURN-Proof). The point is that a process should never take over the system because it should never be allowed to take over the system. I'd say that Windows does a pretty good job of that.
    • Re:Everyone misses (Score:4, Informative)

      by ConceptJunkie ( 24823 ) on Friday November 15, 2002 @10:07PM (#4682800) Homepage Journal
      But since NT/2000/XP has all disk I/O in critical regions, your system still grinds to a standstill.

      The day Microsoft OS's are not ridiculously I/O-bound, this will make a much bigger difference... for Microsoft users.

      Of course, I guess there's a point to helping your filesystem remain intact when Granny or the baby flips the Big Red Switch without shutting down...

      The moral: Use lots and lots and lots of RAM with Microsoft

      • This topic is interesting to me. Anyone have info about Windows being I/O-bound?
        • Re:Everyone misses (Score:1, Interesting)

          by Anonymous Coward
          Try taking your Windows installation and making a run-from-CD version, like Knoppix is for Linux.

          You will soon find that Windows craps out on all kinds of stuff because it needs to touch the hard drive for silly things, and can't run off of read-only media.

          After getting around the basic ones by moving those files to RAM disks, you will find that Windows needs to touch certain files every time a new process starts; that's when you realize your project is fucked and you give up.
      • Use Linux. It has SMT (HT) support. I met 'mjc' of #gentoo on irc.freenode.net. He has a 4-way Xeon box with SMT on each CPU, which results in 8 virtual processors. :)
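
        A quick way to see what the OS thinks it has (a sketch, assuming Linux with glibc): HT siblings show up to the scheduler as ordinary logical processors, so a 4-way Xeon with HT reports 8.

          /* count the logical CPUs the kernel exposes; with HT enabled,
             siblings are listed just like physical processors */
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
              long n = sysconf(_SC_NPROCESSORS_ONLN);
              printf("online logical CPUs: %ld\n", n);
              return 0;
          }

        (cat /proc/cpuinfo shows the same thing, with an 'ht' flag on capable CPUs.)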
      • As with km790816's comment, this interests me. I find that preemptively scheduled OSes do not schedule I/O according to process priority. Thus a lower-priority process can tie up the hard drive and slow down a higher-priority process, possibly causing it to miss its deadlines (skipping audio/video). This seems to be a problem on both Windows and *nix. Does anyone have an answer or solution to this?

        • My understanding is that Windows does this to help maintain file system integrity... turning off a Windows box without shutting down is much less risky than with Linux, as I understand it. The trade-off is performance, and when I have lots of stuff going on, sometimes the whole system will freeze for several seconds or more while the disk thrashes. If I get really stupid and launch something memory-intensive when the machine is already low on RAM the system can go dead for several minutes. It's pretty easy to avoid that, usually.

    • The benefit you describe is just as achievable with good scheduling even on a single processor. (Amiga users have known this since the 1980s.) You can run a bunch of CPU-bound processes and the system's responsiveness will not be affected at all.

      The real benefit of SMP/SMT on a modern OS is that it lets you get stuff done faster. It has nothing to do with responsiveness; that's the scheduler's job. SMP/SMT only help with responsiveness if your scheduler is defective.

  • and a 500MHz P3

    and 99% of the time you wouldn't know the difference

    • I watch porn 99% of the time too! But a better processor makes one hell of a lot of difference to that 1% of the time when I'm not.
    • by ctr2sprt ( 574731 ) on Friday November 15, 2002 @11:02PM (#4683106)
      Are you including time that the machines are idle and you're not using them? That's the only way I can make sense of your claim. Even if you're not a hardcore developer (where MP is a big bonus) or gamer (where the faster CPU makes all the difference, and it doesn't matter how many of them you have), the difference is still going to be visible for ordinary desktop tasks, like ripping a CD and surfing the web at the same time.

      And while we're comparing experience, I have a 2-way PPro-200 system, a 2-way P3-450 system, and a 1-way P4-1.6 system. Both of the MP machines are far more responsive for, well, every task that I throw at them: the only reason I don't have only MP boxes is the cost.

      • It seems to me that games should see some benefit, although that isn't panning out right now. Don't you think performance would jump in games if you could devote an ENTIRE processor to one, so it didn't have to worry about sharing with other processes, not having its stuff in the cache, etc.? The other processor would handle the hard drives, NIC, and other 'mundane' things that would otherwise just tie down the first CPU.

        Because games do many things at once (sound, graphics, input, AI), they would see a large benefit if they were well written to use SMP. But even in the situation I gave above, there should be a benefit.
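
        For what it's worth, here's a rough sketch of the "devote a whole processor" idea (Linux-specific, assuming a recent kernel and glibc with sched_setaffinity). Pinning the game to one logical CPU keeps its working set warm in that CPU's cache; it doesn't stop other processes from being scheduled there, but combined with priorities it gets close to the setup described above:

          /* pin the calling process to logical CPU 1, leaving CPU 0 for the OS,
             drivers and background tasks (Linux-specific sketch) */
          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>

          int main(void)
          {
              cpu_set_t mask;
              CPU_ZERO(&mask);
              CPU_SET(1, &mask);                        /* run only on CPU 1 */
              if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
                  perror("sched_setaffinity");
                  return 1;
              }
              /* ... game loop would run here ... */
              return 0;
          }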

      • I've run a multitude of dual- and single-processor machines. I've never noticed a difference as far as application usability goes. The last time I ever had a problem ripping a CD and surfing the web at the same time was when I was still running my 200 MMX.

        Also, dual CPUs will not help the "feel" of your system. If it slows down, it is because your IDE bus is busy and Windows is trying to access files off of your HD. You can increase your clock cycles and the number of processors all you want, but the PCI bus is still only 33MHz on a 32-bit bus.

        Dual processor makes very little difference to PC users who don't have penis size issues.

        In a server, totally different story. When most OLTP databases or webservers are running, they are generally servicing more than 10 users at once.
    • Most of the time, I agree. For most of what I do, extra eye candy is all you are going to get for your extra MHz. I cannot immediately think of anything that my mom or dad would want to do that would require more than 500 MHz.

      On the other hand, just last week I plowed my dual P3-900 box into the ground. Doing some diagnostic work on a large program I work on, the diagnostic utility bumped my virtual memory usage up to 1.5 GB and kept my CPUs burning 100% for over an hour.

      I guess most people don't need the extra speed. But those who know how to use it should be thankful to all who buy more speed than necessary. How much more do you think a 2.4GHz chip would cost if 99% of the people were satisfied with the 500MHz one? Intel can spread development costs over many more people if more people buy the higher-end chip.
  • by Splork ( 13498 ) on Friday November 15, 2002 @09:25PM (#4682594) Homepage
    They've just invented a feature that their marketing department can claim, without lying, that their chips have and others don't.

    The fact that it doesn't do anything useful for most uses at the moment makes no difference.
  • Marketing. . . (Score:3, Interesting)

    by Cokelee ( 585232 ) on Friday November 15, 2002 @09:27PM (#4682603)
    it's the only way with Intel. They can't really make a faster processor, so they're always coming up with new ways to make it "feel" faster, or make the clock speed higher.

    I'm not excited at all. What about resonance? Multithreading with simultaneous and common processes may cause it to run SLOWER!

    • Re:Marketing. . . (Score:2, Interesting)

      by qoncept ( 599709 )
      I believe it's called optimizing. I suppose there are alternate routes -- they could just slap more chips on and require an external power source, a la the Voodoo 5. But, as I'm sure we all know, without optimization computers would have gone nowhere.

      Sure, Intel is using Hyper-Threading as a buzzword, but that doesn't mean it's worthless. Your beloved AMD copied SSE and made its own 3DNow!, and you'd have a hard time convincing me either of those will have the impact hyperthreading does. I saw someone compare the price of a P4 with hyperthreading to that of a dual Athlon, but that's not the point. It's the technology. Don't you think AMD and everyone else who makes CPUs for anything would be interested in taking advantage of it? If it were AMD (and I really don't think anyone else, except maybe Motorola) who had introduced HT, Slashdotters would love it.

      Of course I don't like Intel (and of course I hate Bill Gates more, and Steve Jobs has everyone beat), but that doesn't mean having them around isn't healthy for the entire industry.

      By the way, has anyone else noticed NVIDIA trying the brute-force tactic like 3dfx did right before they went under?

  • Another review (Score:2, Informative)

    by KarateBob ( 556340 )
    Here's a new review of the 3.06 HT at Sharky Extreme [sharkyextreme.com]
  • As much as I am really rooting for AMD, I must say that I wish the Athlons had this feature. The average user is not going to notice a big difference right now because most applications have been so optimized for single-processor computers that they perform poorly on SMP computers. The big thing hyperthreading is going to do is allow for more registers on the x86 architecture without changing the instruction set at all. That is the big enhancement and why I am so excited about it.
    • True. Maybe the x86-64 processors (Hammers) will get it soon after launch (or at the next major die change). That said, I think Intel could easily wipe the floor with AMD performance-wise. They already have a great processor. If they could just get its FPU performance near that of the Athlons (or faster), then AMD would only have the price argument on their side. IMHO, of course.

      I think you're wrong about the average user seeing no performance increase. I think that's exactly who WILL see the increase. Developers might not. It's the guy who sits there surfing the web, playing MP3s, and ripping a CD who will benefit from this. A dual 600 feels zippy doing things that a much faster computer has problems with, because it's got a second processor to help process UI clicks, etc.

    • As much as I am really rooting for AMD, I must say that I wish the Athlons had this feature.
      Me too; I wish all CPUs had it. But don't forget: for about the same price as this P4, you can buy two Athlon MPs. That's two whole cores, and they'll cream a single SMT P4.

      This is a neat enhancement, but it's hardly "big." It just gets Intel somewhat closer to catching up to AMD on bang/buck.

  • Not Me (Score:4, Interesting)

    by Konster ( 252488 ) on Friday November 15, 2002 @11:20PM (#4683202)
    I read several reviews, the most notable among them here [tech-report.com] and here [aceshardware.com]. Although the technology seems compelling looking forward a few years, in its infancy it just doesn't sell me the product, especially when I consider that a dual Athlon MP 2000 (1.6GHz) setup is respectably close to the $700 PIV 3.06GHz with HT, and costs a LOT less.

    3.06GHz PIV + motherboard + 512MB DDR RAM = $1025
    2 Athlon MP 2000 + motherboard + 512MB DDR RAM = $695....for 80-90% of the performance of the HT PIV?

    Sorry, but I can get the basics for an SMP system for $5 less than Intel wants for its new flagship CPU.

    Now, if I could get 2 PIV 2.4GHz CPUs with HT, that might be a different story...

    • Hyperthreading is a good concept. I don't know about you, but I'd pay a slight increase in price to get it over an equivalent CPU without it. When you're using multiple apps, this is free performance. That said, I agree with you. If the price is similar, I'd rather have a true SMP rig that was slower than a faster HT CPU.

      What this boils down to is this: if given the choice between two CPUs of near equal speed and they cost nearly the same, what would you rather have? The one with or without HT?

      Besides, let's face it, this is for marketing. It does provide some benefit, but it also gives Intel the ability to say "things are more responsive with an Intel Pentium 4 than with the other guys". Not only that, but now people can market P4 computers and say "Sure it costs more, but it's like getting a free second processor. That's worth like $700 right there." It wouldn't be the first less-than-half-truth we've heard from people in the PC industry...

      (*cough* netburst *cough*)

      • People today, it seems to me, have this odd notion that the first iteration of something should be the best iteration.

        Sure, hyperthreading won't do much for you now. But what about the next generation? Or the third? Once this trickles down to the point that it winds up on chips automatically, great.

    • Hey I'm planning my new (BYO)PC now. Where'd you get those prices?
  • Intel will benefit from this because people will THINK that they are getting a better processor and therefore buy into it without reading the reviews and benchmarks.

    However, the server end of the market will probably be Intel's target audience, with the consumer market on the side.
    High-end servers will benefit from HT because it allows for better load balancing and server efficiency.

    Intel has nothing to lose by releasing it, so why not?
  • There are a few people saying "Right now, it doesn't do anything."

    "Hey man, have you heard about this new invention? It's called a Compact Disc!"
    "Forget about it man, they're worthless. My record player gets crap sound out of them."

    How about MMX? What about 3D cards? We had to wait until software came out that really took advantage of them before we could see what they could really do. Some apps are developed for multithreading, but the hardware has got to be in wide release before it's worth it to developers to write for it.

    I think in the end we'll all benefit, but just like every other technology, it'll take some time.

  • Rack Density (Score:2, Informative)

    by stmfreak ( 230369 )
    In our preliminary tests of a unit Intel donated, we were able to run four instances of a single-threaded process on a dual-proc HT machine. The throughput was somewhat greater than with two instances on the same box.

    Admittedly these are not conclusive results, and we've yet to run more controlled tests, but our initial take is that you might achieve higher rack density of processes and throughput using this architecture.

    Sorry I don't have specific data, we're still studying HT.
  • The funniest quote from the original news story at:
    http://www.newsfactor.com/perl/story/19980.html

    Intel cited a Harris Interactive (Nasdaq: HPOL) study it commissioned that showed the vast majority of computer users do more than one task at a time on their machines, with 50 percent saying they play video games while also burning CDs, for instance. Many of those same users say older machines purchased three or more years ago can have difficulty performing several tasks at once.

    So how EXACTLY will hyperthreading help an I/O-bound problem (burning a CD) AND a GPU-bound problem (rendering OpenGL) at the same time? ;) If anything, it will make worst-case response for these real-time tasks worse!

    Just some stupid marketoid speaking... I like Apple's straightforward approach ("Supercomputer on your desk/lap") way better... At least Apple is not trying to convince people that a fine (for many applications which know how to use it!) feature in their CPU can really help mundane tasks like burning a CD.

    Paul B.
  • Check out this article [digitalvideoediting.com] that answers your question. It shows how the new Intel chip in a Dell workstation blows the pants off a dual-CPU Apple computer at a lower cost.
  • Honestly, I am incredibly excited about this hardware, but not for the same reasons you might (or might not) be -

    Fourteen years ago I was doing development on a 6MHz PC/AT. Today high-end machines are 3GHz, using HT for a performance boost. That pretty much supports Moore's law at about 1.56x per year (1.56^14 ~= 500x). My dev box of yore had 512K of RAM; my current box runs 2GB, for 1.8x per year sustained over 14 years (1.8^14 ~= 4000x). Hard drives? 10MB to 160GB, for a 14-year sustained rate of exactly 2x per year (2^14 = 16384x).

    In two years, a high-end system will be somewhere in the 6.5GHz range with 8GB of RAM and a full terabyte of hard drive space, if I want to splurge and spend $2,000 on the uberSystem. Now THAT excites me.

    Of course Windows, Office, and the next generation of Studio will still run like pigs, but hey - that's life.
  • Think of hyperthreading as turning the HZ value up to 1,000,000,000.

    It's a big win if your OS's scheduler sucks, but supports multiprocessors.
