crookedvulture writes "With its Sandy Bridge and Ivy Bridge processors, Intel allowed standard Core i5 and i7 CPUs to be overclocked by up to 400MHz using Turbo multipliers. Reaching for higher speeds required pricier K-series chips, but everyone got access to a little "free" clock headroom. Haswell isn't quite so accommodating. Intel has disabled limited multiplier control for non-K CPUs, effectively restricting overclocking to the Core i7-4770K and i5-4670K. Those chips cost $20-30 more than their standard counterparts, and surprisingly, they're missing a few features. The K-series parts lack support for transactional memory extensions (TSX) and VT-d device virtualization, both included with standard Haswell CPUs. PC enthusiasts now have to choose between overclocking and support for certain features even when purchasing premium Intel processors. AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs."
Vigile writes "It looks like the rumors were true; AMD is going to be selling an FX-9590 processor this month that will hit frequencies as high as 5 GHz. Though originally thought to be an 8-module/16-core part, it turns out that the new CPU will have the same 4-module/8-core design found in the current lineup of FX-series processors, including the FX-8350. But with the maximum Turbo Core speed raised from 4.2 GHz to 5.0 GHz, the new parts will draw quite a bit more power. You can expect the FX-9590 to need 220 watts or so to run at those speeds, along with a pretty hefty cooling solution. Performance should closely match the recently released Intel Core i7-4770K Haswell processor, so AMD users who can handle the 2.5x increase in power consumption can finally claim performance parity."
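The "2.5x" figure can be sanity-checked against the numbers involved; a minimal sketch, assuming Intel's published 84W TDP for the i7-4770K (a figure not stated in the story itself):

```python
# Rough check of the "2.5x" power claim. The 220 W figure is from the
# story; the 84 W TDP for the Core i7-4770K is Intel's published rating
# (an assumption here, not quoted in the story).
fx9590_watts = 220
i7_4770k_watts = 84

ratio = fx9590_watts / i7_4770k_watts
print(f"FX-9590 draws roughly {ratio:.1f}x the power of the i7-4770K")
```

At 220W versus an 84W TDP the ratio works out to about 2.6x, so "2.5x" is in the right ballpark.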
MojoKid writes with more detailed information on the new hardware Apple announced earlier today at WWDC: "On the hardware side, Apple is updating its two MacBook Air devices; both the 11-inch and 13-inch versions will enjoy better battery life (up to 9 hours and 12 hours, respectively), thanks in no small part to having Intel's new Haswell processors inside. They'll also have 802.11ac WiFi on board. Both models have 1.3GHz Intel Core i5 or i7 (Haswell) processors, Intel HD Graphics 5000, 4GB of RAM, and 128GB or 256GB of flash storage. Arguably the scene stealer on the desktop side of things is a completely redesigned Mac Pro. The 9.9-inch tall cylindrical computer boasts a new 'unified thermal core' designed to conduct heat away from the CPU and GPU while distributing it uniformly, using a single bottom-mounted intake fan. It rocks a 12-core Intel Xeon processor, dual AMD FirePro GPUs (standard), 1866MHz DDR3 ECC memory (60GBps), and PCIe flash storage with up to 1.25GBps read speeds. The system promises 7 teraflops of graphics performance, supports 4K displays, and has a host of ports, including four USB 3.0 ports, two gigabit Ethernet ports, HDMI 1.4, and six Thunderbolt 2 ports that offer super-fast (20Gbps) external connectivity."
MojoKid writes "AMD recently unveiled a handful of mobile Elite A-Series APUs, formerly codenamed Richland. Those products built upon the company's existing Trinity-based products but offered additional power and frequency optimizations designed to enhance overall performance and increase battery life. Today AMD is launching a handful of new Richland APUs for desktops and small form factor PCs. The additional power and thermal headroom afforded by desktop form factors has allowed AMD to crank things up a few notches further on both the CPU and GPU sides. The highest-end parts feature four CPU cores, 384 Radeon cores, and 4MB of total cache. The top-end APUs have GPU cores clocked at 844MHz (a 44MHz increase over Trinity), with CPU boost clocks that top out at a lofty 4.4GHz. In addition, AMD's top-end part, the A10-6800K, has been validated for use with DDR3-2133 memory. The rest of the APUs max out with a DDR3-1866 memory interface." As with the last few APUs, the conclusion is that the new A10 chips beat Intel's Haswell graphics solidly, but lag a bit in CPU performance and power consumption.
An anonymous reader writes "While everyone was glued to the Xbox One announcement, Nvidia GeForce GTX 780 launch, and Intel's pre-Haswell frenzy, it seems that AMD's launch was overlooked. On Wednesday, AMD launched its latest line of mobile APUs, codenamed Temash, Kabini, and Richland. Temash is targeted towards smaller touchscreen-based devices such as tablets and the various Windows 8 hybrid devices, and comes in dual-core A4 and A6 flavors. Kabini chips are intended for the low-end notebook market, and come in quad-core A4 and A6 models along with a dual-core E2. Richland includes quad-core A8 and A10 models, and is meant for higher-end notebooks — MSI is already on board for the A10-5750M in their GX series of gaming notebooks. All three new APUs feature AMD HD 8000-series graphics. Tom's Hardware got a prototype notebook featuring the new quad-core A4-5000 with Radeon HD 8300 graphics, and benchmarked it versus a Pentium B960-based Acer Aspire V3 and a Core-i3-based HP Pavilion Sleekbook 15. While Kabini proves more efficient, and features more powerful graphics than the Pentium, it comes up short in CPU-heavy tasks. What's more, the Core-i3 matches the A4-5000 in power efficiency while its HD 4000 graphics completely outpace the APU."
Vigile writes "When NVIDIA released the GTX Titan in February, it was the first consumer graphics card to use NVIDIA's GK110 GPU, which includes 2,688 CUDA cores / shaders and an impressive 6GB of GDDR5 frame buffer. However, it also had a $1000 price tag that was the limiting specification for most gamers. With today's release of the GeForce GTX 780, NVIDIA is hoping to utilize more of the GK110 silicon it gets from TSMC while offering a lower-cost version with performance within spitting distance. The GTX 780 uses the same chip but disables a handful more compute units to bring the shader count down to 2,304 — still an impressive bump over the 1,536 of the GTX 680. The 384-bit memory bus remains, though the frame buffer is cut in half to 3GB. Overall, the performance of the new card sits squarely between the GTX Titan ($1000) and AMD's Radeon HD 7970 GHz Edition ($439), just like its price. The question is, are PC gamers willing to shell out $220+ MORE than the HD 7970 for somewhere in the range of 15-25% more performance?" As you might guess, there's similarly spec-laden coverage at lots of other sites, including Tom's, ExtremeTech, TechReport, and HotHardware.
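The closing question can be put in perf-per-dollar terms using only the figures quoted above; the 20% performance midpoint is our own assumption, picked from the 15-25% range in the story:

```python
# Back-of-the-envelope value math: the GTX 780 costs $220+ more than
# the $439 Radeon HD 7970 GHz Edition for roughly 15-25% more
# performance. The perf figure below uses the 20% midpoint of that
# range (our assumption, not a benchmark result).
hd7970_price = 439
premium = 220

price_ratio = (hd7970_price + premium) / hd7970_price  # ~1.50x the cost
perf_ratio = 1.20                                      # assumed midpoint

print(f"Extra cost over the HD 7970: {premium / hd7970_price:.0%}")
print(f"Perf per dollar vs. HD 7970: {perf_ratio / price_ratio:.2f}x")
```

In other words, buyers would pay about 50% more money for 15-25% more performance, i.e. noticeably worse perf-per-dollar, which is the trade-off the submitter is highlighting.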
MojoKid writes "AMD is announcing its Radeon HD 8970M. The mobile GPU is based on a lightly revised design whose small feature changes have led it to be unofficially labeled a Graphics Core Next (GCN) 1.1 part, versus AMD's previous-gen GCN 1.0 technology. AMD claims that the Radeon HD 8970M is significantly faster than NVIDIA's GeForce GTX 680M in a variety of tests, but high-end laptops that use AMD hardware are harder to find these days."
MojoKid writes "For the past decade, AMD and Intel have been racing each other to incorporate more components into the CPU die. Memory controllers, integrated GPUs, northbridges, and southbridges have all moved closer to a single package, producing what are known as SoCs (systems-on-a-chip). Now, with Haswell, Intel is set to integrate another important piece of circuitry. When it launches next month, Haswell will be the first x86 CPU to include an on-die voltage regulator module, or VRM. Haswell's refined on-die VRM allows for multiple voltage rails and controls voltage for the CPU, on-die GPU, system I/O, and integrated memory controller, as well as several other functions. Intel refers to this as a FIVR (Fully Integrated Voltage Regulator), and it apparently eliminates voltage ripple and is significantly more efficient than a traditional motherboard VRM. Added bonus? It's 1/50th the size." Update: 05/14 01:22 GMT by U L : Reader AdamHaun comments: "They already have a test chip that they used to power a ~90W Xeon E7330 for four hours while it ran Linpack. ... Voltage ripple is less than 2mV. Peak efficiency per cell looks like ~76% at 8A. They claim hitting 82% would be easy..." and links to a presentation on the integrated VRM (PDF).
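The efficiency figures in the update translate directly into wasted watts; a minimal sketch using the ~90W load and the 76%/82% efficiencies quoted above (the loss comparison itself is our own back-of-the-envelope calculation, not from the presentation):

```python
# Regulator-loss math from the figures quoted above: a ~90 W load at
# 76% measured efficiency vs. the 82% Intel claims is reachable.
# The comparison is illustrative arithmetic, not Intel's own numbers.
load_watts = 90.0

def regulator_loss(load, efficiency):
    """Watts dissipated in the regulator itself for a given output load."""
    input_power = load / efficiency
    return input_power - load

loss_measured = regulator_loss(load_watts, 0.76)  # ~28.4 W of waste heat
loss_claimed = regulator_loss(load_watts, 0.82)   # ~19.8 W of waste heat
print(f"Loss at 76%: {loss_measured:.1f} W; at 82%: {loss_claimed:.1f} W")
```

Going from 76% to 82% efficiency would cut regulator losses at that load by roughly 8-9W, which matters when the regulator now lives on the CPU die itself.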
An anonymous reader writes "In a 15-way graphics card comparison on Linux covering both the open- and closed-source drivers, it was found that the open-source AMD Linux graphics driver is much faster than the open-source NVIDIA driver on Ubuntu 13.04. The open-source NVIDIA driver is developed entirely by the community via reverse-engineering, but for Linux desktop users, is this enough? The big issue for the open-source 'Nouveau' driver is that it doesn't yet fully support re-clocking the graphics processor so that the hardware can actually run at its rated speeds. The closed-source AMD Radeon and NVIDIA GeForce drivers were substantially faster than their open-source counterparts. Between NVIDIA and AMD on Linux, the NVIDIA closed-source driver was generally doing better than AMD Catalyst."
Vigile writes "One of the drawbacks to high-end graphics has been the lack of low-cost, widely available displays with a resolution higher than 1920x1080. Yes, 2560x1600 and 2560x1440 panels are coming down in price, but it might be the influx of 4K monitors that makes a splash. PC Perspective purchased a 4K TV for under $1500 recently and set to benchmarking high-end graphics cards from AMD and NVIDIA at 3840x2160. For under $500, the Radeon HD 7970 provided the best experience, though the GTX Titan was the most powerful single-GPU option. At the $1000 price point, the GeForce GTX 690 appears to be the card to beat, given AMD's continuing problems with CrossFire scaling. PC Perspective has also included YouTube and downloadable 4K video files (~100Mbps) as well as screenshots, in addition to a full suite of benchmarks."
crookedvulture writes "AMD has revealed more details about the unified memory architecture of its next-generation Kaveri APU. The chip's CPU and GPU components will have a shared address space and will also share both physical and virtual memory. GPU compute applications should be able to share data between the processor's CPU cores and graphics ALUs, and the caches on those components will be fully coherent. This so-called heterogeneous uniform memory access, or hUMA, supports configurations with either DDR3 or GDDR5 memory. It's also based entirely in hardware and should work with any operating system. Kaveri is due later this year and will also have updated Steamroller CPU cores and a GPU based on the current Graphics Core Next architecture." bigwophh links to the Hot Hardware take on the story, and writes "AMD claims that programming for hUMA-enabled platforms should ease software development and potentially lower development costs as well. The technology is supported by mainstream programming languages like Python, C++, and Java, and should allow developers to more simply code for a particular compute resource with no need for special APIs."
Indiana University has replaced its supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication, HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes each use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
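The node counts above imply the machine's totals; the arithmetic below is our own tally from the quoted figures, not a number IU published:

```python
# Totals implied by the node counts in the article: 344 CPU nodes with
# two 16-core Abu Dhabi chips each, plus 676 GPU nodes with one 16-core
# Interlagos and one Kepler K20 each. (Our arithmetic, not IU's figures.)
cpu_node_cores = 344 * 2 * 16  # cores on the dual-socket CPU nodes
gpu_node_cores = 676 * 1 * 16  # CPU cores on the GPU nodes
k20_gpus = 676

print(f"Total x86 cores: {cpu_node_cores + gpu_node_cores}")
print(f"Kepler K20 GPUs: {k20_gpus}")
```

That works out to 21,824 x86 cores alongside the 676 K20 accelerators that supply most of the petaflop of peak performance.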
An anonymous reader writes "Today AMD has officially unveiled its long-awaited dual-GPU Tahiti-based card. Codenamed Malta, the $1,000 Radeon HD 7990 is positioned directly against Nvidia's dual-GPU GeForce GTX 690. Tom's Hardware posted the performance data. Because Fraps measures data at a stage in the pipeline before what is actually seen on-screen, they employed Nvidia's FCAT (Frame Capture Analysis Tools). ... The 690 is beating AMD's new flagship in six out of eight titles. ... AMD is bundling eight titles with every 7990, including: BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3: Blood Dragon, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution." OpenGL performance doesn't seem too far off from the competing Nvidia card, but the 7990 dominates when using OpenCL. Power management looks decent: ~375W at full load, but a nice 20W at idle (it can turn the second chip off entirely when unneeded). PC Perspective claims there are issues with Crossfire and an un-synchronized rendering pipeline that leads to a slight decrease in the actual frame rate, but that should be fixed by an updated Catalyst this summer.
illiteratehack writes "10 years ago, AMD released its first Opteron processor, the first 64-bit x86 processor. The firm's 64-bit 'extensions' allowed the chip to run existing 32-bit x86 code, in a bid to avoid the problems faced by Intel's Itanium processor. However, AMD suffered from a lack of native 64-bit software support, with Microsoft's Windows XP 64-bit edition severely hampering its adoption in the workstation market." But it worked out in the end.
mikejuk writes "This is a strange story. AMD Vice President of Global Channel Sales Roy Taylor has said there will be no DirectX 12 at any time in the future. In an interview with German magazine Heise.de, Taylor discussed the new trend for graphics card manufacturers to release top-quality game bundles registered to the serial number of the card. One of the reasons for this, he said, is that the DirectX update cycle is no longer driving the market. 'There will be no DirectX 12. That's it.' (Google translation of German original.) Last January there was another hint that things weren't fine with DirectX when Microsoft sent an email to its MVPs saying, 'DirectX is no longer evolving as a technology.' That statement was quickly corrected, but without mentioning any prospect of DirectX 12. So, is this just another error or rumor? Can we dismiss something AMD is basing its future strategy on?"
An anonymous reader writes "AMD Linux users' years-long wait for open-source video playback support in their graphics driver is over. AMD has released open-source UVD support for its Linux driver, so users can have hardware-accelerated playback of H.264, VC-1, and MPEG video formats. UVD support for years-old graphics cards was delayed because AMD feared open-source support could compromise its Digital Rights Management capabilities on other platforms."
MojoKid writes "Every year, AMD and NVIDIA re-brand their GPU product lines, regardless of whether the underlying hardware has changed. This annual maneuver is a sop to OEMs, who like yearly refreshes and higher numbers. The big introduction NVIDIA is making this year is what it calls GPU Boost 2.0. When NVIDIA launched the GTX Titan in February, it discussed a new iteration of GPU Boost technology that measures GPU temperature rather than estimating TDP. This approach gives NVIDIA finer-grained control over clock speeds and thermal thresholds, thereby allowing for better dynamic overclocking. That technology is coming to the GeForce 700M mobile family. In notebooks, GPU Boost 2.0 is a combination of thermal and application monitoring. It is designed to reflect an important fact of 3D gaming: no two applications use the same amount of power, and the variance can be significant, even within the same game. It's therefore possible for the GPU to adjust clocks dynamically in order to maximize frame rates. Put the two together, and NVIDIA believes it can substantially improve performance without compromising thermal limits or electrical safe operating margins."
MojoKid writes "AMD made a number of interesting announcements today at the Game Developers Conference, currently taking place in San Francisco. AMD revealed their 'Radeon Sky' series of graphics products targeted at cloud gaming and virtualized computing applications. The company also showed off the dual-GPU powered AMD Radeon HD 7990, and extended the 'Never Settle: Reloaded' gaming bundle program to include BioShock Infinite. AMD revealed three Radeon Sky Series cards, two based on the Tahiti GPU and another based on Pitcairn. The top-of-the-line Radeon Sky 900 is powered by two Tahiti GPUs linked to 6GB of memory (3GB per GPU). The Sky 700 is powered by a single Tahiti GPU, and the Sky 500 is based on Pitcairn. All of the cards are passively cooled and are designed for cloud gaming / computing servers. The upcoming high-end, consumer-targeted Radeon HD 7990 was also previewed, but few details were given. Devon Nekechuk, Product Manager of AMD Graphics, did say the triple-fan setup was whisper quiet. We think it's safe to assume the card features 6GB of memory and clocks in line with current Radeon HD 7970 GHz Edition cards."
Nerval's Lobster writes "One could argue that the University of Illinois' "Blue Waters" supercomputer, scheduled to officially open for business March 28, is lucky to be alive. The 11.6 petaflop supercomputer, commissioned by the University and the National Science Foundation (NSF), will rank in the upper echelon of the world's fastest machines—its compute power would place it third on the current list, just above Japan's K Computer. However, the system will not be submitted to the TOP500 list because of concerns with the way the list is calculated, officials said. University officials and the NSF are lucky to have a machine at all. That's due in part to IBM, which reportedly backed out of the contract when the company determined that it couldn't make a profit. The university then turned to Cray, which would have had to replace what was presumably a POWER or Xeon installation with the current mix of AMD CPUs and Nvidia GPU coprocessors. Allen Blatecky, director of NSF's Division of Advanced Cyberinfrastructure, told Fox that pulling the plug was a 'real possibility.' And Cray itself had to work to find the parts necessary for the supercomputer to begin at least trial operations in the fall of 2012."
MojoKid writes "AMD has just announced a new family of Elite A-Series APUs for mobile applications, based on the architecture codenamed 'Richland.' These new APUs build upon last year's 'Trinity' architecture by improving graphics and compute performance, enhancing power efficiency through a new 'Hybrid Boost' mode that leverages on-die thermal sensors, and offering AMD-optimized applications meant to improve the user experience. AMD is unveiling a new visual identity as well, with updated logos and clearer language, in a bid to enhance the brand. At the top of the product stack is the AMD A10-5750M, a 35 watt, 3.5GHz quad-core processor with integrated Radeon HD 8650G graphics, 4MB of L2 cache, and a DDR3-1866-capable memory interface. The low end consists of dual-core parts with Radeon HD 8400G series GPUs and a DDR3-1600 memory interface."