Bram Stolk writes: So, I am running GNU/Linux on a modern Haswell CPU with an old Radeon HD 5xxx from 2009. I'm pretty happy with the open-source Gallium driver for 3D acceleration, but now I want to do some GPGPU development using OpenCL on this box, and the old GPU will no longer cut it. What do my fellow technophiles from Slashdot recommend as a replacement GPU? Go NVIDIA, go AMD, or just use the integrated Intel GPU instead? Bonus points for open-source solutions. Performance isn't really important, but OpenCL driver maturity is.
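Whichever vendor you pick, it's worth verifying which OpenCL platforms the installed driver actually registers with the ICD loader before committing to it. A minimal sketch using Python's ctypes (the usual tools are clinfo or pyopencl; this version avoids extra dependencies and returns an empty list if no OpenCL runtime is installed):

```python
import ctypes
import ctypes.util

def list_opencl_platforms():
    """Return the names of registered OpenCL platforms, or [] if none."""
    libname = ctypes.util.find_library("OpenCL")
    if libname is None:
        return []  # no ICD loader installed on this system
    cl = ctypes.CDLL(libname)
    num = ctypes.c_uint32(0)
    # First call: ask how many platforms are registered.
    if cl.clGetPlatformIDs(0, None, ctypes.byref(num)) != 0 or num.value == 0:
        return []
    ids = (ctypes.c_void_p * num.value)()
    cl.clGetPlatformIDs(num.value, ids, None)
    CL_PLATFORM_NAME = 0x0902
    names = []
    for pid in ids:
        buf = ctypes.create_string_buffer(256)
        cl.clGetPlatformInfo(ctypes.c_void_p(pid), CL_PLATFORM_NAME,
                             ctypes.c_size_t(ctypes.sizeof(buf)), buf, None)
        names.append(buf.value.decode())
    return names

print(list_opencl_platforms())
```

On a box with Catalyst installed this typically prints something like a list containing "AMD Accelerated Parallel Processing"; an empty list means no usable OpenCL runtime is present.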
GhostX9 writes: SLR Lounge just posted a first look at the Samsung NX1 28.1 MP interchangeable lens camera. They compare it to Canon and Sony full-frame sensors. Spoiler: the Samsung sensor seems to beat the Sony A7R sensor up to ISO 3200. They attribute this to Samsung's chip foundry: while Sony is using a 180nm process (Intel Pentium III era) and Canon is still using a 500nm process (AMD DX4 era), Samsung has gone with 65nm with copper interconnects (Intel Core 2 Duo — Conroe era). Furthermore, Samsung's premium lenses appear to be as sharp as or sharper than Canon's L line and Sony's Zeiss line in the center, although the Canon 24-70/2.8L II is sharper at the edge of the frame.
An anonymous reader writes: Tests of the AMD Catalyst driver with the latest AAA Linux games/engines have shown what poor shape the proprietary Radeon driver is currently in for Linux gamers. Phoronix, which traditionally benchmarks with open-source OpenGL games and other long-standing tests, has recently taken special interest in adapting some newer Steam-based titles for automated benchmarking. With last month's Linux release of Metro Last Light Redux and Metro 2033 Redux, NVIDIA's driver did great while AMD Catalyst was miserable. Catalyst 14.12 delivered extremely low performance and major bottlenecks, with the Radeon R9 290 and other GPUs running slower than NVIDIA's midrange hardware. In Unreal Engine 4 Linux tests, the NVIDIA driver again was flawless, but the same couldn't be said for AMD: Catalyst 14.12 wouldn't even run the Unreal Engine 4 demos on Linux with AMD's latest-generation hardware, only with the HD 6000 series. Tests last month also showed AMD's performance to be crippled in NVIDIA vs. AMD Civilization: Beyond Earth Linux benchmarks with the newest drivers.
DeviceGuru writes: CompuLab has unveiled a tiny 'Fitlet' mini-PC that runs Linux or Windows on a dual- or quad-core 64-bit AMD x86 SoC (with integrated Radeon R3 or R2 GPU) clocked at up to 1.6GHz, offering extensive I/O along with modular internal expansion options. The rugged, reconfigurable 4.25 x 3.25 x 0.95 in. system will also form the basis of a pre-configured 'MintBox Mini' model, available in Q2 in partnership with the Linux Mint project. To put things in perspective, CompuLab says the Fitlet is one-third the size of the Celeron-based Intel NUC.
itwbennett writes: In the fierce battle between CPU and GPU vendors, it's not just about speeds and feeds but also about process shrinks. Both Nvidia and AMD have had their moves to 16nm and 20nm designs, respectively, hampered by the limited capacity of both nodes at manufacturer TSMC, according to the enthusiast site WCCFTech.com. While AMD's CPUs are produced by GlobalFoundries, its GPUs are made at TSMC, as are Nvidia's chips. The problem is that TSMC only has so much capacity, and Apple and Samsung have sucked up all of it. The only other manufacturer with 14nm capacity is Intel, and there's no way Intel will sell them capacity.
An anonymous reader writes: Along with the open-source AMD Linux driver having a great 2014, the AMD Catalyst proprietary driver for Linux has also improved a lot. Beyond the open-source Radeon Gallium3D driver closing in on Catalyst, the latest Phoronix end-of-year tests show the AMD Catalyst Linux driver is beating Catalyst on Windows for some OpenGL benchmarks. The proprietary driver tests were done with the new Catalyst "OMEGA" driver. Is AMD beginning to lead real Linux driver innovations or is OpenGL on Windows just struggling?
Phoronix has taken an in-depth look at progress on AMD's open source Radeon driver, and declares 2014 to have been a good year. There are several pages with detailed benchmarks, but the upshot is overwhelmingly positive: Across the board there are huge performance improvements in the open-source AMD Linux graphics driver when comparing the state at the end of 2013 to the current code at the end of this year. The performance improvements and new features presented (among them OpenMAX / AMD video encode, UVD for older AMD GPUs, various new OpenGL extensions, continued work on OpenCL, power management improvements, and the start of open-source HSA) have been nothing short of incredible. Most of the new work benefits the Radeon HD 7000 series and newer (GCN) GPUs the most, but these tests showed the Radeon HD 6000 series still improving too. ... Coming up before the end of the year will be a fresh comparison of these open-source Radeon driver results against the newest proprietary AMD Catalyst Linux graphics driver.
itwbennett writes: According to a report in the Korea IT Times, Samsung Electronics has begun production of the A9 processor, the next-generation ARM-based CPU for the iPhone and iPad. The Korea IT Times says Samsung has production lines capable of FinFET-process production (a cutting-edge design for semiconductors that many other manufacturers, including AMD, IBM and TSMC, are adopting) in Austin, Texas and Giheung, Korea, but production is only taking place in Austin. Samsung invested $3.9 billion in that plant specifically to make chips for Apple. So now Apple can say its CPU is "Made in America."
MojoKid writes: AMD just dropped its new Catalyst Omega driver package, the culmination of six months of development work. AMD Catalyst Omega reportedly brings over 20 new features and a wealth of bug fixes to the table, along with performance increases both on AMD Radeon GPUs and integrated AMD APUs. Some of the new functionality includes Virtual Super Resolution, or VSR. VSR is "game- and engine-agnostic" and renders content at up to 4K resolution, then displays it at a resolution that your monitor actually supports. AMD says VSR allows for increased image quality, similar in concept to Super Sampling Anti-Aliasing (SSAA). Another added perk of VSR is the ability to see more content on the screen at once. To take advantage of VSR, you'll need a Radeon R9 295X2, R9 290X, R9 290, or R9 285 discrete graphics card. Both single- and multi-GPU configurations are currently supported. VSR is essentially AMD's answer to NVIDIA's DSR, or Dynamic Super Resolution. In addition, AMD is claiming performance enhancements in a number of top titles with these new drivers. Reported gains range from as little as 6 percent in FIFA Online to as much as 29 percent in Batman: Arkham Origins when using an AMD 7000-series APU, for example. On discrete GPUs, a Radeon R9 290X's performance increases ranged from 8 percent in GRID 2 to roughly 16 percent in BioShock Infinite.
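The idea behind VSR (and supersampling generally) can be sketched in a few lines: render at a higher resolution than the display, then box-filter blocks of pixels down to the display resolution, so each displayed pixel averages several rendered samples. A toy NumPy sketch of the concept (sizes scaled down; this is an illustration, not AMD's implementation):

```python
import numpy as np

def downsample(frame, factor):
    """Box-filter an (H, W, 3) frame by an integer factor per axis."""
    h, w, c = frame.shape
    return frame.reshape(h // factor, factor,
                         w // factor, factor, c).mean(axis=(1, 3))

# Scaled-down stand-in for a 4K render target displayed at half resolution.
hi_res = np.random.rand(216, 384, 3)   # "rendered" frame
lo_res = downsample(hi_res, 2)         # "displayed" frame
print(lo_res.shape)  # (108, 192, 3)
```

Each output pixel is the mean of a 2x2 block of rendered pixels, which is where the anti-aliasing effect comes from.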
MojoKid writes: To say that BioWare has something to prove with Dragon Age: Inquisition is an understatement. The original Dragon Age: Origins was a colossal, sprawling, unabashed throwback to classic RPGs. Conversely, Dragon Age: Inquisition doesn't just tell an epic story, it evolves in a way that leaves you, as the Inquisitor, leading an army. Creating that sense of scope required a fundamentally different approach to gameplay. Neither Dragon Age: Origins nor Dragon Age 2 had a true "open" world in the sense that Skyrim is an open world. Instead, players clicked on a location and auto-traveled across the map from Point A to Point B. Thus, a village might be contained within a single map, while a major city might have 10-12 different locations to explore. Inquisition keeps the concept of maps as opposed to a completely open world, but it blows those maps up to gargantuan sizes. Instead of simply consisting of a single town or a bit of wilderness, the new maps in Dragon Age: Inquisition are chock-full of areas to explore, side quests, crafting materials to gather, and caves, dungeons, mountain peaks, flowing rivers, and roving bands of monsters. And Inquisition doesn't forget the small stuff — the companion quests, the fleshed-out NPCs, or the rich storytelling — it just seeks to put those events in a much larger context across a broad geographical area. Dragon Age: Inquisition is one of the best RPGs to come along in a long time. Never has a game tried to straddle both the large-scale, 10,000-foot master plan and the small-scale, intimate adventure and hit both so well. In terms of graphics performance, you might be surprised to learn that a Radeon R9 290X has better frame delivery than a GeForce GTX 980, despite the similarity in the overall frame rate. The worst frame time for a Radeon R9 290X is just 38.5ms (26 FPS), while a GeForce GTX 980's is 46.7ms (21 FPS).
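The frame-time figures above convert to instantaneous frame rates as 1000 ms divided by the frame time:

```python
def fps(frame_time_ms):
    """Instantaneous frame rate implied by a single frame's render time."""
    return 1000.0 / frame_time_ms

print(round(fps(38.5)))  # Radeon R9 290X worst frame: 26 FPS
print(round(fps(46.7)))  # GeForce GTX 980 worst frame: 21 FPS
```

This is why worst-case frame times matter more than average frame rates for perceived smoothness: a single 46.7ms frame is a momentary dip to 21 FPS even if the average is far higher.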
AMD takes home an overall win in Dragon Age: Inquisition currently, though Mantle support isn't really ready for prime time. In related news, hypnosec sends word that Chinese hackers claim to have cracked Denuvo DRM, the anti-piracy solution for Dragon Age: Inquisition. A Chinese hacker group has claimed to have cracked Denuvo DRM — the latest anti-piracy measure used to protect PC games. Introduced for the first time in FIFA 15 for PC, the Denuvo anti-piracy solution kept FIFA 15 uncracked for two months and Dragon Age: Inquisition for a month. However, the Chinese hackers claim to have ripped open the DRM after fifteen days of work, and they have uploaded a video to prove their accomplishment. A couple of things need to be pointed out here. First, the Chinese team has merely cracked the DRM, which doesn't necessarily mean that there are working cracks out there. Also, the crack only works on Windows 7 64-bit systems and won't work on Windows 8 or Windows 7 32-bit systems for now. The team is currently working to collect hardware data on processor identification codes.
jones_supa writes: We are all aware of the various chirping and whining sounds that electronics can produce. Modern graphics cards often suffer from this kind of problem in the form of coil whine. But how widespread is it really? Hardware Canucks put 50 new graphics cards side-by-side to compare them solely from the perspective of subjective acoustic disturbance. NVIDIA's reference platforms tended to be quite well behaved, as did their board partners' custom designs. The same can't be said for AMD, since their reference R9 290X and R9 290 should be avoided if you're at all concerned about squealing or any other odd noise a GPU can make. However, the custom Radeon-branded SKUs should usually be a safe choice. While the amount and intensity of coil whine largely seems to boil down to luck of the draw, at least most board partners are quite accommodating with their return policies concerning it.
MojoKid (1002251) writes "Life is hard when you're a AAA publisher. Last month, Ubisoft blamed weak console hardware for the troubles it had bringing Assassin's Creed Unity up to speed, claiming that it could've hit 100 FPS but for weak console CPUs. Now, in the wake of the game's disastrous launch, the company has changed tactics — suddenly, all of this is AMD's fault. An official company forum post currently reads: "We are aware that the graphics performance of Assassin's Creed Unity on PC may be adversely affected by certain AMD CPU and GPU configurations. This should not affect the vast majority of PC players, but rest assured that AMD and Ubisoft are continuing to work together closely to resolve the issue, and will provide more information as soon as it is available." There are multiple problems with this assessment. First, there's no equivalent Nvidia-centric post on the main forum, and no mention of the fact that if you own an Nvidia card of any vintage but a GTX 970 or 980, you're going to see less-than-ideal performance. According to sources, the problem with Assassin's Creed Unity is that the game is issuing tens of thousands of draw calls — up to 50,000 and beyond, in some cases. This is precisely the kind of operation that Mantle and DirectX 12 are designed to handle, but DirectX 11, even 11.2, isn't capable of efficiently processing that many calls at once. It's a fundamental limit of the API, and it kicks in harshly in ways that adding more CPU cores simply can't help with."
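The draw-call argument is easy to put numbers on. At 30 FPS a frame has a budget of about 33 ms; if each DirectX 11 draw call costs some fixed CPU-side overhead, 50,000 calls can eat the whole budget before any game logic runs. A back-of-envelope sketch (the per-call cost below is a hypothetical figure for illustration, not a measured value):

```python
FRAME_BUDGET_MS = 1000.0 / 30  # ~33.3 ms per frame at a 30 FPS target

def submission_time_ms(draw_calls, per_call_overhead_us=1.0):
    """CPU time spent just issuing draw calls, at an assumed fixed cost each."""
    return draw_calls * per_call_overhead_us / 1000.0

print(submission_time_ms(50_000))                    # 50.0 ms
print(submission_time_ms(50_000) > FRAME_BUDGET_MS)  # True: over budget
```

Because this overhead sits on a single submission thread in DX11, extra CPU cores don't reduce it — which is the point of Mantle's and DX12's lower per-call cost and multithreaded command submission.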
MojoKid writes: It has been over six years since Intel first unveiled its Atom CPUs and detailed its plans for new, ultra-mobile devices. The company's efforts to break into smartphone and tablet sales while turning a profit have largely come to naught. Nonetheless, company CEO Brian Krzanich remains optimistic. Speaking to reporters recently, Krzanich opined that the company's new manufacturing partners like Rockchip and Spreadtrum would convert entirely to Intel architectures within the next few years. Krzanich has argued that with Qualcomm and MediaTek dominating the market, it's going to be tougher and tougher for little guys like Rockchip and Spreadtrum to compete in the same spaces. There's truth to that argument, to be sure, but Intel's ability to offer a competitive alternative is unproven. According to a report from JP Morgan, Intel's cost per wafer is currently estimated as equivalent to TSMC's average selling price per wafer — meaning TSMC's costs are well below Intel's break-even point. Today, Intel is unquestionably capable of building tablet processors that offer a good overall experience, but the question of what defines a "good" experience is measured in its similarity to ARM. It's hard to imagine that Intel wants to build market share as an invisible partner, but in order to fundamentally change the way people think about Intel hardware in tablets and smartphones, it needs to go beyond simply being "as good" and break into territory that leaves people asking: "Is the ARM core just as good as the Intel chip?"
MojoKid writes: Dell's Alienware division recently released a radical redesign of its Area-51 gaming desktop. Its 45-degree angled front and rear face plates direct the controls and I/O up toward the user, draw cool air in, and channel warm air up and away from the rear of the chassis; the triangular-shaped machine grabs your attention right away. In testing and benchmarks, the Area-51's new design enables top-end performance with thermal and acoustic profiles that are fairly impressive versus most high-end gaming PC systems. The chassis design is also clean, modular, and easily serviceable. Base system pricing isn't too bad, starting at $1699, with the ability to dial things way up to an 8-core Haswell-E chip and triple-GPU graphics from NVIDIA and AMD. The test system reviewed at HotHardware was powered by a six-core Core i7-5930K chip and three GeForce GTX 980 cards in SLI. As expected, it ripped through the benchmarks, though the price as configured and tested is significantly higher.
An anonymous reader writes: AMD recently presented plans to unify their open-source and Catalyst Linux drivers at the XDC2014 open-source conference in France. NVIDIA's rebuttal presentation focused on supporting Mir and Wayland on Linux. The next-generation display stacks are competing to succeed the X.Org Server. NVIDIA is partially refactoring their Linux graphics driver to support EGL outside of X11, to propose new EGL extensions for better driver interoperability with Wayland/Mir, and to support the KMS APIs in their driver. NVIDIA's binary driver will support the KMS APIs/ioctls but will use their own implementation of kernel mode-setting. The EGL improvements are said to land in their closed-source driver this autumn, while the other changes probably won't be seen until next year.
MojoKid (1002251) writes: A new interview with Assassin's Creed Unity senior producer Vincent Pontbriand has some gamers seeing red and others crying "told you so," after the developer revealed that the game's 900p resolution and 30 fps target on consoles is a result of weak CPU performance rather than GPU limitations. "Technically we're CPU-bound," Pontbriand said. "The GPUs are really powerful, obviously the graphics look pretty good, but it's the CPU that has to process the AI, the number of NPCs we have on screen, all these systems running in parallel. We were quickly bottlenecked by that and it was a bit frustrating, because we thought that this was going to be a tenfold improvement over everything AI-wise..." This has been read by many as a rather damning referendum on the capabilities of the AMD APU that's under the hood of Sony's and Microsoft's new consoles. To some extent, that's justified; the Jaguar CPU inside both the Sony PS4 and Xbox One is a modest chip with a relatively low clock speed. Both consoles may offer eight CPU threads on paper, but games can't access all that headroom. One thread is reserved for the OS and a few more cores will be used for processing the 3D pipeline. Between the two, Ubisoft may have had only 4-5 cores for AI and other calculations — scarcely more than last gen, and the Xbox 360 and PS3 CPUs were clocked much faster than the 1.6 / 1.73GHz frequencies of their replacements.
An anonymous reader writes: AMD is moving forward with their plans to develop a new open-source Linux driver model for their Radeon and FirePro graphics processors. Their unified Linux driver model is moving forward, albeit slightly different compared to what was planned early this year. They're now developing a new "AMDGPU" kernel driver to power both the open and closed-source graphics components. This new driver model will also only apply to future generations of AMD GPUs. Catalyst is not being open-sourced, but will be a self-contained user-space blob, and the DRM/libdrm/DDX components will be open-source and shared. This new model is more open-source friendly, places greater emphasis on their mainline kernel driver, and should help Catalyst support Mir and Wayland.
An anonymous reader writes: Counter-Strike: Global Offensive has finally been released for Linux, two years after its Windows debut. The game is reported to work even on the open-source Intel Linux graphics drivers, but your mileage may vary. When it comes to AMD and NVIDIA drivers, NVIDIA continues to dominate Linux gaming over AMD's Catalyst, which still has performance and other OpenGL issues.
MojoKid (1002251) writes: NVIDIA has launched two new high-end graphics cards based on its latest Maxwell architecture. The GeForce GTX 980 and GTX 970 replace NVIDIA's current high-end offerings, the GeForce GTX 780 Ti, GTX 780, and GTX 770. The GeForce GTX 980 and GTX 970 are somewhat similar, as the cards share the same 4GB frame buffer and GM204 GPU, but the GTX 970's GPU is clocked a bit lower and features fewer active Streaming Multiprocessors and CUDA cores. The GeForce GTX 980's GM204 GPU has all of its functional blocks enabled. The fully loaded GeForce GTX 980 has a base clock of 1126MHz and a Boost clock of 1216MHz; the GTX 970 clocks in with a base clock of 1050MHz and a Boost clock of 1178MHz. The 4GB of video memory on both cards is clocked at a blisteringly fast 7GHz (effective GDDR5 data rate). NVIDIA was able to optimize the GM204's power efficiency by tweaking virtually every part of the GPU, and claims that Maxwell SMs (Streaming Multiprocessors) offer double the performance of GK104's and double the performance per watt as well. NVIDIA has also added support for new features, namely Dynamic Super Resolution (DSR), Multi-Frame Sampled Anti-Aliasing (MFAA), and Voxel Global Illumination (VXGI). Performance-wise, the GeForce GTX 980 is the fastest single-GPU graphics card ever tested. The GeForce GTX 970 isn't as dominant overall, but its performance was impressive nonetheless; it typically performed about on par with a GeForce GTX Titan and traded blows with the Radeon R9 290X.
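The "7GHz effective" figure is a per-pin data rate; multiplying it by the memory bus width and dividing by 8 bits per byte gives peak memory bandwidth. With GM204's 256-bit bus (a published spec, though not stated above), that works out as:

```python
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth: per-pin data rate times bus width, in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(7, 256))  # 224.0 GB/s on the GM204 cards
```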