The AMD Radeon R9 Fury X Review: Aiming For the Top

AMD Radeon R9 Fury X 4GB Review — Tom’s Hardware


Introduction

AMD is known for provoking its fans into frothy feeding frenzies ahead of important launches. After slowly teasing out details of its Radeon R9 Fury X flagship graphics card, it’s time for us to determine if the hype was warranted. Can a more complex GPU, groundbreaking memory technology and closed-loop liquid cooler generate enough performance to usurp Nvidia’s decidedly efficient Maxwell architecture in the GeForce GTX 980 Ti?

AMD’s last piece of ultra-high-end hardware surfaced more than a year ago. The Radeon R9 295X2 was a crowning achievement for the company. It showed that two Hawaii GPUs fit on one graphics card and, unlike the Radeon HD 7990 and 6990 before it, could be cooled relatively quietly.

The secret to AMD’s success was a closed-loop liquid cooler. A large radiator and 120mm fan effectively exhausted waste heat right out the back of your chassis. The combination didn’t make much noise, and yet it enabled enough thermal capacity for AMD to overclock its big processors beyond the reference Radeon R9 290X.

Best of all, it surfaced for $1500—half the price Nvidia was asking for its ill-fated GeForce GTX Titan Z, a card that ate up three expansion slots and still needed significant detuning from its Titan pedigree to behave under air cooling.

We certainly admired what AMD accomplished in its Radeon R9 295X2. But over time, and in the face of increasingly faster single-GPU boards from Nvidia, the 295X2 became a reminder of the company’s tendency toward brute force, rather than efficiency, to compete. Meanwhile, prices on the dual-GPU board dipped as low as $600—a steal for anyone willing to cope with its physical size and the sometimes-frustrating state of CrossFire support.

Technical Specifications

[Specification table comparing the AMD Radeon R9 Fury X, Radeon R9 290X and Nvidia GeForce GTX 980 Ti.]


A Blast From The Past

The Radeon R9 Fury X is born of the same DNA, with Graphics Core Next in its heart and liquid drawing thermal energy away from the massive Fiji GPU. It’s a single-processor board, though, so it doesn’t need to reside on an especially long PCB. Further, Fury X is AMD’s first graphics card featuring HBM, putting 4GB of stacked dies on a silicon interposer, right next to Fiji, further condensing the necessary dimensions.

What we’ve been promised, of course, is that the new GPU’s bigger pool of processing resources plus an unprecedented amount of memory bandwidth come together on a graphics card able to beat Nvidia’s GeForce GTX 980 Ti (and at a similar $650 price point, too).

AMD isn’t leaving this card’s fate to chance. The company’s marketing machine is invoking a brand from the past—one that predates Tom’s Hardware, even. Rage was what ATI called its first 3D accelerators in 1995, back before PCI Express or even AGP. And yes, I owned an original 3D Rage graphics card. The Rage Pro, Rage 128 Pro and Rage Fury MAXX also found their way into my various PCs. This is AMD conjuring up some of the mojo that made ATI a company it wanted to acquire for more than $5 billion back in 2006.

Fiji Takes Shape

Is the Radeon R9 Fury X worthy of such a designation? It wields promising specifications, that’s for sure. We covered most of the vitals in a preview last week. But to recap, the card centers on AMD’s new Fiji GPU.

Both AMD and Nvidia knew that 28nm manufacturing would be a long-term affair. However, AMD acknowledges it was counting on process technology to evolve more quickly. The same almost certainly applies to Nvidia. As reality set in, the two companies adapted and took different paths in designing their newest GPUs. Whereas GM200 measures 601mm², Fiji is almost as large at 596mm². AMD crams a claimed 8.9 billion transistors into that space, and then mounts the chip on a 1011mm² silicon interposer, flanking it with four stacks of High Bandwidth Memory.

A quick glance at Fiji’s block diagram is suggestive of the Hawaii design launched back in 2013, if only because they’re both organized into four Shader Engines, each with its own geometry processor and rasterizer, plus four render back-ends capable of 16 pixels per clock cycle. AMD doesn’t touch any of that. But the company does replicate more of the Compute Units in each Shader Engine, deploying 16 rather than 11. With 64 shaders per CU, you end up with 1024 per Shader Engine and an aggregate of 4096 shaders across the entire GPU. Furthermore, AMD maintains four texture filter units per CU, yielding a total of 256 in Fiji versus Hawaii’s 176.
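To make those unit counts concrete, here is a minimal Python sketch of our own (constants taken from the figures above) that tallies Fiji’s execution resources, with Hawaii’s 11-CU shader engines included for comparison:

```python
# Rough tally of Fiji's execution resources from the per-shader-engine figures above.
SHADER_ENGINES = 4
SHADERS_PER_CU = 64
TMUS_PER_CU = 4

def gpu_totals(cus_per_engine):
    cus = SHADER_ENGINES * cus_per_engine
    return {
        "compute_units": cus,
        "shaders": cus * SHADERS_PER_CU,
        "texture_units": cus * TMUS_PER_CU,
    }

print("Fiji:  ", gpu_totals(16))   # 64 CUs, 4096 shaders, 256 texture units
print("Hawaii:", gpu_totals(11))   # 44 CUs, 2816 shaders, 176 texture units
```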

Clearly, Fiji’s theoretical shading, compute and texture filtering performance are way up. Without corresponding improvements to the geometry engines or ROP count, however, aren’t we staring down the barrel of another big bottleneck? It’s going to depend on the workload. Just remember that back when it introduced Hawaii, AMD made a concerted effort to augment geometry throughput with the four-way Shader Engine layout and boost pixel fill rate. Representatives went so far as to posit that memory bandwidth was the GPU’s limiter, despite a wide 512-bit bus. Today, the company says its analysis suggests standard eight-bit-per-channel raster ops rarely bottleneck performance. Sixteen-bit-per-channel ops can be more of a challenge; however, the combination of HBM and color compression allows Fiji to realize GCN’s ability to support full-rate 16bpc raster operations where previous GPUs were, in fact, bottlenecked. Does AMD wish it could have built a bigger engine? That seems to have been the plan. Facing a limit to the interposer’s size, though, AMD pulled right up to the ceiling for its GPU, yielding Fiji.

What you don’t see in the processor’s block diagram are the incremental improvements made to AMD’s Graphics Core Next architecture, some of which actually help alleviate the bottlenecks we’d worry about. Hawaii employed a second iteration of GCN, which was subsequently updated for Radeon R9 285’s Tonga GPU. Fiji inherits the benefits of what AMD calls its third-gen GCN design. One such advantage is updated geometry processors that improve tessellation performance. Lossless color compression for frame buffer reads and writes, new 16-bit integer/floating-point instructions and a doubling of L2 cache to 2MB are on the list too. Less relevant to Fiji’s 3D pipeline but no less welcome are a higher-quality display scaler and an updated video decode engine supporting accelerated HEVC playback.

On the compute side, Fiji incorporates improved task scheduling and some new data parallel processing instructions to go along with the eight Asynchronous Compute Engines carried over from Hawaii. Given this GPU’s 4096 shaders and 1050MHz maximum core frequency, AMD can claim an 8.6 TFLOPS single-precision compute rate. However, it limits FP64 to 1/16th of that, yielding a DP ceiling of 537.6 GFLOPS (less than Hawaii). After the handicapping of GM200, consider this another nod to the purpose-built nature of high-end gaming GPUs.
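Those throughput figures fall straight out of the unit count and clock speed; a quick back-of-the-envelope Python check of our own, assuming the usual two FLOPs per shader per clock for a fused multiply-add, reproduces them:

```python
# Theoretical throughput for Fiji at its 1050MHz peak clock.
shaders = 4096
clock_ghz = 1.05          # 1050MHz
ops_per_clock = 2         # one fused multiply-add counts as two FLOPs

fp32_gflops = shaders * clock_ghz * ops_per_clock   # ~8601.6 GFLOPS, i.e. 8.6 TFLOPS
fp64_gflops = fp32_gflops / 16                      # FP64 capped at 1/16 rate: ~537.6 GFLOPS

print(f"FP32: {fp32_gflops / 1000:.1f} TFLOPS")
print(f"FP64: {fp64_gflops:.1f} GFLOPS")
```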

The HBM Hookup

Where AMD makes up ground is its implementation of High Bandwidth Memory, which propels peak throughput from 320 GB/s on R9 290X to Fury X’s 512 GB/s. The low-level details are already fairly well-known, but HBM achieves its big bandwidth numbers by stacking DRAM vertically. Each die has a pair of 128-bit channels, so four create an aggregate 1024-bit path.

This first generation of HBM runs at a relatively conservative 500MHz and transfers two bits per clock. GDDR5, in comparison, is currently up to 1750MHz at four bits per clock (call it quad-pumped, to borrow a term from the old Pentium 4 front-side bus days). That’s the difference between 1 Gb/s and 7 Gb/s. Ouch. But factor in the bus width and you have 128 GB/s per stack of HBM versus 28 GB/s from a 32-bit GDDR5 package. A card like GeForce GTX 980 Ti employs six 64-bit memory controllers. Multiply that all out and you get its 336 GB/s specification. Meanwhile, Radeon R9 Fury X employs four stacks of HBM, which is where we come up with 512 GB/s.
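The bandwidth arithmetic is easy to verify with a short sketch of our own; the data rates and bus widths are the ones quoted in the paragraph above:

```python
# Peak bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

hbm_per_stack = bandwidth_gb_s(1, 1024)      # 500MHz x 2 bits per clock = 1 Gb/s -> 128 GB/s
fury_x_total  = 4 * hbm_per_stack            # four stacks -> 512 GB/s

gddr5_package = bandwidth_gb_s(7, 32)        # 1750MHz x 4 = 7 Gb/s -> 28 GB/s per 32-bit package
gtx_980_ti    = bandwidth_gb_s(7, 6 * 64)    # six 64-bit controllers -> 336 GB/s

print(fury_x_total, gtx_980_ti)              # 512.0 336.0
```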


It’s not often you see such an instrumental specification jump by 60%, or sit more than 50% higher than the competition. There’s no doubt that HBM plays a big role in Fury X’s performance story, or that it would have been even more influential had Fiji been a bigger chip. But here’s where we’re thrown a bit of a curveball. You’re going to see in the performance results that Radeon R9 Fury X and GeForce GTX 980 Ti are closely matched today. We know that this is AMD’s first outing with Fiji and HBM, and it’s logical to assume the company’s driver team will extract some more performance from the combination. However, AMD has specific release targets planned when it expects significant speed-ups. We certainly can’t predicate a conclusion on guesses as to where Fury X will land. Still, it’s interesting that AMD sees unrealized potential.

There is some uncertainty about Fury X’s long-term prospects given its 4GB of HBM. Indeed, it’s easy to get spooked in the face of 6GB 980 Tis and 12GB Titan Xes. None of our benchmarks at 4K suggest that Radeon R9 Fury X will prove problematic with 4GB, though. We were able to set up a fairly contrived combination of settings in Grand Theft Auto V that blew past 4GB of memory use and dropped frame rates to single digits. But the game was barely playable by then anyway. AMD does find itself in a somewhat strange position, what with arming Radeon R9 390X and 390 with 8GB and all. Still, we just don’t think the flagship’s halved capacity is much of a handicap. At the resolutions and settings required to exceed 4GB, a single Fiji already finds itself out of its element. Moreover, AMD says there’s a lot more it could be doing to manage memory that wasn’t happening before. Now that the issue is more complicated than simply throwing down twice as much GDDR5, the company is motivated to take better care of the capacity available. This is receiving engineering attention now, naturally.


AMD Radeon R9 Fury X review

AMD is leading us into a new, exciting era of graphics technology — where ultra-fast memory is connected directly to the core, enabling higher performance, enhanced power efficiency and a new wave of small form-factor graphics cards. The Radeon R9 Fury X is the first GPU to arrive boasting this cutting-edge tech, with AMD telling us that it is the fastest single-chip GPU on the market, a title currently held by Nvidia’s mammoth Titan X 12GB. Well, the reality is that the Fury X is a fascinating first-gen product with plenty of positives, but in terms of raw performance, both Nvidia’s Titan X and its cut-down GTX 980 Ti are generally faster and more versatile for the high-end enthusiast market.

As always, performance is king, so AMD’s inability to be comprehensively competitive with Nvidia’s GM200 across the length and breadth of our benchmarks is a little disappointing — but certainly in terms of the physical package, it’s great to see that the poor reference cooling design of the 200 series is now a thing of the past. The Fury X is built from quality materials that look good and even feel good, and the dinky, compact nature of the 7.5-inch board is quite remarkable — it’s a marvel of integration. The work AMD carried out on the Radeon R9 295X2’s reference water cooler is carried over and refined on Fury X, which also has its own closed-loop set-up that is significantly quieter than Nvidia’s reference coolers, though it is accompanied by a continuous, consistent, high-pitched tone — presumably emanating from the pump. It was a little bothersome on the test bench, but will hopefully be less of an issue when the card is installed deep within a decent case.

Radeon R9 Fury X specs

The Fiji processor in the R9 Fury X is based on AMD’s third generation GCN architecture, previously found in the R9 285/380, and codenamed Tonga. Doubling up on Tonga’s stream processors brings the shader count up to a gargantuan 4096, up from the 2816 found in the R9 290X/390X. This core is then combined with the ultra-fast, ultra-wide HBM RAM.

  • Stream Processors: 4096
  • Texture Units: 256
  • ROPs: 64
  • Max Clock: 1050MHz
  • Memory: 4GB HBM
  • Memory Clock: 500MHz
  • Bandwidth: 512GB/s
  • Process: 28nm
  • Transistor count: 8.9bn
  • Max TFLOPs: 8.6
  • Die Size: 596mm2
  • TDP: 275W

AMD announced a $650 price-point for Fury X in the USA, bringing it into line with the GTX 980 Ti. In the UK, Fury X starts at £509 — which is a good deal cheaper than current 980 Ti prices here.

Comparisons with decent aftermarket coolers are interesting though — the fan mounted onto the radiator isn’t silent when it’s really being pushed and the overall package isn’t that much quieter than the MSI cooler we saw recently on the R9 390X. The difference comes down to the fact that the radiator is mounted on the case where it can directly push heat out of the chassis — something the fancy third-party coolers rely upon case airflow to achieve.

The aesthetics are finished off with a red LED Radeon logo, along with a series of lights designed to give some rudimentary measurement of GPU load. Other features include a dual-BIOS switch (there are two BIOSes, one of which you can re-write), while power is supplied via two eight-pin inputs fed from your PSU. Display outputs consist of three DisplayPorts, along with HDMI 1.4a video. The end of the DVI port is nigh, it seems.


This is usually the part of the review where we get some idea of a new GPU’s capabilities by running it through our Crysis 3 gameplay test — where we attempt to run the game at a display’s native resolution and as close to its refresh rate as possible: 60Hz. Which brings up an interesting point — Fury X is seemingly targeted at 4K gamers, but the reality is that the latest top-tier GPUs are much more suited to 1440p gameplay with all the visual trimmings. And this presents a slight issue: as you’ll see later on this review, Fury X really is at its best — and at its most competitive — at UHD. Our solution? To carry out the Crysis test at both 1440p and 4K on both Fury X and GTX 980 Ti, so it’s lucky that our brand new 4K, 60Hz DisplayPort 1.2 capture solution came online just in time for the occasion.

It is worth bearing in mind that 4K is a 4x increment in pixel count over 1080p, and a 2.25x boost over 1440p. Driving that sort of resolution on top-end quality settings is a fool’s errand: something has to give, so at UHD we drop down from Crysis 3’s very high quality preset to high. Suffice to say that it reduces GPU overhead massively. As is often the case when ultra settings are dropped down just one notch, there’s only a limited impact on image quality, and that’s mostly unseen in the thick of the action. To add some spice to the proceedings, we also add the GTX 980 Ti to the mix, operating at exactly the same settings.
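As an aside, the resolution arithmetic quoted above is easy to check; a quick sketch of our own for anyone who wants the raw pixel counts:

```python
# Pixel counts for the resolutions discussed in this review.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["4K"] / pixels["1080p"])   # 4.0  -> four times the pixels of 1080p
print(pixels["4K"] / pixels["1440p"])   # 2.25 -> 2.25x the pixels of 1440p
```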

The end result? Well, 60fps can’t be sustained at either of our quality setting/resolution combos but it’s clear that it’s the GTX 980 Ti that gets closer to the target. We’d need to drop down to medium to sustain something closer to 60fps at 4K, which strongly suggests to us that even the latest ‘uber’ GPUs don’t have the plumbing to drive UHD displays with gameplay fast enough to match the typical 60Hz refresh.

Fury X and GTX 980 Ti compared in Crysis 3 as we aim — with only limited success — to drive 60fps at 1440p on very high settings, and 4K at the high preset.

Crysis 3 V-Sync Gameplay R9 Fury X 1440p GTX 980 Ti 1440p R9 Fury X 4K GTX 980 Ti 4K
Lowest Frame-Rate 40.0fps 44.0fps 28.0fps 30.0fps
Dropped Frames (from 18650 total) 1141 (6.12%) 624 (3.35%) 5320 (28.53%) 3626 (19.44%)
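For clarity, the dropped-frame percentages in the table are simply each card’s dropped frames as a share of the 18,650 frames captured per run; a quick sketch of our own using the table’s figures:

```python
# Dropped-frame percentages from the Crysis 3 v-sync capture (18,650 frames total).
TOTAL_FRAMES = 18650
dropped = {
    "R9 Fury X 1440p":  1141,
    "GTX 980 Ti 1440p":  624,
    "R9 Fury X 4K":     5320,
    "GTX 980 Ti 4K":    3626,
}

for card, frames in dropped.items():
    print(f"{card}: {100 * frames / TOTAL_FRAMES:.2f}% dropped")
# 6.12%, 3.35%, 28.53%, 19.44% -- matching the table above
```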


But this is just one game, one experience. To get an idea of what Fury X is capable of across more modern titles, it’s time to break out the benchmarks, based on the Fury X paired with a Core i7 4790K system overclocked to 4.6GHz, matched with 16GB of 1600MHz DDR3 and running from a Crucial SSD. We usually start our tests at 1080p and scale up from there, but starting at 4K is perhaps the more logical approach — and it’s certainly where AMD is most competitive. Similar to the Crysis 4K gameplay test, we knock down all games by one ‘notch’ from maximum, in an attempt to remove the worst of the diminishing returns found on ultra-level settings.

Out of the nine games, AMD musters four wins over the 980 Ti. Three of them — Ryse, Far Cry and Shadow of Mordor — aren’t really that much of a surprise. These titles always seem to run faster on AMD’s hardware (the result, we suspect, of console GCN optimisations feeding through to PC GCN hardware). Titles like The Witcher 3 and Call of Duty only show a small advantage to Nvidia, but Battlefield 4 sees the 980 Ti storm ahead, dominating by over 19 per cent — the biggest margin of the lot.

Is 4GB of VRAM enough?

There’s a simple rule of thumb developers have been telling us for years now — when it comes to making a GPU purchase, the more VRAM you have onboard and the faster it is, the better. This puts AMD in a difficult position with the R9 Fury X. HBM is expensive, and while it’s ultra-fast, current designs are limited to 4GB of RAM. Meanwhile, Nvidia ships the GTX 980 Ti with 6GB of GDDR5, while AMD’s own R9 390 and 390X have a colossal 8GB of onboard memory.

The question is whether 4GB is enough, or if current and future titles need more. Our benchmarks typically omit multi-sampling anti-aliasing (a traditionally large drain on memory) so across 1080p, 1440p and 4K, we don’t tend to see anything above 4GB of RAM making much in the way of difference right now. In future, that might be entirely different, of course.

However, one of our tests does indeed push memory to the limits — Assassin’s Creed Unity, running at 4K on very high settings with FXAA. And here’s where we see some interesting data. Click on the images in this sidebar, and note the latency spikes on the red and orange lines (the rectangular drops on the right graph) representing the Fury X and the R9 290X — the only 4GB cards in the comparison. Then note the lack of such spikes on the R9 390X and GTX 980 Ti, both of which have much more than 4GB of RAM.

If there’s a smoking gun here, it’s that we are running 290X and 390X on the same driver — that’s the same hardware effectively, the only difference coming from clock-speed and VRAM allocation. If more VRAM wasn’t helpful here, we should expect the R9 390X to stutter just like the Fury X and the 290X, but it doesn’t. This may be indicative that VRAM — or lack of it — is the culprit for the latency spikes we do see, even though the Fury X has delta compression technology that the 290X lacks, which should help to make its 4GB go further.

Right now, we feel that 4GB isn’t a deal-breaker for the Fury X — most games fit within that allocation relatively comfortably as the benchmarks attest, though we do shy away from Shadow of Mordor’s ultra textures (as the developer does not recommend them for sub-6GB cards). However, based on our discussions with developers, we feel that VRAM utilisation is only moving in one direction — upwards — and that’s a consequence of the lavish amount of unified memory available on the current generation of consoles dictating the trend of development.

As one well-placed developer told us recently: "The harder we push the hardware and the higher quality and the higher res the assets, the more memory we’ll need and the faster we’ll want it to be. Our games currently in development are hitting memory limits on the consoles left, right and centre now — so memory optimisation is on my list pretty much constantly."

However, once overclocking is factored in, the GTX 980 Ti is back in the game on all of our test games, increasing its lead or getting very, very close against the titles that favour AMD hardware. We could add 200MHz to the core clock and 400MHz to the RAM on the Nvidia card, where we were stable on our most stringent overclocking stress tests. Fury X has no memory overclocking (AMD tells us that it is pointless, even if you could do it) but you should be able to up the core by 90-100MHz in many titles. However, the card consistently failed our stress tests — we were finally stable at 1113MHz (a six per cent boost via AMD Overdrive), just 63MHz over the base clock.

At 3840×2160 — 4K resolution — It’s a very close battle between Fury X and GTX 980 Ti. It’s where the AMD card is at its most competitive, beating 980 Ti on four games out of the nine we test.

3840×2160 (4K) R9 390X GTX 980 Titan X GTX 980 Ti GTX 980 Ti OC R9 Fury X R9 Fury X OC
The Witcher 3, High, HairWorks Off, Custom AA 29.1 27.7 37.5 36.9 40.7 36.2 37.6
Battlefield 4, High, Post-AA 44.5 46.8 61.3 61.0 69.6 51.0 52.5
Crysis 3, High, SMAA 40.2 39.0 52.4 52.5 59.7 49.2 51.1
Assassin’s Creed Unity, Very High, FXAA 22.7 21.8 27.4 26.5 29.0 25.3 26.7
Far Cry 4, Very High, SMAA 44.4 36.1 46.7 47.1 50.9 50.5 50.5
COD Advanced Warfare, Console Settings, FXAA 76.4 72.0 90.8 86.9 96.9 85.3 88.0
Ryse: Son of Rome, Normal, SMAA 37.8 31.5 42.2 41.7 45.6 44.0 45.7
Shadow of Mordor, High, High Textures, FXAA 50.1 42.4 54.8 54.8 59.7 55.5 57.1
Tomb Raider, Ultra, FXAA 51.4 47.1 64.6 61.3 66.0 63.9 66.8

Based on our initial Crysis 3 test, there’s a pretty strong argument that 2560×1440 resolution — not 4K — is perhaps the natural home of this new breed of high-end GPU. If we can run Crytek’s scalable title on max settings at something approaching a locked 60fps, that’s a very strong statement — we can move up to the ultra presets we couldn’t achieve at 4K, and in titles where gameplay can’t match the 60Hz refresh of the display, we can strategically tweak settings to increase frame-rates without losing that much in the way of image quality.

What immediately becomes obvious is that the competitiveness of the Fury X begins to slip. It only beats 980 Ti on two of the titles we test here: there’s a useful 4.6 per cent boost on Far Cry 4, but Ryse is only 0.3 per cent faster — that’s margin of error stuff. Shadow of Mordor, Assassin’s Creed Unity and Crysis 3 show fairly close results between the two cards, with GTX 980 Ti between four to six points clear over the AMD challenger. However, The Witcher 3, Battlefield 4 and Advanced Warfare are all at least 17 per cent faster on Nvidia’s competing hardware.

But what’s fascinating here is the extent to which Nvidia’s overclocking yields dividends. With our stable speed increases in place on both products, the gains on the GTX 980 Ti are eye-opening. On the AMD side, gains are minimal and in the case of Assassin’s Creed Unity, there’s a slight drop, albeit one in the margin of error. As you’ll see when the focus shifts to 1080p, the trend seems to be that Fury X is competitive at 4K, but loses its edge the lower the rendering resolution.

If you’re looking to get an excellent 60fps experience from this new wave of graphics cards, we think that a 2560×1440 monitor is the best option right now, preferably one with FreeSync or G-Sync. However, at this resolution, Fury X is not as competitive as its Nvidia counterpart.

2560×1440 (1440p) R9 390X GTX 980 Titan X GTX 980 Ti GTX 980 Ti OC R9 Fury X R9 Fury X OC
The Witcher 3, Ultra, HairWorks Off, Custom AA 43.5 47.5 63.3 61.7 70.9 52.4 54.3
Battlefield 4, Ultra, 4x MSAA 54.5 57.0 76.1 75.0 86.7 62.2 64.6
Crysis 3, Very High, SMAA 52.3 50.0 68.0 66.2 75.7 63.4 65.4
Assassin’s Creed Unity, Ultra High, FXAA 38.4 39.7 49.6 48.3 54.4 45.8 45.4
Far Cry 4, Ultra, SMAA 69.0 61.3 77.0 75.4 86.9 78.9 78.9
COD Advanced Warfare, Extra, FSMAA 94.7 98.2 123.2 121.3 139.4 103.0 105.6
Ryse: Son of Rome, High, SMAA 62.2 54.1 72.8 71.2 83.3 71.4 73.0
Shadow of Mordor, Ultra, High Textures, FXAA 74.4 66.0 87.2 87.2 101.8 82.5 86.4
Tomb Raider, Ultimate, FXAA 75.6 76.7 101.9 99.2 117.2 91.6 95.7

Let’s be clear — Fury X and GTX 980 Ti aren’t really designed for 1080p gaming, and it’s where we saw the most disappointing returns over lower level cards in both our Titan X and 980 Ti reviews. However, although results were not as high as we hoped, there were still some clear and obvious gains with Titan X and GTX 980 Ti, and there are some practical applications for throwing a ton of GPU power at a full HD display — 120Hz gaming and stereoscopy, for example.

While there is some scalability in Fury X, it’s safe to say that the R9 390X is the better bet if you’re after an AMD card designed for 1080p gameplay. The GTX 980 Ti is faster than Fury X in every game we tested, and remarkably the non-Ti 980 — a much cheaper card — is competitive on The Witcher 3, Battlefield 4, Assassin’s Creed Unity and Call of Duty. The 980 Ti is 23 to 33 per cent faster in The Witcher 3, Battlefield 4, Far Cry 4 and Advanced Warfare. Remarkably, Far Cry 4 is actually a touch slower on the Fury X than it is at 1440p — margin of error stuff perhaps, but this may suggest that the GPU hardware is not the bottleneck. On several titles, R9 390X gets very, very close too — in Call of Duty, for example. These strange results are actually one of the reasons why this review is a little late. We had to re-assess and re-bench the data. We did it several times, because it just didn’t look right, but the same results kept coming back.

We can only speculate as to the reasons why, but data from the excellent hardware.info — who went the extra mile and benched at both medium and ultra settings at full HD — confirms what we see here. Whether it’s down to AMD’s well-known issues with its DX11 API overhead, or whether the Fiji hardware design simply works better with higher resolutions, the R9 Fury X simply doesn’t favour gameplay at 1080p.

It’s safe to say that Fury X’s 1080p performance is disappointing. You can make the case that the card isn’t made for 1080p gaming, but regardless, we shouldn’t see GTX 980 Ti command this much of an advantage and it doesn’t explain how R9 390X and GTX 980 can challenge it in several titles.

1920×1080 (1080p) R9 390X GTX 980 Titan X GTX 980 Ti GTX 980 Ti OC R9 Fury X R9 Fury X OC
The Witcher 3, Ultra, HairWorks Off, Custom AA 57.4 65.8 84.4 82.6 92.1 67.1 70.2
Battlefield 4, Ultra, 4x MSAA 78.3 86.5 112.4 109.9 125.4 86.9 89.9
Crysis 3, Very High, SMAA 80.1 81.5 105.2 104.0 115.5 94.3 96.8
Assassin’s Creed Unity, Ultra High, FXAA 56.0 62.4 74.7 74.4 84.3 62.8 65.0
Far Cry 4, Ultra, SMAA 82.4 87.4 101.4 101.2 103.0 75.7 78.5
COD Advanced Warfare, Extra, FSMAA 112.3 128.0 159.9 156.8 173.1 115.1 115.5
Ryse: Son of Rome, High, SMAA 81.8 75.8 99.2 97.8 109.5 85.1 86.0
Shadow of Mordor, Ultra, High Textures, FXAA 101.9 91.7 119.0 118.5 135.4 110.2 113.0
Tomb Raider, Ultimate, FXAA 107.1 118.2 150.1 150.3 168.2 127.4 132.3

Scalability at lower resolutions is clearly a concern, but from an overall hardware perspective there’s much to like about the Fiji chip. First of all, after the significant power consumption of the 390X, we were concerned that an even bigger chip based on the GCN architecture would be even more of an energy hog. But the good news is that the Fury X hands in a substantial performance boost over the 390X and does so while drawing considerably less power. This may explain how the water-cooling set-up is able to keep the GPU at very low temperatures. In a hot office at 27 degrees Celsius, with the Fury X running through extended overclock stress tests with PowerTune pushed to its 150 per cent maximum, we never saw temperatures exceed 64 degrees. In most use-case scenarios, it’s ten degrees lower.

And in more good news, we also found that Fury X is much, much closer to the GTX 980 Ti in terms of power consumption — a great achievement bearing in mind how much praise Nvidia’s hardware has received for its efficiency. After the 80-100W gulf we saw between R9 290X and GTX 980, the gap between Fiji and GM200 is just 32W at stock speeds in our tests. With both graphics cards overclocked as far as we could push them, there’s just 6W between them — though the GTX 980 Ti is pushing out more frames.

The conclusion we draw from this is positive — our big concern going into Fury X testing was that the water-cooler was there to manage excessive heat, just as it was on the R9 295X2. A hot chip would also have caused problems for the upcoming air-cooled Fury, coming along some time next month. Based on the kind of heat dissipation we saw on the MSI Radeon R9 390X we reviewed, it’s our contention that even a fully enabled Fiji processor at the same clocks as Fury X could be cooled by air — though we may need to forego the smaller form factor to accommodate a larger heat sink and fan. By extension, the power efficiency on display in Fury X also suggests that the small form factor 175W Fury Nano should have no thermal problems whatsoever and could still pack quite a kick.

In our overclocking stress-testing, we found that this scene from Crysis 3 incurred a larger power draw than anything we’ve previously seen, so we tested Fury X and GTX 980 Ti here on our benchmarking system, though we did remove the overclock from our Core i7 4790K to lessen any spikes in consumption from the CPU.

GTX 980 Ti R9 Fury X GTX 980 Ti OC R9 Fury X OC
Peak System Power Draw 375W 407W 421W 427W
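The gaps quoted above fall straight out of this table; a minimal check of our own using the measured figures:

```python
# Peak system power draw (watts) from the Crysis 3 stress scene above.
peak_draw_w = {
    "GTX 980 Ti":    375,
    "R9 Fury X":     407,
    "GTX 980 Ti OC": 421,
    "R9 Fury X OC":  427,
}

print(peak_draw_w["R9 Fury X"] - peak_draw_w["GTX 980 Ti"])        # 32W gap at stock
print(peak_draw_w["R9 Fury X OC"] - peak_draw_w["GTX 980 Ti OC"])  # 6W gap when both overclocked
```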

Radeon R9 Fury X: the Digital Foundry verdict

There have been some excellent deep dives on the make-up of the Fiji processor at the heart of the Radeon R9 Fury X — one of the best we’ve seen comes from The Tech Report, breaking down performance from individual components of the GPU and analysing their strengths and weaknesses. It shows that while a great many elements of the chip are best in class, others show little improvement from R9 290X — which may explain some of the results seen at lower resolutions. But the overall takeaway we have is that the Fiji hardware is a strong technological achievement — and as seen in the 4K results, where GPU capabilities are the absolute focus of attention, Fury X is competitive — it’s a serious rival, and that’s exactly what the market needs. The fact that AMD’s new card is a good deal cheaper than current GTX 980 Ti prices (in the UK at least) is also a factor worth taking into account.

On the flip side, if you’re pushing for balance between frame-rates and visual refinement, even with this new wave of uber-cards, you need to drop down from 4K to a more reasonable resolution — and the further down you drop, the more dominant GTX 980 Ti becomes, especially so when overclocked. But on a more general level, we do have to wonder whether 4K may well be something of a blind alley for PC gaming, bearing in mind that the physical size of 4K PC displays is remaining static (which is why many PC gamers are considering large 4K UHD TVs as monitor replacements). Right now, our gut feeling remains that the new wave of 34-inch 3440×1440 monitors with the super-wide 21:9 aspect ratio may well be the more natural home for the likes of the Fury X and the GTX 980 Ti — there’s just 60 per cent of the resolution to drive, and there’s a generally more immersive experience owing to the expanded field of view. LG, Asus and others are also incorporating FreeSync and G-Sync technologies into these panels too, making them even more compelling — we hope to review one of these displays soon.

But that’s a discussion for another time. In the here and now, it’s fair to say that AMD has produced an innovative, powerful piece of hardware — and if 4K gaming is your bag, it’s a very serious contender. But there’s the sense that the complete package hasn’t quite been delivered — the hardware is there, but perhaps the accompanying software has fallen short. We already know that DX12 and Vulkan will solve AMD’s API overhead issues (which may have something to do with the sub-par 1080p results), but that’s going to take a while to proliferate into the games we actually play. With HBM, we are already seeing sleek, smaller form factors, but right now there’s no knock-out blow in terms of gaming performance vs old-school GDDR5 VRAM. And in terms of flexibility — whether we’re talking about running at sub-4K resolutions, or even support for HDMI 2.0, we’re surprised to see AMD concede ground to Nvidia in any area on what is a flagship product.

In conclusion, a straight comparison of these high-end GPUs is fascinating. AMD and Nvidia have invested in innovation in completely different areas, the red team banking on the remarkable HBM, its counterpart relying more heavily on the power efficiency and performance of the second generation Maxwell architecture (and leaving its own version of HBM in reserve until next year’s hardware). At 4K, these entirely different approaches have ended up yielding remarkably similar performance with accomplished levels of efficiency — but what we really need from both sides is brand new core technology, combined with smaller 16nm FinFET manufacturing, due next year. Factor in HBM and DX12 and within a couple of years, the next-gen Fury and its Titan equivalent should leave both of these current cards in the dust. In the here and now, AMD may not have bested Nvidia — but as we move away from the 28nm era, it’s clear that the appetite and the technology are there for us to see some serious competition between the two.

AMD Radeon R9 Fury X

PROS

  • Consistently good performance at 1440p
  • Comparable performance in 4K tests
  • Innovative water-cooled design

CONS

  • Behind Nvidia’s GTX 980 Ti in some key tests
  • Water cooling can be cumbersome

Key Specs

  • Review price: £546.00
  • 1050MHz core clock
  • 4GB 500MHz HBM memory
  • 8.9 billion transistors
  • 4096 stream processors
  • Requires two 8-pin power connectors

What is the AMD Radeon R9 Fury X?

Lately, AMD has been quiet in the graphics market, which has allowed Nvidia to race ahead with its new GeForce GTX Titan X and GTX 980 Ti cards. Now we can reveal the reason for that silence: the Radeon R9 Fury X has arrived.

This is AMD’s first new GPU in over a year. It was designed using the familiar Graphics Core Next architecture, but virtually every component has been turbocharged — and to deal with the change, AMD ditched the traditional air cooler in favor of a water-based setup.

The Fury X isn’t cheap at £546, but it’s pitched against the GTX 980 Ti, leaving the GTX Titan X to reign supreme at the £800+ end of the market.


AMD Radeon R9 Fury X: Under the Hood

The Fury X is built on the Graphics Core Next architecture that has underpinned AMD chips since 2011. The numbers have grown in almost every department, but the structure remains the same: the GPU delegates tasks to four shader engines, each equipped with 16 compute units. Each of these compute units contains 64 stream processors, which means each shader engine has 1024 stream processors; the entire GPU boasts 4096 of these critical components.

That’s a big difference from AMD’s previous flagship — and from the Fury X’s current competition. The R9 290X and Nvidia’s GeForce GTX 980 Ti both make do with 2816 stream processors, and the older AMD card used only 11 compute units in each shader engine, not 16.

Some of the smaller GPU components remain the same. Each shader engine still has its own geometry processor and rasterizer, and the core is still built on a 28nm process.

The new chip is a colossal 596mm² and is made up of 8.9 billion transistors — almost a billion more than the Nvidia card. Clock speeds have also improved: at 1050MHz, the Fury X is 50MHz faster than the R9 290X and GTX 980 Ti.


The Fury X’s impressive start continues with its throughput figures. Its 8601 GFLOPS of single-precision compute puts it well ahead of the GTX 980 Ti’s 5632 GFLOPS, and the pattern is repeated in double-precision calculations. Its memory bandwidth of 512GB/s also easily surpasses the GTX 980 Ti’s 336GB/s.

That big jump in memory performance is down to AMD’s new memory system: the usual GDDR5 has been replaced by High Bandwidth Memory, or HBM. The new technology is designed around a much wider interface than previous cards: GDDR5-based boards top out at 512-bit interfaces, but the 4GB of HBM inside the Fury X is accessed over a mighty 4096-bit bus.

The new memory stacks its chips vertically rather than spreading them across the board; the Fury X currently has four stacks of 1GB each, and clock speeds are lower. On paper, the Fury X’s 500MHz memory clock is far slower than any other high-end GPU’s. Regardless, the wider bus and the higher overall bandwidth it delivers make AMD confident.

Innovation isn’t limited to the inside of the card, as the cooling hardware demonstrates. The Cooler Master-built unit is equipped with smart braided hoses and a single 120mm fan, and it’s modest in size: not much taller or wider than the fan itself, it’s comparable to the smaller all-in-one water coolers commonly used for CPU cooling.

You will need a spare 120mm fan mount to use the cooler, but that shouldn’t be hard to find in large cases at the enthusiast level — and it can even work in small form factor cases, depending on how you choose to cool your processor.

Moving the cooling hardware off the card, together with the smaller footprint of the HBM memory, means the card itself has shrunk. Its circuit board is only 195mm long, which is much shorter than most high-end GPUs — the GTX 980 Ti and R9 290X reference designs are 267mm and 278mm long.

Liquid cooling isn’t the Fury X’s only neat physical feature. Eight LEDs sitting next to the two eight-pin power connectors display GPU load and can be switched between red and blue, while a single green light illuminates when ZeroCore mode has turned off the graphics processor with the computer idle. There’s also a dual BIOS, so you can store two profiles at the same time.

The reference model is the only design available, so it’s important to examine its display outputs. There are three DisplayPort connectors and one HDMI port, but it’s HDMI 1.4 — and that could create a problem. The GTX 980 Ti includes HDMI 2.0, which means the Nvidia card can drive 4K at 60fps over HDMI, supports 32-channel audio across four audio streams, and handles 21:9 aspect ratios. There’s no DVI on the Fury X either.

The Fury X supports all the APIs we expect, including DirectX 12, Vulkan and Mantle. AMD’s FreeSync feature now works with screens down to 30Hz, so games can sync at 30fps — ten frames lower than before. The Fury X also uses CrossFire XDMA, so dual-GPU setups work over PCI Express rather than requiring a separate physical bridge.


AMD Radeon R9 Fury X Test Results


Simply put, the AMD Radeon R9 Fury X blasts through games at 1080p. It averaged over 100fps across our four games, while Battlefield 4’s worst result was 91fps.

It’s at 1440p that the Fury X is presented with more of a challenge. At this resolution we saw results between 45fps and 125fps, with the likes of Battlefield 4 hitting 64fps and Shadow of Mordor racing along at 84fps.

These figures clearly show that the card is well suited to this resolution, although the Nvidia GTX 980 Ti is comfortably the faster board. In Battlefield 4 it was 10fps quicker; in Batman: Arkham it was 24fps faster, although with both cards delivering over 100fps that’s perhaps a moot point.

The Fury X pulled things back a bit in BioShock, where its 98fps was only 5fps behind Nvidia. In Crysis 3, its 39fps was only 3fps behind.

Moving up to 4K, both cards start to be heavily taxed, though they still deliver over 30fps, with many games running at 60fps and above. Note, too, that we test at a very high level of detail.

In terms of how the two stacked up against each other, things were more evenly divided, with the AMD card’s memory bandwidth helping at this high resolution. Even so, the GTX 980 Ti led by 3fps in Battlefield 4 (37fps to 34fps) and by 8fps in Batman: Arkham (82fps to 74fps).

The Fury X closed the gap to a couple of frames in BioShock, however, and then finally overhauled the GTX 980 Ti in Crysis 3: its 32fps average outpaced the Nvidia card by two frames. The AMD card extended its lead in Metro, where its 33fps minimum and 45fps average were eight and three frames ahead respectively.

Our final gaming test, Shadow of Mordor, saw the Fury X triumph with an average of 48fps — just one frame ahead of the GTX 980 Ti.

The Fury X also proved slightly more competitive in our theoretical tests. Its 3DMark Fire Strike Ultra result of 3943 was a few points ahead of the GTX 980 Ti, and it was only two frames behind in the Unigine 4K test.

The AMD card traded blows with Nvidia elsewhere. The water-cooling unit helped the Fury X top out at just 65°C, which is 17 degrees cooler than the GTX 980 Ti, and it helped keep noise down too; the card is no louder than traditionally cooled high-end boards.

But the AMD card lost out on power consumption. Our rig’s idle power draw of 111W was 10W more than it required with the GTX 980 Ti installed, and the Fury X-based PC peaked at 369W — 39W more than the GTX 980 Ti needed.

AMD Radeon R9 Fury X Things to Consider

The Fury X is unusual in that the new card isn’t modified by board partners. For now — and possibly forever — the reference design is the only one available, which means there aren’t any overclocked versions out there.

On the other hand, that simplifies the process of selecting a card: aside from being sold by different board partners, they’re all identical and similarly priced.

While there are fewer options for choosing a card that fits the aesthetics of your setup, in one department at least AMD caters to modders: the card’s faceplate is designed so that enthusiasts can replace the aluminum plate with their own creations, although in a traditional PC it will face downwards, so we’re not sure how many people will take this route.

Verdict

The Fury X is an impressive return to form. AMD has improved its architecture by adding more hardware and debuting innovative memory, and it keeps the chip cool with an efficient water-cooling loop that helps cut down the size of the card itself.

Performance is fast throughout, with enough power to run 1440p and 4K games smoothly. Part of the credit should go to AMD’s HBM memory, which provides more bandwidth despite the relatively modest amount installed.

When the Fury X is compared to Nvidia’s GTX 980 Ti, though, the picture gets a little hazier. It’s a tie between the Fury X and the GTX 980 Ti at 4K, but the Fury loses out at 1440p and in most theoretical tests. The AMD hardware also suffers from the lack of overclocked cards and from higher power consumption.

Because of this, the Fury X can’t quite overhaul its competitors. It’s a big improvement over the last generation, but it’s not quite good enough to overturn Nvidia’s lead. The Fury X is a viable 4K option, especially if you want a smaller card, but the GTX 980 Ti just about wins out thanks to its better 1440p performance and added versatility.


GIGABYTE Radeon R9 FURY X (GV-R9FURYX-4GD-B) review and test: GECID.com


Continuing our look at AMD’s current flagship graphics cards based on the AMD Fiji graphics core, this time we examine the top-end AMD Radeon R9 FURY X accelerator.

Unlike the previously reviewed AMD Radeon R9 FURY, the tested model, like the AMD Radeon R9 Nano, uses the full version of the AMD Fiji graphics core, which includes 4096 stream processors, 256 texture units and 64 ROPs. Comparing the AMD Radeon R9 FURY X with the AMD Radeon R9 Nano, we see a higher GPU frequency (1050 vs. 1000 MHz) and an increased TDP (275W vs. 175W). The comparative table of all three video cards’ characteristics looks as follows:

Model                                  AMD R9 Fury X   AMD R9 Fury    AMD R9 Nano
Graphics core                          AMD Fiji XT     AMD Fiji PRO   AMD Fiji XT
Microarchitecture                      AMD GCN 1.2     AMD GCN 1.2    AMD GCN 1.2
Number of stream processors            4096            3584           4096
Number of texture units (TMU)          256             224            256
Number of ROPs                         64              64             64
Graphics core frequency, MHz           1050            1000           1000
Memory type                            HBM             HBM            HBM
Video memory size, GB                  4               4              4
Effective video memory frequency, MHz  1000            1000           1000
Video memory bus width, bit            4096            4096           4096
Video memory bandwidth, GB/s           512             512            512
TDP, W                                 275             275            175

As a reminder, you can find a detailed description of all of AMD Fiji’s characteristics and capabilities in our AMD Radeon R9 Nano review; here we move straight on to the AMD Radeon R9 FURY X itself.

The GIGABYTE Radeon R9 FURY X (GV-R9FURYX-4GD-B) will help us do that. This video accelerator was kindly provided by the electronics retailer VSESVIT.BIZ, where it can be bought for approximately $785.

Specification:

Manufacturer                           GIGABYTE
Model                                  GIGABYTE Radeon R9 FURY X (GV-R9FURYX-4GD-B)
Graphics core                          AMD Fiji XT
Number of stream processors            4096
Graphics core frequency, MHz           1050
Memory frequency (effective), MHz      500 (1000)
Memory capacity, GB                    4
Memory type                            HBM
Memory bus width, bit                  4096
Memory bandwidth, GB/s                 512
Bus type                               PCI Express 3.0 x16
Image output interfaces                1 x HDMI, 3 x DisplayPort
Recommended power supply, W            750
Dimensions (measured in our lab), mm   210 x 130 x 40 (208 x 116)
Drivers                                Latest drivers can be downloaded from the GIGABYTE website or the GPU manufacturer’s website
Manufacturer website                   GIGABYTE

Packaging and contents

The video card is supplied in a fairly large box, which is not surprising given that a complete liquid-cooling loop is included. The packaging follows the already familiar corporate style, and the high-quality printing is very informative.

The reverse side lists the technical specifications, some of the benefits of the GIGABYTE GV-R9FURYX-4GD-B and the system requirements. According to the recommendations, the power supply must deliver at least 750W and provide two 8-pin PCIe connectors, and one of the case’s 120mm fan mounts will be occupied by the cooler’s radiator.

Bundled with the graphics adapter we found only a standard set of accessories: documentation and a software CD. There are no adapters in the box, so if you need power adapters you will have to source them in advance.

The tested model uses the reference set of display outputs:

  • 1 x HDMI;
  • 3 x DisplayPort.

Dropping analog video output from a top-end video card seems a logical step for the manufacturer. As for DVI, it would have been problematic to fit on the board because of the cooling system used, so if necessary an appropriate adapter can be purchased.

Appearance and cooling system

The GIGABYTE Radeon R9 FURY X’s advantages undoubtedly include its compact dimensions, which mean there will be no installation problems even in relatively small cases (provided there is a mounting spot for the liquid cooler’s radiator). The instantly recognizable, stylish dark design is nicely complemented by a red logo and a soft-touch coating on the top cover of the cooler.

The reverse side is covered by a backplate carrying stickers with the necessary service information.

Also on the reverse side is a small switch responsible for activating and controlling the diagnostic lighting, which is implemented with nine LEDs next to the power connectors.

Using the switch, you can not only turn the lighting off but also change its color from red to blue.

Another toggle switch handles switching between the two BIOS images (main and backup). It is located next to the glowing red "Radeon" logo.

Unlike the LEDs mentioned above, this logo always glows red and cannot be adjusted.

The graphics adapter is powered via the PCI Express x16 slot and two 8-pin PCIe connectors located on the side of the board. The cooling system does not block access to them, making it easy to connect and disconnect the power cables.

The tested model is based on the AMD Fiji XT GPU, manufactured on a 28nm process. It includes 4096 stream processors, 64 raster units (ROPs) and 256 texture units. The GPU runs at the reference frequency of 1050 MHz.

The 4 GB of memory is composed of four HBM stacks manufactured by SK hynix, operating at an effective frequency of 1000 MHz. Data exchange between the GPU and memory runs over a 4096-bit bus capable of 512 GB/s.

Cooling System

The cooling system of the GIGABYTE Radeon R9 FURY X (GV-R9FURYX-4GD-B) is a closed loop. Two hoses for fluid circulation and a power cable exit the card’s housing; inside, they connect to a water block combined with a pump, which is mounted on an aluminum plate covering the printed circuit board.

A massive 150 x 120 x 38 mm radiator is used to cool the coolant.

The active element of this part of the design is a 120 mm axial fan with a blade diameter of 111 mm. The total thickness of the radiator with the fan installed is 64 mm.

In automatic mode under maximum load, the graphics core heated up to 64°C and, judging by the monitoring readings, the cooler was working at 19% of its peak power. The noise level was very low and perfectly comfortable. For comparison, the fairly large triple-fan cooler on the GIGABYTE Radeon R9 FURY WINDFORCE 3X OC cooled the lower-clocked AMD Fiji PRO to only 69°C.

At maximum fan speed, the GPU temperature dropped to 42°C. The noise produced was slightly above average and uncomfortable for constant use. By comparison, the GIGABYTE Radeon R9 FURY WINDFORCE 3X OC reached 50°C in the same mode.