MSI Radeon R9 290X Gaming 4G Review
- 1 – Gallery
- 2 – GPU Info: GPU Caps Viewer, GPU-Z
- 3 – Benchmarks
- 4 – Burn-in Test and Throttling
- 5 – Conclusion
I received this card several weeks ago but didn’t find the time to write about it. Even though the Radeon R9 290X was launched around a year ago, it still has no successor, so a quick review remains interesting, especially to update the benchmark comparison tables for the upcoming GTX 900 reviews…
The MSI R9 290X Gaming 4G belongs to MSI’s Gaming Series. Here are all the series, ranked from the top line (Lightning) to the bottom line (CLASSIC):
This R9 Gaming is a card for gamers. Although you can overclock it, it is not intended for overclockers: some OC features, such as voltage check points, are missing. Those features are reserved for the Hawk and Lightning series.
This card is based on the Hawaii XT GPU. More information about this GPU can be found on this page.
MSI R9 290X Gaming 4G homepage can be found HERE.
1.1 – Bundle
I received the card directly from MSI without bundle… so no picture!
1.2 – The Graphics Card
MSI has equipped its R9 290X with a custom VGA cooler based on the famous Twin Frozr IV technology. This very efficient cooler manages to keep temperatures low more or less quietly: at idle, the VGA cooler is nearly noiseless, while under heavy load the noise is tolerable.
MSI R9 290X comes with a backplate which is a good point, and includes the following output connectors:
– 1 x DisplayPort 1.2
– 1 x HDMI 1.4 (see HERE for the bandwidth of the HDMI 1.4 interface).
– 2 x DVI
This card does, however, have a little design problem: once plugged in, you can’t easily unplug the PCI-E power cables because the heat sink sits too close to the power connectors:
The dirty solution: just remove/break the small clip on each power connector!
A weird design bug. It looks like nobody at MSI’s labs tested this card before launch day… or all the PSUs at MSI are like mine, without the clip 😉
– GPU Caps Viewer 1.22.0
– GPU Shark 0.9.2
– GPU-Z 0.8.0
This MSI R9 290X has a GPU core clock of 1030MHz (reference clock: 1000MHz). The 4GB of GDDR5 memory is not overclocked and comes at the reference clock speed of 1250MHz real (or 5GHz effective).
I benchmarked MSI’s R9 290X with several GPU tests. See THIS PAGE for all results in 1920×1080 fullscreen mode.
By default, Radeon cards work with PowerTune set to 0% in the Overdrive section of the Catalyst Control Center. With this setting, the R9 290X runs at around 960MHz when stressed by FurMark. 960MHz is lower than the normal 3D clock speed (1030MHz). Why? Because PowerTune tries to keep power consumption under the threshold defined by the PowerTune setting.
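This behavior can be caricatured as a simple feedback rule. The sketch below is conceptual only, not AMD’s actual algorithm, and the 310 W / 290 W figures are illustrative assumptions chosen to land near the observed clocks:

```python
def powertune_clock(estimated_power_w, limit_w, max_clock_mhz=1030, min_clock_mhz=300):
    """Toy model of a power-capped clock: run at the full 3D clock while
    under the power limit, otherwise scale the clock down proportionally
    (clamped to the 2D floor). Not AMD's real PowerTune algorithm."""
    if estimated_power_w <= limit_w:
        return max_clock_mhz
    return max(min_clock_mhz, max_clock_mhz * limit_w / estimated_power_w)

# Under FurMark the estimated draw exceeds the limit, so the clock sags:
print(round(powertune_clock(310, 290)))  # ~964 MHz, near the observed ~960 MHz
```

Raising the PowerTune slider raises `limit_w`, which is why the card can hold its full 1030MHz clock at +20%.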
At idle, the total power consumption of the testbed is 55W and the GPU temperature is 31°C which is a nice temperature.
With a PowerTune of 0%, the total power consumption of the testbed is 412W and the GPU temperature reaches 82°C.
When FurMark is running, the CPU pulls around 25W on top of the testbed’s 55W idle consumption. The efficiency factor of the Corsair AX 860i is 0.92. An estimation of the power consumption of the R9 290X is:
P = (412 − 25 − 55) × 0.92
P ≈ 305W
To unleash the beast hidden behind the Twin Frozr IV cooler, you have to set the PowerTune to 20%:
Let’s see the numbers after a 5-min burn-in test session:
– total power consumption: 462W
– GPU temperature: 86°C
With a PowerTune of +20%, an estimation of the power consumption of the R9 290X is:
P = (462 − 25 − 55) × 0.92
P ≈ 351W
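The estimate can be packaged as a small helper. This is just a sketch of the arithmetic used in this review; the 25 W CPU load and 55 W idle draw are the measurements quoted above:

```python
def gpu_power_estimate(total_w, cpu_w=25, idle_w=55, psu_efficiency=0.92):
    # Wall-socket reading minus CPU load and idle system draw,
    # scaled by the PSU efficiency to estimate DC power at the card
    return (total_w - cpu_w - idle_w) * psu_efficiency

print(round(gpu_power_estimate(412)))  # PowerTune 0%:   305 W
print(round(gpu_power_estimate(462)))  # PowerTune +20%: 351 W
```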
351W! That’s an insane power consumption, but the card is designed to withstand it, and it does. The Twin Frozr IV does its job, evacuating the heat produced by the GPU, but to cool the GPU properly the VGA cooler has to make some noise…
The R9 290X Gaming 4G has successfully passed the FurMark stress test. This card is FurMark-proof.
This R9 290X Gaming 4G is a great product from MSI. The card is built on a solid mechanical structure (backplate, Twin Frozr IV), does not suffer from throttling under heavy graphics load, and has a nice design.
– 4GB of graphics memory
– nice GPU temperature at idle state (31°C)
– quiet VGA cooler at idle
– strong power circuitry, no GPU throttling
– backplate for mechanical protection
– DisplayPort 1.2 for 4K gaming
MSI R9 290X Lightning Review — Tom’s Hardware
Meet The Largest, Heaviest Radeon R9 290X Of All
Based on the size of its R9 290X Lightning, it appears that MSI has a thing for overkill. But the company might also be onto something. As we already know, cooling AMD’s Hawaii GPU properly is what separates the men from the boys. Forget about re-purposing cooling solutions from other cards. Asus tried that and it didn’t go over well at all. Instead, MSI sent over a three-slot take on the Radeon R9 290X, which, as you can see, employs a trio of cooling fans and a lot of metal dedicated to keeping that hot graphics processor operating within its comfort zone.
This thing isn’t a toy. A $700 price tag, tied for the most-expensive Radeon on Newegg, makes you think hard before dropping nearly $150 more than the cheapest models with aftermarket cooling.
Even the Lightning’s box is massive; it includes an extra compartment with lots of accessories and a certificate of ownership. You’re dealing with a limited-edition piece of hardware, after all.
The snazzy-looking board and high-end packaging (not to mention lofty price) naturally have us expecting quite a bit out of MSI’s R9 290X Lightning. Historically, the company reserves this branding for its flagship models. Does this board live up to that standard? It’s time to break out the lab gear and find out.
Box Style and Contents
The number of accessories you get with this card is impressive, though we question the wisdom of including 6-to-8-pin adapter cables. Why solder eight-pin plugs to the card and then encourage users to fry thinner cables that might not be suitable for driving a high-end GPU? The same goes for the bundled Molex adapter. Who in their right mind would make up the balance of too-few cables by tapping into a pair of four-pin plugs? We’ve woken up to the smell of smoke in the morning; it’s not fun. Seriously. We expect folks who buy $700 graphics cards to use similarly enthusiast-oriented PSUs.
Beyond the manual and CD (which contains the drivers, MSI’s Afterburner software, and a fan control utility for the three fans), the box contains a metal plate, thermal pads, and screws. The plate fits over the DC-DC converters when you embark on an extreme overclocking mission, go the water cooling route, and remove the massive heat sink.
Lab Note about the Dimensions
The dimensions reported here don’t necessarily match the manufacturer’s official technical specifications. Rather, we measure them by hand to assure they’re correct. The image and chart below should help illustrate what each measurement actually means. Auxiliary PCI Express power connectors are not included; they have to be added depending on the power plug and cable design.
MSI’s R9 290X Lightning is as long as the Sapphire R9 290X Tri-X. However, it monopolizes three expansion slots, and is a tad higher, too. As a result, we’re fairly certain it’s the bulkiest Radeon R9 290X we’ve tested thus far.
|Models||Length L||Height H||Depth D1||Depth D2|
|Asus R9290X-DC2OC-4GD5 R9 290X DirectCU II OC||11.3″ / 288 mm||5.6″ / 142 mm||1.5″ / 38 mm||0.16″ / 4 mm|
|Sapphire Tri-X OC R9 290X||12.0″ / 305 mm||4.5″ / 114 mm||1.5″ / 38 mm||0.16″ / 4 mm|
|Gigabyte GV-R929XOC-4GD R9 290X Windforce OC||11.1″ / 282 mm||4.8″ / 123 mm||1.5″ / 38 mm||0.16″ / 4 mm|
|HIS R9 290X IceQ X² Turbo||11.7″ / 297 mm||5.3″ / 135 mm||1.4″ / 36 mm||0.16″ / 4 mm|
|MSI R9 290X Gaming 4G||11.0″ / 279 mm||4.7″ / 120 mm||1.5″ / 38 mm||0.24″ / 6 mm|
|MSI R9 290X Lightning||12.0″ / 305 mm||4.8″ / 122 mm||2.1″ / 53 mm||0.2″ / 5 mm|
The weight of a card might be interesting if you’re trying to figure out if any additional support is needed, or to calculate the amount of stress your motherboard might be under in a CrossFire-based setup. Since MSI’s offering is by far the heaviest in this field, we want to emphasize the importance of bracing it somehow, even if you’re only able to use cable ties.
|Asus R9290X-DC2OC-4GD5 R9 290X DirectCU II OC||2.5 lbs / 1135 g|
|Sapphire Tri-X OC R9 290X||2.25 lbs / 1022 g|
|Gigabyte GV-R929XOC-4GD R9 290X Windforce OC||2.32 lbs / 1053 g|
|HIS R9 290X IceQ X² Turbo||2.15 lbs / 976 g|
|MSI R9 290X Gaming 4G||2.29 lbs / 1038 g|
|MSI R9 290X Lightning||3.49 lbs / 1581 g|
AMD Radeon R9 290X Review
The GeForce GTX Titan blew us all away eight months ago with its mindblowingly fast GPU, cramming 7080 million transistors into a 561mm2 die to provide massive processing power and bandwidth. The catch, of course, was that Nvidia wanted (and still wants) $1,000 for it — a sum that didn’t necessarily seem to prevent cards from flying off shelves even though it’s more than our entire entry-level rig.
Nvidia followed up three months later with the equally impressive GTX 780 for a more plausible $650, where it remains today. Neither of those cards had much of an impact on AMD’s sales as the company’s most expensive offering at the time was a $450 Radeon HD 7970 GHz Edition (the 7990 arrived a few months later).
Eagerly awaiting AMD’s response, we were disappointed that the first run of Rx 200 cards were rebadges. For instance, the R7 260X and R9 270X are the Radeon HD 7790 and 7870 overclocked, while the R9 280X is an underclocked version of the HD 7970 GHz Edition — an equivalency chart can be found in our previous Radeon R7/R9 review. We were less than impressed.
You could easily argue that Nvidia did the same thing with its GTX 770 (a rebadged GTX 680) and the GTX 760 (a rebadged OEM GTX 660), but these products followed the highly impressive GTX 780 and GTX Titan, which were new parts. Moreover, the GTX 770 was not only faster than the GTX 680, but it also arrived with $100 knocked off its price.
Although AMD’s new series got off to an underwhelming start, we knew better things would eventually follow, and this week’s Radeon R9 290X is the first truly new product of its range. In a sense, the R9 290X (codenamed "Hawaii XT") could be considered AMD’s Titan, as it takes the Tahiti architecture and stuffs it with nearly 2000 million more transistors.
It’s the most complex and powerful GPU AMD has created and, by no coincidence, also the company’s most expensive single-GPU product to date, matching the Radeon HD 7970. Before you click away, that’s "only" $550, which is substantially cheaper than Nvidia’s solution.
The question, of course, is whether the R9 290X can actually keep company with Nvidia’s flagships…
Radeon R9 290X in Detail
The R9 290X measures 27.5 cm long (10.8 in), a typical length for a modern high-end graphics card and roughly the same as the original HD 7970. Its GPU core is clocked at 1000MHz, the same frequency as the 7970 GHz Edition, while the GDDR5 memory operates at just 5GHz, 17% slower than the 7970. Still, pairing that frequency with a 512-bit memory bus gives the R9 290X a whopping theoretical bandwidth of 320GB/s, an 11% advantage over the HD 7970 GHz Edition.
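The bandwidth figures quoted above follow directly from bus width and effective memory clock:

```python
def memory_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    # Bytes moved per transfer (bus width / 8) times effective transfer rate
    return bus_width_bits / 8 * effective_clock_ghz

print(memory_bandwidth_gbs(512, 5.0))  # R9 290X: 320.0 GB/s
print(memory_bandwidth_gbs(384, 6.0))  # HD 7970 GHz Edition: 288.0 GB/s
```

320 over 288 works out to the 11% advantage mentioned in the text.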
While the HD 7970 had a 3GB frame buffer, the R9 290X has been upgraded to 4GB. For the most part, no game uses more than 2 to 3GB of memory at resolutions up to 2560×1600, so to take advantage of the larger memory buffer you would need an extreme multi-monitor setup, one where more than a single 290X would be required to deliver playable performance.
The R9 290X’s core configuration also differs from the 7970’s. The new card carries an incredible 2816 SPUs, 176 TAUs and 64 ROPs, up 37.5% from the 7970’s 2048 SPUs and 128 TAUs, with 100% more ROPs.
The "Hawaii XT" GPU is cooled by a massive aluminum vapor chamber heatsink with 41 fins measuring 15.0cm long, 8.5cm wide and 2.5cm tall. This is virtually the same cooler AMD used on its reference HD 7970. The vapor chamber design was first implemented on the Radeon HD 5970 and was later adopted by Nvidia’s GTX 580. Heat is dispersed by a 75x20mm fan that pulls air in from the case and pushes it out the back.
The R9 290X’s fan operates quietly for the most part, but despite the card’s impressive idle consumption, it still chugs up to 300 watts under load (20% more than the GTX Titan), so the fan does kick up during a heavy gaming session.
To feed the card enough power, AMD includes 8-pin and 6-pin PCI Express connectors — the same setup you’ll find on the HD 7970, GTX 780 and other demanding boards.
Naturally, the R9 290X supports Crossfire, but there are no bridge connectors on this card, making it the first graphics card to support Crossfire without a hardware bridge.
The only other connectors are on the I/O panel. Our AMD reference sample has a pair of dual-link DVI connectors, an HDMI 1.4a port, a DisplayPort 1.2 socket and a dual-BIOS switch. The BIOS switch sits on top of the graphics card and lets you select between what AMD calls "Quiet Mode" and "Uber Mode".
Speaking of which, by default the R9 290X runs in Quiet Mode, which sets the maximum fan speed to 40%. If you want to keep the R9 290X cooler, use Uber Mode instead, which raises the maximum fan speed to 55%. Of course, these options can be further customized in the Catalyst Control Center, which now features new OverDrive options.
With power and performance now so closely related, the idea behind Uber Mode is to keep the R9 290X cool so it always performs to its fullest. However, in our office, which had an ambient temperature of 21 degrees during testing, we saw no difference in performance between the Uber and Quiet modes.
The R9 290X supports a maximum resolution of 2560×1600 on up to three monitors, as well as Ultra HD (also known as UHD or 4K) over both HDMI 1.4b (low refresh) and DisplayPort 1.2.
Many Ultra HD/4K monitors can achieve a 60Hz refresh rate using a tiled display configuration. AMD Eyefinity technology can be leveraged to support these tiled displays by making two 2Kx2K tiles act as one 4Kx2K monitor. AMD has taken steps to make this easy for end users by providing an Automatic AMD Eyefinity Configuration feature. This feature allows an automatic «plug and play» configuration of supported Ultra HD/4K tiled displays when a Display Port cable is connected.
When you hot-plug a tiled 4K monitor (such as the Sharp PN-K321 or Asus PQ321Q), a 2×1 display group will be automatically created and the two tiles will be combined to act as one monitor. This configuration will be remembered and re-enabled when the display is unplugged or the system is rebooted.
It is also possible to manually disable the display group in CCC, and have the two tiles act as independent monitors. Additionally, with a multi-stream hub using the mini-DisplayPort 1.2 sockets, the card can power up to six screens.
AMD Radeon R9 290X 4GB Review
UK price (as reviewed): MSRP £449.99 (inc VAT)
US price (as reviewed): MSRP $549 (ex Tax)
The first stage of AMD’s Volcanic Islands GPU launch, covering the Radeon R7 240 up to the Radeon R9 280X, brought with it new information about technologies such as TrueAudio and Mantle. From a pure hardware standpoint, however, it was a little disappointing given that it was mostly just a rebrand of existing HD 7000 series products. Today, however, we can finally pull the wrapping off of AMD’s new flagship card, the Radeon R9 290X 4GB. It’s got a new GPU and everything.
AMD’s new top-tier single-GPU card is launching on these shores with a retail price of £450, meaning it actually undercuts the card it’s designed to compete with, Nvidia’s GTX 780 3GB, by around £50 at the time of writing. AMD is thus opting, for now at least, to leave the somewhat ludicrous price bracket occupied by the GTX Titan solely to Nvidia. AMD is also positioning the R9 290X as the card to have for gaming across multiple monitors or at 4K. It’s worth noting that the final piece of the Volcanic Islands pie, the R9 290, is still to launch; this review covers the R9 290X only.
Under the Hood
The R9 290 series Hawaii GPU is built on a 28nm process node and based on the existing Graphics Core Next architecture, which is no surprise given AMD’s commitment to it through Mantle. With 6.2 billion transistors and measuring 438mm2, it’s AMD’s biggest GCN GPU yet and represents a 24 percent raw die size increase over the HD 7970. The GPU also now supports the MQSAD instruction set. This new GPU is fully enabled on R9 290X cards, with no sections left unused, and clocked at a rounded 1GHz.
At a high level, Hawaii is organised mainly into four of what AMD has termed shader engines. Each of these features its own geometry processor, which is load-balanced with the others and which still contains a geometry assembler, a vertex assembler and a tessellator. This means that the dual front-end design of Tahiti has been upgraded to a quad front-end one here.
Compute functionality has also been expanded thanks to a quadrupling of Tahiti’s asynchronous compute engine (ACE) count from 2 to 8. These ACEs work in parallel with the main graphics command processor such that graphics and compute tasks can be processed at the same time. The ACEs have direct access to the L2 cache and global data share, and each of them can handle up to eight queues at once. AMD has evidently utilised GCN’s scalable nature to produce a more parallel GPU that should improve its multi-tasking.
Hawaii’s main processing grunt comes from its 44 Compute Units (11 per shader engine), a 37.5 percent increase over Tahiti. These are mostly unchanged, and as such each one features 16KB of L1 cache, four texture units and four of AMD’s SIMD engines. These SIMD engines are the smallest individual unit of work in the GCN architecture, and consist of 16 stream processors apiece and a 64KB register file. The total texture unit and stream processor counts of Hawaii are thus 176 and 2,816 respectively. By comparison, the GTX 780 GPU has 192 and 2,304. The R9 290 series GPU’s L2 cache is now 1MB, a 33 percent improvement, and likewise AMD boasts of a 1TB/sec maximum total L1/L2 bandwidth, again a 33 percent boost.
As well as the CUs, each geometry processor again has access to its own rasteriser, which doubles Tahiti’s number. Four render back-end units per shader engine complete the graphics pipeline, and as each of these features four ROPs, there are now a whopping 64 in total, which is again twice that of the HD 7970 and R9 280X GPU. Consequently, the card can process almost twice as many pixels per second as these GPUs, 64Gpixels/sec to be precise (64 pixels per clock, 1GHz clock speed), which emphasises AMD’s focus on multi-monitor and 4K set-ups here.
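The quoted fill rate is just ROP count times clock, one pixel per ROP per cycle:

```python
def pixel_fillrate_gpix_s(rops, core_clock_ghz):
    # One pixel written per ROP per clock cycle
    return rops * core_clock_ghz

print(pixel_fillrate_gpix_s(64, 1.0))  # Hawaii: 64.0 Gpixels/s
print(pixel_fillrate_gpix_s(32, 1.0))  # Tahiti at the same clock: 32.0
```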
The jump from six to eight 64-bit memory controllers takes Tahiti’s already high 384-bit memory interface to a massive 512-bit one. On the silicon, this is also at a higher density than before, so it actually takes up 20 percent less area despite being a wider bus. It’s connected to 4GB of GDDR5, which is clocked at 5GHz for a total memory bandwidth of 320GB/sec – a 32GB/sec advantage over the GTX 780, GTX Titan, and Radeon HD 7970/R9 280X. The 1GHz lower memory clock speed is a result of the memory interface’s higher density, but the figure AMD is keen to highlight is 50 percent more memory bandwidth per mm2 than the HD 7970 GHz Edition.
Crossfire burns its bridge
The R9 290X also comes with a new CrossFire DMA (direct memory access) engine in the CrossFire compositing block. This allows GPUs in such set-ups to communicate fully over PCIe without the need for an external CrossFire bridge. AMD assures us that no performance has been lost in the transition to a bridge-less design; indeed, the bandwidth available over PCIe is far greater than over the existing bridge implementations. The company also boasts of scaling somewhere between 1.8x and 2.0x in modern games when using two R9 290X cards versus one.
Finally, also embedded within the Hawaii silicon is AMD’s dedicated audio pipeline, TrueAudio, which you can read about in more detail in our R7 260X coverage. Naturally, thanks to the GCN architecture, Mantle is also supported, with AMD having recently released a blog post outlining some clarifications regarding its new API.
AMD Radeon R9 290X Review
by XbitLabs Team
Last update 29 May 2021
Finally we can talk about the Radeon R9 290X’s performance. AMD’s top graphics card now competes with very powerful rivals: the Nvidia GeForce GTX 780 and GeForce GTX 780 Ti.
AMD’s flagship Radeon R9 290X graphics card with the new Hawaii XT core was released a few months ago. Since then, several beta versions and one official version of the Catalyst driver suite have come out, meant not only to improve the new Radeon’s performance but also to optimize its cooling and noise parameters, which had been criticized in early reviews.
The graphics card market has changed, too. Nvidia seems to have been prepared for the Radeon R9 290X/290 release: besides cutting the prices of the GeForce GTX 780 and GTX 770, it rolled out the faster GeForce GTX 780 Ti. The market situation for nearly all Radeons has also been worsened by the cryptocurrency craze. Coupled with the traditional Christmas excitement, it has provoked a considerable rise in prices and even a shortage of AMD-based products. As of this writing, the bottom price of a reference Radeon R9 290X is about one fourth higher than that of an original GeForce GTX 780 with a high-efficiency cooler and factory overclocking.
Still, AMD fans haven’t lost hope that prices will stabilize and the shortage will end. We are also looking forward to getting original versions of AMD’s new cards with better, quieter coolers. Until then, we have to content ourselves with testing the reference Radeon R9 290X and comparing it with its current market opponents using new drivers and an extended set of benchmarks.
Specifications and Recommended Price
The following table helps you compare the AMD Radeon R9 290X specifications with the reference versions of AMD Radeon R9 280X, Nvidia GeForce GTX 780 and Nvidia GeForce GTX 780 Ti:
This review is late due to a sheer lack of test samples, so we won’t delve into the architectural differences between the new GPU (and the whole graphics card) and its predecessors. Suffice it to say that the new Hawaii XT features the same Graphics Core Next architecture as the Tahiti XT but has more muscle. To be specific, it incorporates 2816 instead of 2048 unified shader processors, 176 instead of 128 texture-mapping units, and 64 instead of 32 raster operators. Still manufactured on the 28nm process, the GPU consists of about 6.2 billion transistors (instead of 4.313 billion) and has grown in size from 365 to 438 sq. mm. The clock rate remains the same at 1000 MHz (in 3D mode). The memory frequency is lower than the Radeon HD 7970 GHz Edition’s: 5000 vs. 6000 MHz. However, the memory bus is now 512-bit, so the memory bandwidth has grown from 288 to 320 GB/s. Instead of 3 GB, the AMD Radeon R9 290X comes with 4 GB of onboard memory.
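The Tahiti-to-Hawaii deltas quoted above work out as follows (the Tahiti baseline figures are the ones given in the text):

```python
# (Tahiti XT, Hawaii XT) pairs from the spec comparison above
specs = {
    "shader processors": (2048, 2816),
    "texture units":     (128, 176),
    "raster operators":  (32, 64),
    "transistors (bn)":  (4.313, 6.2),
    "die size (sq. mm)": (365, 438),
}
for name, (tahiti, hawaii) in specs.items():
    print(f"{name}: +{(hawaii / tahiti - 1) * 100:.1f}%")
```

The shader and texture counts both grow by 37.5%, the ROPs double, and the die grows by the 20% mentioned later in the PCB section.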
In the software department, the new Mantle API must be noted first. It is supposed to minimize the API’s overhead on the game engine code and reduce CPU load, yet its benefits are still debatable. Time will tell how useful it really is. Besides that, we can note DirectX 11.2 support, the exclusive TrueAudio technology and the well-known AMD Eyefinity feature.
PCB Design and Features
The new AMD Radeon R9 290X follows the classic dual-slot design where the face side of the PCB is covered by the cooling system with a radial fan. The device’s rather boring brick-like appearance is somewhat enlivened by a few sculpted red lines:
The card measures 275x99x39 mm. Its reverse side is exposed.
The AMD Radeon R9 290X has dual-link DVI-I and DVI-D outputs, one HDMI 1.4a connector and one DisplayPort 1.2.
There’s a vent grid in the card’s mounting bracket to exhaust the hot air from the cooler out of the computer case.
As opposed to its predecessors, the new AMD Radeon R9 290X lacks CrossFireX connectors. Multi-GPU configurations are now built using the PCI Express bus.
Additional power is delivered to the card via one 6-pin and one 8-pin connector. The peak power draw is specified to be 275 watts, which is a mere 25 watts more than required by the Radeon R9 280X.
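As a sanity check, the specified peak draw fits within what this connector configuration can deliver, using the standard PCI Express power budgets (75 W from the slot, 75 W from a 6-pin plug, 150 W from an 8-pin plug):

```python
# Standard PCI Express power budgets per source
PCIE_SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

available = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(available)         # 300 W of total connector budget
print(available >= 275)  # True: covers the card's specified 275 W peak
```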
A BIOS switch can be found in its traditional location:
It allows booting from a different BIOS chip and choosing between two operation modes for the card’s cooler: Uber Mode and Quiet Mode. We’ll tell you more about them shortly in our description of the cooling system.
The PCB design is overall similar to that of the Radeon R9 280X and of the earlier Radeon HD 7970 GHz Edition:
The power system includes 5 phases for the GPU, 1 phase for the memory, and 1 phase for PLL. It uses state-of-the-art DirectFET transistors.
The GPU voltage regulator is based on an International Rectifier 3567B controller:
AMD puts an emphasis on it, claiming that the controller helps improve AMD’s PowerTune technology. The voltage regulator is 6.25 mV accurate now, so there are as many as 255 possible values in the range of 0 to 1.55 volts.
The GPU looks impressive with its 438 sq. mm die, which is 20% larger than the Tahiti. Still, it is smaller than the GK110 (561 sq. mm). Our sample of the GPU was manufactured in the 31st week of 2013 in Taiwan. Its marking is printed around the chip on the protective metal frame:
The GPU incorporates 2816 unified shader processors, 176 texture-mapping units, and 64 raster operators. It is expected to work at clock rates up to 1000 MHz in 3D applications, depending on load and temperature. The clock rate is dropped to 300 MHz in 2D mode while the voltage is lowered from 1.195 to 0.945 volts.
The ASIC quality is rather high for an engineering sample at 76.2%:
The card comes with 4 gigabytes of GDDR5 memory in FCBGA chips soldered to the face side of the PCB. These are H5GQ2h34AFR R0C components from SK Hynix:
The chips are rated for 6000 MHz but clocked at 5000 MHz on the Radeon R9 290X, so we can expect them to overclock well. Even so, the memory bandwidth is quite high thanks to the 512-bit bus: 320 GB/s. That’s higher than the GeForce GTX 780’s but somewhat lower than the GeForce GTX 780 Ti’s.
Thus, the Radeon R9 290X has the following specs:
Now we can check out its cooling system, temperature and noisiness.
Cooling System: Efficiency and Noise Level
The AMD Radeon R9 290X is equipped with a cooler whose design hasn’t changed much since AMD’s earlier reference coolers. The plastic casing fastened with several screws around the base frame covers a large aluminum heatsink, a steel heat-spreading plate and a radial fan:
The heatsink is soldered to the base which has contact with the memory chips and power components via thermal pads.
The heatsink consists of slim aluminum fins soldered to a copper base:
The base is a large vapor chamber with an excess of thick, viscous thermal grease in the middle.
The fan drives the air through the heatsink and exhausts it out of the computer case. The 70mm blower is made by FirstD (the FD7525U12D model).
Its speed is PWM-regulated in a range of 1000 to 5500 RPM. The fan has a peak output power of 20.4 watts at 1.7 amperes.
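The quoted peak output is consistent with the current rating, assuming the usual 12 V fan supply (an assumption; the rail voltage isn’t stated in the text):

```python
def fan_power_w(voltage_v, current_a):
    # Electrical power: P = V * I
    return voltage_v * current_a

print(fan_power_w(12.0, 1.7))  # 20.4 W, matching the FD7525U12D spec above
```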
To measure the temperature of the graphics card we ran Aliens vs. Predator (2010) five times at the maximum visual quality settings, at a resolution of 2560×1440 pixels, with 16x anisotropic filtering and with 4x MSAA.
We used MSI Afterburner 3.0.0 beta 17 and GPU-Z version 0.7.4 to monitor temperatures inside the closed computer case. The computer’s configuration is detailed in the following section of our review. All tests were performed at 25°C room temperature.
As we noted above, the reference Radeon R9 290X has two BIOS versions with different fan settings. There are two modes: Silent and Normal (or Quiet and Uber). They differ in how the fan’s speed depends on the GPU’s temperature and clock rate. Here’s what we have in the Silent mode during the looped Aliens vs. Predator test:
At the beginning of the second test cycle the GPU grew as hot as 94°C, triggering thermal throttling (similar to what was first implemented in Intel CPUs a few years ago). The clock rate was lowered whereas the speed of the fan didn’t exceed 2300 RPM (44% of the fan’s full power). Although the noise level is out of the comfort zone, AMD calls this mode quiet. Of course, the card’s performance is lower in this mode since the clock rate can drop down to 700 MHz, which is about 30% lower than the GPU’s default frequency. Thus, the reference design doesn’t seem to be attractive for practical purposes.
But the main problem is that when you switch to another BIOS and enable the Normal mode, the fan can accelerate up to 49% or 2800 RPM, which is still not enough to avoid the frequency drop. The GPU gets 94°C hot again, just not so quickly – during the fourth test cycle. That’s why we tried to find out the fan’s speed which would prevent the GPU from slowing down. We almost did it at 55% and 3020 RPM, yet the frequency would still sag down to 978 MHz occasionally:
So we sped the fan up to 60% or 3340 RPM:
The GPU was no hotter than 87°C then and didn’t suffer any frequency drops. The card was too noisy at that speed of the fan, of course.
After we took the card apart and replaced its default thermal grease with Arctic MX-4, we carried out our temperature test once again and saw that there were no frequency drops at 55% fan speed.
Replacing the default thermal interface seems to do the card some good. The peak temperature was close to the threshold, though. It is quite possible that the frequency would have dropped if the test had lasted longer.
We measured the level of noise using an electronic noise-level meter CENTER-321 in a closed and quiet room about 20 sq. meters large. The noise-level meter was set on a tripod at a distance of 15 centimeters from the graphics card which was installed on an open testbed. The mainboard with the graphics card was placed at an edge of a desk on a foam-rubber tray. The bottom limit of our noise-level meter is 29.8 dBA whereas the subjectively comfortable (not low, but comfortable) level of noise when measured from that distance is about 36 dBA. The speed of the graphics card’s fans was being adjusted by means of a controller that changed the supply voltage in steps of 0.5 V.
We’ve included the results of the reference Nvidia GeForce GTX 780 and two original cards, EVGA GeForce GTX 780 Superclocked ACX and MSI Radeon R9 280X Gaming, in the diagram below for the sake of comparison. We’ll also use these cards in our performance tests. The vertical dotted lines mark the top speed of the fans in the automatic regulation mode. There are two such lines for the AMD Radeon R9 290X: for the automatic Silent mode and for 55% fan speed.
Here are the results:
The Radeon R9 290X is the noisiest reference card. We had expected that even before the test, judging by our subjective impressions. The high heat dissipation of AMD’s new flagship means that its noise in the quiet mode is about as high as with the reference GeForce GTX 780 but the latter doesn’t drop its GPU frequency by 30%. When running 3D games at a stable 1000 MHz, the Radeon R9 290X has 55% fan speed, which is unbearably noisy. This graphics card is just not meant for home users.
Our temperature tests make it clear that overclocking the reference Radeon R9 290X in the automatic fan regulation mode is pointless as the clock rate would just drop down as soon as the GPU temperature hit 94°C. That’s why we fixed the fan speed at 65% or 3660 RPM and didn’t increase the core voltage. We managed to increase the GPU and memory clock rates by 130 and 760 MHz, respectively.
The resulting clock rates were 1130/5760 MHz:
The fan worked at 65% of its full speed and roared mercilessly, yet the GPU was as hot as 90°C.
Even though we achieved some success in our overclocking experiment, we have to admit that the reference Radeon R9 290X is no good for overclocking unless you are completely deaf.
Testbed and Methods
Here is the list of components we use in our testbed.
- Mainboard: Intel Siler DX79SR (Intel X79 Express, LGA 2011, BIOS 0590 dated 17.07.2013)
- CPU: Intel Core i7-3970X Extreme Edition 3.5/4.0 GHz (Sandy Bridge-E, C2, 1.1 V, 6x256KB L2 cache, 15MB L3 cache)
- CPU cooler: Phanteks PH-TC14PE (2x Corsair AF140 fans, 900 RPM)
- Thermal grease: ARCTIC MX-4
- Graphics cards:
- Nvidia GeForce GTX 780 Ti 3GB (876/928/7000 MHz)
- EVGA GeForce GTX 780 Superclocked ACX 3GB (863/916/6008 MHz; and overclocked to 1032/1085/7348 MHz)
- AMD Radeon R9 290X 4GB (1000/5000 MHz and overclocked to 1130/5760 MHz)
- MSI Radeon R9 280X Gaming 3GB (1050/6000 MHz)
- System memory: DDR3 4x8GB G.SKILL TridentX F3-2133C9Q-32GTX (2133 MHz, 9-11-11-31, 1.6 V)
- System disk: SSD 256GB Crucial m4 (SATA 6 Gbit/s, CT256M4SSD2, BIOS v0009)
- Games/software disk: Western Digital VelociRaptor (SATA-2, 300 GB, 10000 RPM, 16 MB cache, NCQ) in a Scythe Quiet Drive 3.5″ enclosure
- Backup disk: Samsung Ecogreen F4 HD204UI (SATA-2, 2 TB, 5400 RPM, 32 MB cache, NCQ)
- Computer case: Antec Twelve Hundred (front panel: three Noiseblocker NB-Multiframe S-Series MF12-S2 fans at 1020 RPM; back panel: two Noiseblocker NB-BlackSilentPRO PL-1 fans at 1020 RPM; top panel: one preinstalled 200mm fan at 400 RPM)
- Control & monitoring panel: Zalman ZM-MFC3
- Power supply: Corsair AX1200i (1200 W), 120mm fan
- Monitor: 27″ Samsung S27A850D (DVI-I, 2560×1440, 60 Hz)
As we are going to check out the performance growth within AMD’s own model range, we will compare the Radeon R9 290X with a Radeon R9 280X (previously known as Radeon HD 7970 GHz Edition) in its version from MSI.
The MSI card’s GPU is pre-overclocked by 50 MHz, but we guess this is a negligible thing for such top-end graphics solutions.
The second opponent we’ve chosen for our Radeon R9 290X is, of course, a GeForce GTX 780. We don’t have any reference versions of that card left, so we take an EVGA GeForce GTX 780 Superclocked ACX 3GB and run it in two modes: at the clock rates of the reference GTX 780 and at the highest clock rates supported by our sample of the EVGA card.
We will also compare the Radeon R9 290X with a GeForce GTX 780 Ti which is currently somewhat more expensive, yet can be viewed as the Radeon’s market opponent. We will use a reference GTX 780 Ti with default clock rates:
In order to lower the dependence of the graphics cards’ performance on the overall platform speed, we overclocked our 32nm six-core CPU to 4.8 GHz by setting its frequency multiplier at x48 and enabling Loadline Calibration. The CPU’s voltage was increased to 1.38 volts in the mainboard’s BIOS:
Hyper-Threading was turned on. We used 32 GB of system memory at 2.133 GHz with timings of 9-11-11-20_CR1 and voltage of 1.6125 volts.
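The multiplier arithmetic behind this overclock is straightforward: LGA 2011 CPUs derive their core clock from a 100 MHz base clock, so a x48 multiplier yields 4.8 GHz. A minimal sketch (the function name is ours):

```python
BCLK_MHZ = 100  # LGA 2011 base clock

def cpu_freq_ghz(multiplier: int) -> float:
    """Core frequency = base clock x multiplier."""
    return BCLK_MHZ * multiplier / 1000

# x48 -> 4.8 GHz, as set in the mainboard's BIOS for this testbed
```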
The testbed ran Microsoft Windows 7 Ultimate x64 SP1 with all critical updates installed. We used the following drivers:
- Intel Chipset Drivers WHQL dated 21.09.2013
- DirectX End-User Runtimes, dated 30 November 2010
- AMD Catalyst 13.12 WHQL dated 18.12.2013
- GeForce 331.93 Beta dated 27.11.2013
We benchmarked the graphics cards’ performance at two display resolutions: 1920×1080 and 2560×1440 pixels. There were two visual quality modes: “Quality+AF16x” means the default texturing quality in the drivers + 16x anisotropic filtering whereas “Quality+AF16x+MSAA 4x(8x)” means 16x anisotropic filtering and 4x or 8x antialiasing. We enabled anisotropic filtering and full-screen antialiasing from the game’s menu. If the corresponding options were missing, we changed these settings in the Control Panels of the Catalyst and GeForce drivers. We also disabled Vsync there. There were no other changes in the driver settings.
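To put the two test resolutions in perspective: 2560×1440 pushes roughly 78% more pixels per frame than 1920×1080, which is why frame rates fall so sharply in the higher mode. A quick check (a trivial sketch of our own):

```python
def pixel_count(width: int, height: int) -> int:
    """Pixels rendered per frame at a given resolution."""
    return width * height

ratio = pixel_count(2560, 1440) / pixel_count(1920, 1080)
# ratio is about 1.78: roughly 78% more pixels at 2560x1440
```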
The graphics cards were tested in two benchmarks and 15 games updated to the latest versions. Batman: Arkham Origins returns to our test programs whereas Battlefield 4 debuts in it:
- 3DMark (2013) (DirectX 9/11) – Cloud Gate, Fire Strike and Fire Strike Extreme scenes.
- Unigine Valley Bench (DirectX 11) – version 1.0, maximum visual quality settings, 16x AF and/or 4x MSAA, 1920×1080.
- Total War: SHOGUN 2 – Fall of the Samurai (DirectX 11) – version 1.1.0, integrated benchmark (the Sekigahara battle) with the maximum visual quality settings and 8x MSAA.
- Battlefield 3 (DirectX 11) – version 1.4, Ultra settings, two successive runs of a scripted scene from the beginning of the “Going Hunting” mission (110 seconds long).
- Sniper Elite V2 Benchmark (DirectX 11) – version 1.05, we used Adrenaline Sniper Elite V2 Benchmark Tool v1.0.0.2 BETA with maximum graphics quality settings (“Ultra” profile), Advanced Shadows: HIGH, Ambient Occlusion: ON, Stereo 3D: OFF, Supersampling: OFF, two sequential runs of the test.
- Sleeping Dogs (DirectX 11) – version 1.5, we used Adrenaline Sleeping Dogs Benchmark Tool with maximum image quality settings, Hi-Res Textures pack installed, FPS Limiter and V-Sync disabled, two consecutive runs of the built-in benchmark with quality antialiasing at Normal and Extreme levels.
- Hitman: Absolution (DirectX 11) – version 1.0.447.0, built-in test with Ultra settings, enabled tessellation, FXAA and global lighting.
- Crysis 3 (DirectX 11) – maximum visual quality settings, Motion Blur – Medium, lens flares – on, FXAA and MSAA 4x, two consecutive runs of a scripted scene from the beginning of the “Swamp” mission (110 seconds long).
- Tomb Raider (2013) (DirectX 11) – version 1.1.748.0, we used Adrenaline Benchmark Tool, all image quality settings set to “Ultra”, V-Sync disabled, FXAA and 2x SSAA antialiasing enabled, TessFX technology activated, two consecutive runs of the in-game benchmark.
- BioShock Infinite (DirectX 11) – we used Adrenaline Action Benchmark Tool with “Ultra” and “Ultra+DOF” quality settings, two consecutive runs of the in-game benchmark.
- Metro: Last Light (DirectX 11) – we used the built-in benchmark for two consecutive runs of the D6 scene. All image quality and tessellation settings were at “Very High”, Advanced PhysX technology was enabled, we tested with and without SSAA antialiasing.
- GRID 2 (DirectX 11) – we used the built-in benchmark, the visual quality settings were all at their maximums, the tests were run with and without MSAA 8x antialiasing with eight cars on the Chicago track.
- Company of Heroes 2 (DirectX 11) – two consecutive runs of the integrated benchmark at maximum image quality and physics effects settings.
- Total War: Rome II (DirectX 11) — version 1.8.0 build 8891.481024, Extreme quality, V-Sync disabled, SSAA enabled, two consecutive runs of the integrated benchmark.
- ArmA III (DirectX 11) — version 1.08.0.113494, we used the ArmA3Mark benchmark with the Ultra quality, V-Sync disabled, two consecutive runs of the integrated benchmark.
- Batman: Arkham Origins (DirectX 11) – version 1.0 update 8, Ultra visual quality, V-Sync disabled, all the effects enabled, all DX11 Enhanced features enabled, Hardware Accelerated PhysX = Normal, two consecutive runs of the in-game benchmarks.
- Battlefield 4 (DirectX 11) – version 1.4, Ultra settings, two successive runs of a scripted scene from the beginning of the “Tashgar” mission (110 seconds long).
We publish the bottom frame rate for games that report it. Each test was run twice, the final result being the best of the two if they differed by less than 1%. If we had a larger difference, we reran the test at least once to get repeatable results.
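The repeat-until-repeatable procedure described above can be sketched as follows (a hypothetical helper, not the authors' actual tooling; `rerun` stands for one more pass of the benchmark):

```python
def settle(first: float, second: float, rerun, tol: float = 0.01) -> float:
    """Keep the best of two benchmark runs if they agree within `tol` (1%);
    otherwise keep re-running until the results are repeatable."""
    while abs(first - second) / max(first, second) >= tol:
        first, second = second, rerun()
    return max(first, second)
```

For example, `settle(60.0, 60.3, rerun)` returns 60.3 immediately, since the two runs differ by only 0.5%.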
The results of the AMD Radeon R9 290X at its default and overclocked frequencies are indicated in red. The color of the Nvidia GeForce GTX 780 Ti is green. The EVGA GeForce GTX 780 Superclocked ACX is turquoise and the MSI Radeon R9 280X Gaming is lilac.
AMD-based cards have always been strong in Futuremark’s tests, so it is no wonder that the Radeon R9 290X feels at its ease in the newest 3DMark:
At the default clock rates the Radeon R9 290X is up to 31% faster than the MSI Radeon R9 280X Gaming and is also ahead of the GeForce GTX 780. When overclocked, it can challenge the slightly more expensive GeForce GTX 780 Ti. Of course, the latter can be easily overclocked as well, yet the first results of the Radeon R9 290X look encouraging.
Unigine Valley Bench
This benchmark shows us a different picture:
The Radeon R9 290X still enjoys a hefty advantage over the MSI Radeon R9 280X Gaming – 34 to 36%, yet this is barely enough to compete with the more affordable GeForce GTX 780. When both cards are overclocked, the GTX 780 is superior.
Now let’s turn to our gaming tests.
Total War: SHOGUN 2 – Fall of the Samurai
Except at the most resource-consuming settings (2560×1440 with 8x antialiasing), the Radeon R9 290X proves to be a little faster than the GeForce GTX 780, both working at their standard clock rates. But thanks to the higher frequency potential of the EVGA card, the overclocked GTX 780 is just as good as the overclocked R9 290X. Original Radeon R9 290X cards will probably overclock much better, yet Nvidia has the advantage of time and also an ace up its sleeve in the form of the GeForce GTX 780 Ti.
Note also that the Radeon R9 290X is 24 to 29% ahead of the MSI Radeon R9 280X Gaming here.
The standings are overall the same as in the previous game:
We can just note that the Radeon R9 290X enjoys a larger advantage over the MSI Radeon R9 280X Gaming: 28 to 32%.
Sniper Elite V2 Benchmark
As opposed to the two previous tests, it is the Nvidia-based cards that are ahead here.
The Radeon R9 290X is 8 to 11% behind the GeForce GTX 780 at the standard clock rates and only manages to bridge the gap by means of overclocking. The new card from AMD can’t compete with the GeForce GTX 780 Ti in this game and outperforms the MSI Radeon R9 280X Gaming by 22 to 25%. That doesn’t seem to be as large an advantage as we expected from the specs of the new Hawaii XT GPU.
The AMD Radeon R9 290X is unrivaled in this game:
It is only at 2560×1440 with antialiasing that the GeForce GTX 780 and the GeForce GTX 780 Ti can fight back.
This game is yet another example of an application where AMD-based cards are traditionally stronger than their Nvidia-based opponents.
The Radeon R9 290X is 14 to 32% faster than the GeForce GTX 780. Its advantage over its predecessor Radeon R9 280X (or Radeon HD 7970 GHz Edition) reaches 46%.
The Radeon R9 290X is always a little faster than the GeForce GTX 780 when both work at their default clock rates. When overclocked, they are roughly equal to each other and ahead of the GeForce GTX 780 Ti. The MSI Radeon R9 280X Gaming is a typical 29 to 31% behind the new AMD solution.
Tomb Raider (2013)
The gap between the Radeon R9 290X and the MSI Radeon R9 280X Gaming is smaller than average in this game: 22 to 24%.
The Radeon R9 290X is 5 to 11% ahead of the GeForce GTX 780 in every test mode. When overclocked, it even beats the standard GeForce GTX 780 Ti!
This game’s rendering engine is favorable towards AMD GPUs, therefore the Radeon R9 290X is always better than the GeForce GTX 780. It must be noted that the new card from AMD isn’t far ahead when we enable antialiasing, but each of the AMD-based solutions can deliver higher bottom speed, which is important for smooth gameplay.
Metro: Last Light
First, let’s check this game out with the Advanced PhysX option turned on:
The GeForce series are unrivaled in this case. The Radeon R9 290X is left to compete against the MSI Radeon R9 280X Gaming, beating it by 29 to 44%.
It’s different when we disable Advanced PhysX:
The Radeon R9 290X is superior in every test mode, enjoying a large advantage over the GeForce GTX 780 when SSAA is turned off. When overclocked, it even beats the standard GeForce GTX 780 Ti.
GRID 2 runs faster on the AMD-based solutions. The Radeon R9 290X is 8 to 15% ahead of the GeForce GTX 780 and gets very close to the GTX 780 Ti in some of the test modes. Its advantage over the MSI Radeon R9 280X Gaming amounts to 20-28%.
Company of Heroes 2
The Radeon R9 290X is even more impressive in this game:
It beats the GeForce GTX 780 by 20-36% and leaves the GTX 780 Ti behind, too. Its Tahiti XT-based predecessor is slower by 25 to 39%.
Total War: Rome II
Here is yet another game where the Radeon R9 290X beats the GeForce GTX 780:
The gap isn’t large, yet AMD is superior. The overclocked Radeon R9 290X is comparable to the standard GeForce GTX 780 Ti again.
Top-end graphics cards can only be compared at the most resource-consuming settings in this benchmark. Otherwise, their speed is limited by the overall performance of the platform.
Batman: Arkham Origins
Returning to our testing program after a recent update, the game suggests that the Radeon R9 290X is overall slower than the GeForce GTX 780.
However, we can note that the Radeon R9 290X is just as good as the GeForce GTX 780 when we enable antialiasing. And most gamers are likely to enable it on such top-end graphics cards. The Radeon R9 290X even beats its opponent and the GeForce GTX 780 Ti in the heaviest test mode. It is in this game that we observe the largest gap between the Radeon R9 290X and the Radeon R9 280X. It is as large as 47%!
It’s the first time we ever use Battlefield 4 for our benchmarking, so we want to show you the graphics settings we use (we vary the resolution and the antialiasing mode, of course):
As you can see, we choose the maximum visual quality possible. But you should lower the numbers by 20-25% to estimate the real-life frame rate because the intro scene is not actual gameplay with its numerous explosions and shootouts.
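Applying the 20-25% correction suggested above, an intro-scene result translates to expected gameplay speed roughly as follows (a trivial sketch; the function name is ours):

```python
def estimate_gameplay_fps(benchmark_fps: float) -> tuple[float, float]:
    """Scale an intro-scene result down by 20-25% to approximate real
    combat scenes with their numerous explosions and shootouts."""
    return benchmark_fps * 0.75, benchmark_fps * 0.80

# e.g. 60 fps in the intro scene -> roughly 45-48 fps in actual gameplay
```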
Here are the results:
The Radeon R9 290X beats the GeForce GTX 780 once again. The new Radeon is fast indeed and can outperform the standard GeForce GTX 780 Ti when overclocked. Its advantage over the Radeon R9 280X amounts to 26-28% here.
Now we can proceed to our performance summary charts.
Performance Summary Charts
First of all, let’s see how much faster the new single-GPU flagship Radeon R9 290X is than its predecessor (Radeon HD 7970 GHz Edition) as represented by the MSI Radeon R9 280X Gaming. The latter serves as the baseline:
The average performance growth is 23-26% without antialiasing and 29-31% with antialiasing across all the tests. We can recall the GeForce GTX Titan which was 34-55% faster than the GeForce GTX 680 at the time of its release, but the Titan costs $1000 even now. Moreover, the Gaming card from MSI is pre-overclocked by 50 MHz, which adds 2-3% to the performance of the reference Radeon R9 280X.
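The article doesn't state how the per-game results are averaged into these summary figures; a common choice for such charts is the geometric mean of per-game performance ratios, which keeps one outlier title from dominating the average. A sketch under that assumption:

```python
from math import prod

def average_advantage_pct(ratios: list[float]) -> float:
    """Geometric mean of per-game fps ratios (card / baseline),
    expressed as a percentage advantage over the baseline."""
    gmean = prod(ratios) ** (1 / len(ratios))
    return (gmean - 1) * 100

# e.g. per-game ratios of 1.20 and 1.30 average out to roughly 24.9%
```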
The second pair of our summary charts helps compare the Radeon R9 290X and the GeForce GTX 780 at their default clock rates, the latter card serving as the baseline.
The GeForce GTX 780 wins in Sniper Elite V2, Metro: Last Light (with Advanced PhysX) and in Batman: Arkham Origins (without antialiasing). In the rest of the games the Radeon R9 290X is at least as good as the GeForce GTX 780 and even beats the latter in Sleeping Dogs, Hitman: Absolution, GRID 2, Company of Heroes 2, Total War: Rome II and Battlefield 4. The average advantage of the Radeon R9 290X across all the games is 7-8% without antialiasing and 6-10% with antialiasing.
As we mentioned in the Introduction, today even original GeForce GTX 780s with high-efficiency coolers and factory overclocking are considerably cheaper than reference Radeon R9 290X cards, so the new card should also be compared with the slightly more expensive GeForce GTX 780 Ti. Here are the diagrams:
Nvidia wins this round. The Radeon R9 290X is only ahead in Hitman: Absolution, Company of Heroes 2, and in some test modes of Sleeping Dogs and Batman: Arkham Origins. In the rest of the games the GeForce GTX 780 Ti remains the fastest single-GPU gaming card.
The last pair of diagrams indicates the performance scalability of the Radeon R9 290X. We overclocked our sample’s GPU and memory by 130 MHz (13%) and 760 MHz (15.2%) and here are the benefits:
So the overclocked Radeon R9 290X is 9-10% faster at 1920×1080 and 10-11% faster at 2560×1440. That’s normal scalability for a top-end graphics card.
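The gains can be set against the clock increases to judge scaling efficiency (our own framing, not the authors'): a 13% core overclock yielding roughly 10% more fps means the card converts about three quarters of the extra clock into performance.

```python
def oc_gain_pct(stock: float, overclocked: float) -> float:
    """Percentage clock increase from an overclock."""
    return (overclocked - stock) / stock * 100

core_gain = oc_gain_pct(1000, 1130)  # 13.0% core overclock
mem_gain = oc_gain_pct(5000, 5760)   # 15.2% memory overclock
efficiency = 10 / core_gain          # ~0.77 fps-% gained per core-clock-%
```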
We measured the power consumption of computer systems with different graphics cards using a multifunctional panel Zalman ZM-MFC3 which can report how much power a computer (the monitor not included) draws from a wall socket. There were two test modes: 2D (editing documents in Microsoft Word and web surfing) and 3D (the intro scene of the Swamp level from Crysis 3 running four times in a loop at 2560×1440 with maximum visual quality settings but without MSAA). Here are the results:
The peak power consumption of the Radeon R9 290X configuration is 69 watts (14.4%) higher compared to the MSI Radeon R9 280X Gaming and only 20 watts (3.8%) higher compared to the standard GeForce GTX 780. The GeForce GTX 780 Ti configuration needs about as much power as the AMD Radeon R9 290X. When the latter is overclocked, the computer’s power draw rises to 574 watts (by 4.6%). Any of these configurations can be powered by a 600-watt PSU.
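The percentage figures above are plain relative increases. As a sketch (the ~549 W stock-290X system total is our inference from the stated 4.6% rise to 574 W, not a number the article quotes directly):

```python
def percent_increase(base_w: float, new_w: float) -> float:
    """Relative increase in system power draw, in percent."""
    return (new_w - base_w) / base_w * 100

# overclocking the R9 290X: ~549 W at stock clocks -> 574 W, a ~4.6% rise
```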
At the time of its announcement in October 2013 the AMD Radeon R9 290X made a worthy opponent to the Nvidia GeForce GTX 780 despite all its problems with noise and overheating. However, Nvidia was quick to react by cutting the price of its single-GPU flagship (we don’t count the Titan, and the GTX 780 Ti hadn’t been released yet). Today, in early 2014, the reference Radeon R9 290X costs more than original GeForce GTX 780s. Even though it is faster on average, the small difference in performance doesn’t make up for the higher noise level and potential GPU frequency drops due to overheating.
If you want to buy a Radeon R9 290X, you should wait for original versions from first-tier brands as all of them have already announced their Hawaii XT based products. If priced lower than the GeForce GTX 780 Ti, they will be quite an attractive buy. We’ll test one such product very soon. We’re also preparing a review of the AMD Radeon R9 290 (without the “X”), so stay tuned to us in the new year!
AMD Radeon R9 review: 280X, 285, 290 & 290X
We’ve tested four AMD chipsets for gaming enthusiasts. First up is the Radeon R9 280X. This has a Tahiti XTL or XT2 core, depending on the manufacturer, with 2,048 GPU cores and stock 850MHz core and 1,000MHz boost speeds. The Tahiti XTL and XT2 chips both have the same number of GPU cores and the same clock speeds, but the XT2 variant apparently has some power efficiency improvements.
The Radeon R9 285 cards are based on AMD’s newest Tonga PRO GPUs, and are the first cards to support AMD’s Graphics Core Next (GCN) 1.2 architecture. According to Wikipedia, this brings “improved tessellation performance, lossless delta colour compression in order to reduce memory bandwidth usage, [and] an updated and more efficient instruction set”. The R9 285 cards we’ve tested all have 2GB of GDDR5 RAM, compared to the 3GB in the mid-range R9 280, and have the same 1,792 GPU cores. AMD’s stock R9 285 speed is 918MHz.
The R9 290 is significantly more expensive than the R9 285, but is a big step up in terms of specification. You get 2,560 GPU cores, 4GB of GDDR5 memory and a 947MHz core clock speed. Finally, there’s the most powerful card we’ve tested: the R9 290X. This has a huge 2,816 GPU cores running at 1,080MHz, and is a rival for the top-end Nvidia GTX 970 cards.
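These core counts and clocks translate into theoretical FP32 throughput via cores × 2 FLOPs/cycle × clock, the standard back-of-the-envelope for GCN GPUs (our own calculation, not a figure from the article):

```python
def fp32_tflops(cores: int, clock_mhz: float) -> float:
    """Theoretical single-precision throughput for a GCN GPU: each core
    performs one fused multiply-add (2 FLOPs) per cycle."""
    return cores * 2 * clock_mhz * 1e6 / 1e12

r9_285 = fp32_tflops(1792, 918)    # ~3.3 TFLOPS at AMD's stock 918MHz
r9_290 = fp32_tflops(2560, 947)    # ~4.8 TFLOPS
r9_290x = fp32_tflops(2816, 1080)  # ~6.1 TFLOPS at the 1,080MHz quoted above
```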
|Card||Rating||Award||Price inc VAT||Supplier|
|XFX Radeon R9 280X Double Dissipation Black Edition||5||Best Buy||£186|| |
|Asus Strix R9 285||3|| ||£207||www.scan.co.uk|
|Club3D Radeon R9 285 royalQueen||3|| ||£187||www.morecomputers.com|
|Sapphire R9 285 2GB GDDR5 ITX Compact Edition||4|| ||£185|| |
|XFX AMD Radeon R9 290 Black Double Dissipation Edition||5||Recommended||£242||www.ebuyer.com|
|Sapphire VAPOR-X R9 290X 4GB GDDR5 TRI-X OC||4|| ||£273||www.dabs.com|
Radeon R9 280X
The XFX Radeon R9 280X is a very big card; at 295mm long it’s only slightly shorter than the huge Sapphire R9 290X, so measure your case carefully. The card needs both six-pin and eight-pin PCI Express power connectors, so you’ll need a decent power supply. The R9 280X has one dual-link and one single-link DVI port, HDMI and twin Mini DisplayPort sockets, so you may need a Mini DisplayPort to DisplayPort adaptor (around £10 from www.scan.co.uk) to plug into your DisplayPort monitor.
The R9 280X was seriously impressive in our tests. It could play Dirt Showdown smoothly at 3,840×2,160 at almost 42fps, had no problem with Tomb Raider at 2,560×1,440 with Ultra quality and demanding SSAA switched on, and could just about play Tomb Raider smoothly at 3,840×2,160 using the FXAA rather than SSAA anti-aliasing technique, with a 30.5fps average. In the highly challenging Metro: Last Light Redux benchmark, we saw a playable 35.3fps average at 1,920×1,080 with Very High quality and SSAA enabled. Disabling SSAA at this resolution gave us a silky-smooth 59.8fps, and even at 2,560×1,440 we still saw a playable 39.8fps.
It may be over a year old, but the AMD R9 280X is still an impressive card, and is very good value for its performance. It wins a Best Buy award.
Radeon R9 285
We have three cards based on AMD’s newer R9 285 chipset. The Asus Strix R9 285 is the largest model, at 269mm long, and also the tallest, at 126mm high. This is due to a huge twin-fan cooler, which is almost silent when the graphics card is running at idle, but builds to a low rush under heavy load. The Club3D R9 285 royalQueen is smaller, at 218mm long, and also has twin cooling fans. However, this is a much noisier card than the Asus model under load when the twin fans spin up. Both the Asus and Club3D R9 285 cards have the standard twin DVI, HDMI and DisplayPort outputs.
The Sapphire R9 285 2GB GDDR5 ITX Compact Edition is something a bit different, as it’s a single-fan design in a short 172mm-long package. This is amazingly short for such a powerful card, meaning it will fit in even compact Mini-ITX cases, as its name suggests. The Sapphire card has two Mini DisplayPort outputs, so you may need an adaptor for your monitor (see the XFX Radeon R9 280X review above). The Sapphire R9 285 also requires a single 8-pin PCI Express power plug, compared to the twin 6-pin PCI Express plugs on the other two R9 285-based cards. Despite its single fan, the card is very quiet indeed, even under load.
All three cards are mildly overclocked compared to AMD’s reference R9 285 speed. The Asus card runs at 954MHz, the Club3D model at 945MHz and the Sapphire card at 928MHz. The cards have broadly similar performance in all three game tests; they produced very high, perfectly smooth frame rates in Dirt Showdown and Tomb Raider at 1,920×1,080 with Ultra detail, and maintained a just-playable average of 29fps (28fps for the Sapphire card) in Metro: Last Light Redux. This is a bit close for comfort, so you should turn off SSAA to get properly smooth frame rates in this challenging title.
All three cards managed around a playable 34fps in Dirt Showdown at 3,840×2,160 with Ultra detail, and in Tomb Raider at 2,560×1,440 we saw a playable 34fps from the Club3D and Sapphire cards and 35fps from the Asus model. To get a playable 2,560×1,440 frame rate in Metro: Last Light Redux we just had to turn off SSAA, whereupon we saw smooth 45fps averages from all three cards.
Among the R9 285 cards, the Sapphire model is our favourite, as it has similar performance to the other models but is smaller and quieter. The MSI GTX 960 Gaming 2G is cheaper and just as quick, so is a credible Nvidia-based rival, but our pick at this price is the XFX Radeon R9 280X, which is far quicker than all three Radeon R9 285 models.
Radeon R9 290
The last two AMD-based cards are very much in the hardcore gamer price range. The £242 XFX AMD Radeon R9 290 Black Double Dissipation Edition is overclocked to 980MHz, and the low roar its twin fans make under load isn’t intrusive.
The card is very quick indeed, with a smooth 44.1fps in Metro: Last Light Redux at 1,920×1,080, with Very High detail and SSAA enabled. This is very close to the scores managed by the more expensive Nvidia GTX 970-based cards. The card was quicker than the GTX 970 models in the Tomb Raider benchmark, too, with a huge 84.3fps.
48.6fps in Dirt Showdown at 3,840×2,160 shows the card isn’t troubled by this benchmark, and we even saw a smooth 34fps average in Tomb Raider at this huge resolution once we’d swapped the resource-hungry SSAA for the lighter FXAA anti-aliasing technique. In the Metro benchmark, leaving quality on Very High but turning off SSAA led to a smooth 49fps at 2,560×1,440, and we saw a just-playable 33fps frame rate at 3,840×2,160 by turning detail down to High.
The XFX AMD Radeon R9 290 Black Double Dissipation Edition is expensive, but incredibly powerful for the money. The Nvidia GTX 970 cards can’t match it for bang for buck, and it makes the Sapphire Vapor-X R9 290X look like overkill. If you’re going to play games at up to 2,560×1,440, the much cheaper XFX Radeon R9 280X is a better buy, but if you want to dabble with 4K gaming the R9 290 is a great way to do it.
Radeon R9 290X
At the top of the AMD tree is the Sapphire VAPOR-X R9 290X 4GB GDDR5 TRI-X OC. This card’s grandiose title is matched by its appearance. It’s huge, at 300mm long, heavy, and has three big fans and fancy metallic turquoise paint. It also needs two 8-pin PCI Express power connectors.
The card is very quiet at idle, as the two outer fans power down completely. It makes a low roar under load, but the noise’s low pitch makes it unobtrusive. This is easily the fastest card we tested, on either the AMD or Nvidia side. The 1,920×1,080 Dirt Showdown and Tomb Raider benchmarks were dispatched without a problem, and 49fps in Metro: Last Light Redux with Very High detail and SSAA is a couple of frames per second better than the more expensive Nvidia GTX 970-based cards managed.
Dirt Showdown and Tomb Raider weren’t a problem at 2,560×1,440 either, and turning off SSAA in Metro gave us 54fps at this resolution, even with the game set to Very High detail. We also saw a playable 37fps at 3,840×2,160 once we’d dropped detail to High.
The Sapphire VAPOR-X R9 290X 4GB GDDR5 TRI-X OC is a highly impressive card, which manages to be faster than the Nvidia GTX 970-based models despite being slightly cheaper. However, the XFX AMD Radeon R9 290 Black Double Dissipation Edition does most of what the R9 290X can do at a lower price, so is a better buy overall.
|Model||Radeon R9 280X Double Dissipation Black Edition||Strix R9 285||Radeon R9 285 royalQueen||R9 285 2GB GDDR5 ITX Compact Edition||AMD Radeon R9 290 Black Double Dissipation Edition||VAPOR-X R9 290X 4GB GDDR5 TRI-X OC|
|Slots taken up||2||2||2||2||2||3|
|GPU||AMD Radeon R9 280X||AMD Radeon R9 285||AMD Radeon R9 285||AMD Radeon R9 285||AMD Radeon R9 290||AMD Radeon R9 290X|
|GPU clock speed||1,080MHz||954MHz||945MHz||928MHz||980MHz||1,080MHz|
|Memory||3GB GDDR5||2GB GDDR5||2GB GDDR5||2GB GDDR5||4GB GDDR5||4GB GDDR5|
|Max memory bandwidth||176GB/s||176GB/s||176GB/s||320GB/s||361GB/s|
|Graphics card length||295mm||269mm||218mm||172mm||283mm||300mm|
|Mini HDMI outputs||0||0||0||0||0||0|
|Mini DisplayPort outputs||2||0||0||2||0||0|
|Power leads required||1x 6-pin, 1x 8-pin PCI Express||2x 6-pin PCI Express||2x 6-pin PCI Express||1x 8-pin PCI-Express||1x 6-pin, 1x 8-pin PCI Express||2x 8-pin PCI Express|
|Accessories||Mini DisplayPort to DisplayPort adaptor, 2x Molex to 8-pin PCI Express adaptors||2x Molex to 6-pin PCI Express adaptors||None||DVI to VGA adaptor, DisplayPort cable, Mini DisplayPort to DisplayPort adaptor, 2x 6-pin to 8-pin PCI-Express adaptors||1x twin Molex to 6-pin PCI Express adaptor, 1x twin 6-pin to 8-pin PCI Express adaptor||2x twin Molex to 8-pin PCI Express adaptors|
|Price including VAT||£186||£207||£187||£185||£242||£273|
|Warranty||Two years RTB||Two years RTB||Two years RTB||Two years RTB||Two years RTB||Two years RTB|
AMD Radeon R9 290X video card review and testing (GECID.com)
It’s no secret that the announcement of the new AMD Volcanic Islands family of graphics accelerators, with rather low recommended prices for the flagship models, surprised not only users but also AMD’s main competitor. The $549 price announced for the subject of our review, the AMD Radeon R9 290X, prompted price cuts on two of the three “top” graphics cards from NVIDIA: at the end of October 2013, the recommended price of the NVIDIA GeForce GTX 780 was reduced from $649 to $499, and that of the NVIDIA GeForce GTX 770 from $399 to $329. Only the flagship NVIDIA GeForce GTX TITAN remained unshakable with its $999 price tag.
Let’s see what exactly made the competitor react so quickly to the release of the AMD Radeon R9 290X. After all, the announcement of a flagship video card with a $549 price tag is a remarkable event in itself, but without an appropriate level of performance it could not have stirred up the market so much.
The graphics core of the newcomer, called AMD Hawaii XT, is quite logically built on the 28nm AMD Graphics Core Next microarchitecture and consists of four unified blocks called Shader Engines. Compared with the AMD Radeon HD 7970, the L2 cache has grown from 384 KB to 1 MB and the L1-L2 bus bandwidth from 700 GB/s to 1 TB/s.
In turn, each Shader Engine contains, in addition to a dedicated geometry processing unit, 11 compute units and 16 rasterization units.
Each compute unit consists of 64 stream processors and 4 texture units.
Another advantage of the AMD Radeon R9 290X is its wider, 512-bit memory bus. This increases throughput by 20% compared to the AMD Radeon HD 7970: given the effective video memory frequency of 5000 MHz, the card can now move an impressive 320 GB of data per second.
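The numbers above fit together: four Shader Engines of 11 compute units with 64 stream processors each give the 2,816-SP total, and a 512-bit bus at 5,000 MHz effective works out to exactly 320 GB/s. A quick sanity check:

```python
shader_engines = 4
cus_per_engine = 11
sps_per_cu = 64
total_sps = shader_engines * cus_per_engine * sps_per_cu  # 2816 stream processors

bus_width_bits = 512
effective_mem_mhz = 5000
# bytes per transfer x effective transfer rate, expressed in GB/s
bandwidth_gbs = bus_width_bits / 8 * effective_mem_mhz * 1e6 / 1e9  # 320.0 GB/s
```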
As a result, the performance characteristics of new items in comparison with other models are as follows:
|Model||Radeon R9 290X||Radeon R9 280X||Radeon HD 7970 GHz Edition|
|Number of stream processors||2816||2048||2048|
|GPU frequency, MHz||up to 1000||up to 1000||up to 1050|
|Effective video memory frequency, MHz||5000||6000||6000|
|Memory size, MB||4096||3072||3072|
|Video memory interface, bit||512||384||384|
Now let’s take a closer look at the features of the new flagship.
The AMD Radeon R9 290X received an updated power consumption and overclocking control mechanism: AMD PowerTune.
The newcomer uses a special VID interface with a dedicated telemetry channel, which is designed to reduce the processing time of the received data for more efficient voltage control in the range from 0 to 1.55 V. The step of automatic voltage adjustment is only 6.25 mV, which allows the performance and TDP of the video card to be regulated more precisely.
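With a 0-1.55 V range and 6.25 mV granularity, the regulator can address 248 discrete voltage levels, which is what makes such fine-grained control possible. A quick check:

```python
v_max = 1.55          # upper limit of the control range, volts
step_v = 6.25 / 1000  # 6.25 mV adjustment granularity

num_steps = v_max / step_v  # 248 discrete steps across the range
```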
The AMD Radeon R9 290X is also notable for its lack of AMD CrossFireX bridge connectors, which makes it the first flagship with a purely software implementation of this function. According to AMD, the bandwidth of the PCI Express 3.0 bus is more than enough to keep the frame buffers in sync, and even when connected via a PCI Express 2.0 slot there should be no significant drop in performance. In the future we will definitely test the AMD Radeon R9 290X in this mode, but for now we can only take the manufacturer’s word for it.
An interesting feature is support for the VESA Display ID v1.3 standard, which allows displays with resolutions up to «4K» (Ultra HD) to be connected and configured in plug'n'play mode without user intervention, greatly simplifying setup.
Thanks to TrueAudio DSP support, the AMD Radeon R9 290X can output sound, processed by the developer or automatically via the built-in middleware libraries, not only over the HDMI interface but also to any other audio device connected via USB or a 3.5 mm audio jack.
Naturally, there is support for the DirectX 11.2 API and AMD Mantle, which we will not dwell on in detail, since their features are covered in the material on the announcement of the AMD Volcanic Islands family.
And now let us move from theory to practice by examining the reference AMD Radeon R9 290X video card in more detail.
| Model | AMD Radeon R9 290X |
| Graphics core | AMD Hawaii XT |
| Number of universal shader processors | 2816 |
| Supported APIs and technologies | DirectX 11.2, OpenGL 4.3, AMD Mantle, AMD Eyefinity, AMD App Acceleration, AMD HD3D, AMD CrossFireX, AMD PowerPlay, AMD PowerTune, AMD ZeroCore, AMD TrueAudio |
| Graphics core frequency, MHz | up to 1000 |
| Memory frequency (effective), MHz | 1250 (5000) |
| Memory capacity, GB | 4 |
| Memory bus width, bit | 512 |
| Memory bandwidth, GB/s | 320 |
| Internal interface type | PCI Express 3.0 x16 |
| Maximum resolution | Digital — up to 4096 x 2160; Analog — up to 2048 x 1536 |
| Image output interfaces | 2 x DVI-D, 1 x HDMI, 1 x DisplayPort |
| Support for HDCP and HD video decoding | Yes |
| Minimum power supply unit, W | 750 |
| Dimensions from the official website (measured in our test lab), mm | Not specified (290 x 112) |

Fresh drivers can be downloaded from the website of the GPU manufacturer.
Appearance and element base
The card is built on a completely new black printed circuit board. As you can see, it cannot boast compact dimensions, which is hardly surprising for a flagship solution. All sixteen memory chips are located on the front side of the PCB, soldered around the graphics core.
The AMD Radeon R9 290X is powered according to the scheme familiar from the AMD Radeon HD 7970, comprising seven phases: five for the GPU, one for the video memory chips, and one for the PLL. As for the component base, we note the solid capacitors, ferrite-core chokes, and metal-cased field-effect transistors.
The core power subsystem is built around International Rectifier's IR 3567B PWM controller, which, according to the documentation, supports eight-phase (6+2) control, automatic switching of active phases, and a number of protective technologies. Recall that it is thanks to this controller, with its dedicated telemetry channel, that the operation of AMD PowerTune technology has been significantly improved.
It is interesting to note that AMD itself is modestly silent about the power consumption and TDP level of its flagship. There are also no recommendations for a power supply. However, if you look at the information provided by the main partners, who currently only release reference versions under their brand, you can see that the power supply in a system with an AMD Radeon R9 290X must be at least 750 watts.
For additional power, you will need one 6-pin and one 8-pin PCIe cable. Also note that the power cables connect and disconnect without much difficulty.
On the same side as the additional power connectors is a switch familiar from the AMD Radeon HD 7970. But whereas on the previous flagship it switched BIOS versions, on the new card it changes the operating mode between normal and quiet, the differences between which we will discuss a little later.
As noted, CrossFireX on the AMD Radeon R9 290X is implemented in software. According to AMD, the user can combine up to four video cards for joint rendering. With two video cards installed, the claimed performance gain is 187%, and with three — 260%, which we will certainly verify in subsequent materials.
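AMD's claimed scaling figures translate into per-card efficiency as follows (a small illustrative calculation, not an official metric):

```python
# Claimed total performance relative to a single card, in percent.
claimed = {2: 187, 3: 260}

for cards, total_pct in claimed.items():
    efficiency = total_pct / cards   # average contribution per card, %
    print(f"{cards} cards: {efficiency:.1f}% of a single card each")
```

In other words, the second card is claimed to add roughly 93.5% of a single card's performance, and with three cards each contributes about 86.7% on average.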
The following set of video outputs is provided for displaying images on the graphics adapter:
- 2 x DVI-D;
- 1 x HDMI;
- 1 x DisplayPort.
Up to six displays can be connected simultaneously: three to the DisplayPort video output using the appropriate adapter, and one each to the remaining three ports.
The reverse side of the printed circuit board, in addition to the cooler mounting, carries several elements of the power subsystem, so we recommend exercising some caution when handling the card.
The AMD Radeon R9 290X is based on the AMD Hawaii XT graphics chip. It is produced on the 28 nm process technology and contains 2816 universal shader pipelines, 64 rasterization units, and 176 texture units. The GPU frequency changes dynamically with load and can rise to 1000 MHz.
The card's 4 GB of memory is assembled from sixteen 256 MB chips manufactured by SK Hynix. Judging by the marking (H5GQ2h34AFR-ROC) and the documentation, their nominal frequency is 6 GHz, although in our case it was lowered to 5000 MHz, which gives hope for good overclocking potential. Data exchange between the graphics core and memory is carried out over a 512-bit bus capable of passing 320 GB/s.
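The memory configuration quoted above can be cross-checked: 16 chips of 256 MB on a 512-bit bus implies the standard 32-bit interface per GDDR5 chip (a sketch using only the figures from this paragraph):

```python
CHIPS = 16
CHIP_MB = 256
BUS_BITS = 512

total_gb = CHIPS * CHIP_MB / 1024   # total capacity, GB
bits_per_chip = BUS_BITS // CHIPS   # bus width contributed by each chip

print(total_gb)       # 4.0
print(bits_per_chip)  # 32
```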
The AMD Radeon R9 290X graphics card with the cooling system installed occupies two expansion slots and has a total length of 290 mm, according to our measurements.
The turbine-type cooler itself consists of a single radiator with a vapor chamber at the base, a 76 mm fan, and a plastic shroud.
A massive plate is soldered to the back of the radiator, on which the fan is fixed. With the help of a special thermal interface, it contacts with video memory chips and field-effect transistors, removing heat to the main radiator.
As you can see, all elements of the heatsink are securely soldered to each other, and the cooler contacts directly with the graphics core using a layer of thermal paste applied to the copper base.
With automatic fan speed control, under maximum load the graphics core heated up to 94 degrees, with the fan, judging by the monitoring readings, working at 55% of its maximum speed. Subjectively, the noise level can be described as «above average»: it stands out noticeably against the background of other PC components.
After forcing the fan to maximum speed, the GPU temperature dropped to 73 degrees. The resulting noise level can be described as «very high, and not comfortable for prolonged use».
When there is no load, the graphics core and memory frequencies are automatically reduced to reduce their power consumption and heat dissipation. In this mode, the GPU temperature does not exceed 44 degrees, and the noise level is quite quiet.
As a result, a few facts are worth noting. First, it is a pity that quiet turbine-type cooling systems are so far the preserve of HIS, while AMD did not surprise us, releasing a rather noisy cooler. Second, high temperatures under automatic fan control are standard for the AMD Radeon R9 290X in normal mode, which favors sustaining high GPU frequencies. If you put the video card in quiet mode, the graphics core frequency will drop until a fan speed of 2000 rpm is enough to cool it; accordingly, not only the noise level will decrease, but also performance along with power consumption.
Review and testing of MSI Radeon R9 290X LIGHTNING (GECID.com)
Users are well aware of the MSI LIGHTNING graphics card line, which Micro-Star Int'l Co., Ltd introduced back in 2009. Its engineers do not limit themselves to replacing the cooling system and slightly raising the frequencies of the main components, but completely redesign the video cards. The result is thoroughly unique hi-end products that are typically among the fastest on the market. In addition, MSI LIGHTNING graphics adapters carry a set of extra features the user can exploit to achieve even greater performance. Such improvements significantly increase the price of already expensive solutions, but the premium is justified: these video cards are aimed not only at fans of modern games but also at enthusiasts and professional overclockers, for whom it is important to post the highest results in various benchmarks and win overclocking competitions.
MSI's LIGHTNING line of graphics adapters has been improved year by year with new features and optimizations. These cards not only emphasize the Taiwanese company's strong position in the video card market, but also demonstrate its considerable engineering potential. It is therefore not surprising that the line has received a large number of awards and helped set many world records. Moreover, overclockers themselves take an active part in the development and refinement of the LIGHTNING line, which has a positive effect on the consumer qualities of these graphics accelerators.
In this article, we will consider a single-chip video card based on the AMD Radeon R9 290X graphics processor from MSI. It belongs to the high-performance segment of the market and has a large set of interesting and unique features.
Let’s start the review of MSI Radeon R9 290X LIGHTNING version by examining its characteristics and comparing them with the reference solution.
| Model | MSI Radeon R9 290X LIGHTNING |
| GPU | AMD Radeon R9 290X (Hawaii XT) |
| Number of universal shader processors | 2816 |
| Supported APIs and technologies | DirectX 11.2, OpenGL 4.3, AMD Mantle, AMD Eyefinity, AMD App Acceleration, AMD HD3D, AMD CrossFireX, AMD PowerPlay, AMD PowerTune, AMD ZeroCore, AMD TrueAudio |
| Graphics core frequency, MHz | 1080 |
| Memory frequency (effective), MHz | 1250 (5000) |
| Memory size, GB | 4 |
| Memory bus width, bit | 512 |
| Memory bandwidth, GB/s | 320 |
| Internal interface type | PCI Express 3.0 x16 |
| Maximum resolution | Digital — up to 4096 x 2160 |
| Image output interfaces | 2 x DVI-D, 1 x HDMI, 1 x DisplayPort |
| HDCP support and HD video decoding | Yes |
| Minimum power supply unit, W | 500 |
| Dimensions from the official website (measured in our test lab), mm | 302 x 131 x 55 (315 x 131) |

Latest drivers can be downloaded from the MSI website or the GPU manufacturer's website.
According to the table, the differences between the MSI Radeon R9 290X LIGHTNING and the reference graphics card are minimal. Only the increased graphics chip frequency stands out immediately, raised from the reference 1000 to 1080 MHz (an increase of 8%); the rest of the characteristics are identical. At first glance, it may seem that we have merely a slightly overclocked AMD Radeon R9 290X with an alternative cooling system. However, the MSI Radeon R9 290X LIGHTNING differs from the «reference» solution in quite a few significant ways, which we will discuss in the following sections.
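The factory overclock figure can be verified with a one-liner (using the two frequencies quoted above):

```python
REF_MHZ, FACTORY_MHZ = 1000, 1080
gain_pct = (FACTORY_MHZ - REF_MHZ) / REF_MHZ * 100
print(gain_pct)  # 8.0
```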
Packaging and contents
The video card is delivered in a rather large cardboard box with brightly designed printing, dominated by yellow and black. The front side shows a jet plane, symbolizing the high speed of this graphics adapter. In the lower left corner there is an inscription indicating the video card model (Radeon R9 290X) and its membership in the proprietary LIGHTNING series. The top corner carries the MSI logo. On the opposite side are the logos of the overclocker series, and below is the logo of the AMD Radeon series.
The back of the packaging is much more informative and tells us about some of the key features and technologies that were used in the production of the MSI Radeon R9 290X LIGHTNING. Let’s look at them in more detail. The novelty is based on the exclusive Triple Force architecture, which includes:
- Triple Level Signals LED — a set of green, blue and red indicator lights that tell us about the low, medium and high degree of load on the video card.
- TriFrozr Thermal Design is an innovative thermal design with independent control of each fan, which should ensure better heat dissipation from the GPU and power elements than the traditional version.
- PureDigital PWM — a digital power subsystem that is more stable and reliable than an analog one. Its reaction time is significantly reduced, which helps achieve greater stability at maximum load. The main voltages can be monitored with the MSI Afterburner program, and the power phases can be controlled in software.
There is room for other benefits on the back of the packaging. Among them it is worth highlighting:
- Support for Twin BIOS technology, i.e. two BIOS chips: one with stock settings and one for extreme overclocking. The first is self-explanatory, while the second removes all voltage and TDP restrictions, helping to unlock the full frequency potential of the graphics accelerator under liquid-nitrogen cooling.
- Use of the Enhanced Power Design power subsystem with an increased number of phases, which will provide more stable operation during overclocking.
- Support for SuperPipes technology: the cooling system includes two 8 mm heat pipes for more efficient heat removal. There is also Dust Removal technology, which cleans the fans: for the first 30 seconds after the system starts, the fans spin in the opposite direction, expelling accumulated dust.
- Support for the improved Military Class 4 component base, whose elements meet the strict requirements of the MIL-STD-810G military standard. It includes: new CopperMOS power transistors, which guarantee stable operation under the highest loads; Dark Solid CAP black solid capacitors with an aluminum core, corrosion protection, low ESR, and a guaranteed 10-year service life; efficient Hi-C CAP stabilizing capacitors with a tantalum core; and new Super Ferrite Choke (SFC) coils with a polished surface for better heat dissipation and corrosion protection.
On the bottom of the package, in addition to the accelerator's specifications in several languages (including Russian), there is a list of system requirements. In particular, the minimum power supply rating is 500 W, and it must provide two 8-pin PCIe auxiliary power connectors.
Inside the package is a black cardboard box with a golden imprint «LIGHTNING», which has two compartments. As you can see, even the packaging looks very original.
Inside the box is an MSI Radeon R9 290X LIGHTNING graphics card wrapped in a transparent anti-static bag and securely fixed in polyethylene foam. By pulling the tape, we have access to the compartment with accessories.
In the box we found:
- CD with drivers and MSI Afterburner utility, which allows overclocking and monitoring of key components of the video card;
- quick user guide and a booklet describing proprietary technologies;
- Military Class 4 quality and stability certificate;
- two 6-pin to 8-pin power adapters and one adapter from two 4-pin Molex (PATA) connectors to a 6-pin PCIe;
- three adapters for connecting measuring devices in order to monitor key parameters;
- additional radiator for power keys.
This package can be called optimal for a video card of this class. Note the presence of PCIe adapters, which will make it easy to connect a graphics adapter, even if the power supply is not equipped with the necessary connectors. We should not forget about the presence of an additional VRM heatsink, which will come in handy during extreme test sessions.
The MSI Radeon R9 290X LIGHTNING is 315 mm long and 131 mm wide (measured in our test lab), which is larger than the reference card (290 x 112 mm). The card weighs more than 1.5 kg. Not every case can accommodate such a video accelerator and provide it with the necessary air circulation; however, most gaming system units are designed with such flagships in mind.
The video card attracts attention with its solid design and considerable size. The entire front part of the accelerator is covered by a shroud with three fans, which harmoniously combines black with yellow accents. Owners of system cases with a side window will undoubtedly be pleased with this design solution.
On the reverse side of the card is a black-painted aluminum plate, which not only complements the overall style but also protects the board's components from mechanical damage, keeps the PCB from bending under the weight of the massive cooling system, and takes part in heat dissipation. In addition, there is a row of 12 blue indicator LEDs that make it possible to monitor the operation of the main power phases.
A little to the left there are three more LEDs with multi-colored illumination. If red is on, there is insufficient power, blue is normal operation, and yellow is overclocking. Such illumination will help to quickly determine the operating mode of the accelerator and diagnose the cause of the malfunction.
There is a transparent MSI LIGHTNING series logo along the top edge of the graphics adapter, which has three lighting modes depending on the load. Green tells us about the general mode of operation, blue indicates the playback of demanding games, and red indicates the maximum load. The video card is designed in such a way that it occupies almost three expansion slots, but does not cover the last one completely.
Closer to the interface panel of the MSI Radeon R9 290X LIGHTNING is a switch between the BIOS: Original and LN2 chips. The first of them contains settings for normal operation, and the second allows you to unleash the full potential of new items when using extreme cooling systems. It should also be noted that there are no conventional AMD CrossFireX connectors, which are used to combine several accelerators for general calculations. Now this technology is implemented at the software level and a bunch of two to four video cards is supported.
The interface panel has wide slots for extra ventilation. But for displaying an image on the MSI Radeon R9 290X LIGHTNING graphics adapter, the following set of interfaces is provided:
- 2 x DVI-D;
- 1 x HDMI 1.4a;
- 1 x DisplayPort 1.2.
Supports a maximum digital signal resolution of up to 4096 x 2160.
This configuration of external interfaces is quite successful, as it provides comfortable connection of various digital monitors and TVs, but does not support analog displays. In addition, the organization of multiscreen systems is facilitated.
Three connectors for measuring key supply voltages are located on the rear end of the novelty. V-Check Points connectors are connected here with sockets for probes of a multimeter or other measuring device, which makes it as easy as possible to control the voltage on the GPU, video memory and PLL during overclocking.
Printed circuit board and element base
Having dismantled the cooling system, you can examine the printed circuit board in more detail. Before us is a unique design using a multilayer textolite, a reinforced power subsystem and a high-quality and durable element base.
The tested model is powered by a 10+3+2 phase scheme (GPU/MEM/PLL), using only high-quality Military Class 4 components: CopperMOS power transistors, which guarantee stable operation under the highest loads; Dark Solid CAP black solid capacitors with an aluminum core, corrosion protection, low ESR, and a guaranteed 10-year service life; economical tantalum-core Hi-C CAP capacitors; and efficient Super Ferrite Choke (SFC) coils with a polished surface for better heat dissipation and corrosion protection. Recall that in the reference design the power system follows the more modest 5+1+1 (GPU/MEM/PLL) scheme.
Power is managed by IR 3567B and IR 3570B PWM controllers from International Rectifier. They support the I2C protocol and a large number of energy-saving and protective technologies. The first controller implements active phase switching and improves the operation of AMD PowerTune technology. In addition, rows of capacitors are soldered on both sides of the board next to the power phases, providing improved smoothing of supply voltage ripple, which should have a positive effect on the card's overclocking potential.
In addition to the PCI Express 3.0 x16 slot, the MSI Radeon R9 290X LIGHTNING draws power from three additional PCIe connectors (two 8-pin and one 6-pin) located on the side of the graphics adapter. Together, these sources can deliver up to 450 W, which should ensure stable operation of all components. Recall that the reference card was equipped with only one 6-pin and one 8-pin connector. Nothing prevents connecting and disconnecting the power cables, despite the proximity of the cooling system.
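The 450 W figure follows from the standard PCIe power limits (75 W from the slot, 150 W per 8-pin connector, 75 W per 6-pin connector):

```python
PCIE_SLOT_W = 75   # PCI Express x16 slot limit
PIN8_W = 150       # per 8-pin PCIe connector
PIN6_W = 75        # per 6-pin PCIe connector

total_w = PCIE_SLOT_W + 2 * PIN8_W + PIN6_W
print(total_w)  # 450
```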
The core component of the MSI Radeon R9 290X LIGHTNING is the AMD Radeon R9 290X (Hawaii XT) GPU, manufactured on the 28 nm process and based on the AMD Graphics Core Next (GCN) microarchitecture. AMD Hawaii XT includes 2816 universal shader pipelines (stream processors), 64 ROPs, and 176 texture units. The die packs 6.2 billion transistors into an area of 438 mm². Note that it lacks a heat-spreading cover but has a protective frame that guards it against damage. As we have already noted, the graphics core frequency is higher than AMD's recommendation, at 1080 MHz, and thanks to AMD PowerTune it drops to 300 MHz at idle.
The GDDR5 memory of the MSI Radeon R9 290X LIGHTNING, with a total capacity of 4 GB, is assembled from 16 FCBGA chips of 256 MB each, manufactured by Samsung Semiconductor. The chips are marked K4G20325FD-FC03 and operate at an effective frequency of 5 GHz. The 512-bit memory bus provides a bandwidth of 320 GB/s.
MSI engineers replaced the reference turbine-type cooling system with a brand new TriFrozr Thermal Design cooler, which is part of the exclusive Triple Force architecture. The cooling system is noticeably different from the usual Twin Frozr Advanced, which has become a privilege of the MSI GAMING gaming series. To remove the TriFrozr, simply unscrew the four spring-loaded screws around the perimeter of the graphics chip.
The cooler is based on five nickel-plated copper heat pipes: three 6 mm and two 8 mm (Superpipe technology), which have improved heat transfer performance compared to conventional ones.
The radiator consists of two massive sections. In one part there is a large nickel-plated base with flattened and tightly nested heat pipes. The other section is pierced through with heat pipes.
All plates are fixed neatly and very firmly. The copper heatpipes are securely soldered to the aluminum heatsink and to the copper base. At the same time, they did not spare the solder and everything was done with high quality.
The active part of the cooling system comprises three fans mounted on the radiator frame: two outer fans (PLA09215B12H) measuring 92 mm (87 mm actual) and one central fan (PLA08010S12HH) with a diameter of 80 mm (74 mm actual). The fans are manufactured by Power Logic, with an operating voltage of 12 V and a current of 0.55 A. They use a 4-pin connection, so their speed is controlled by PWM. The blades are made using the PropellerBlade design, which provides 20% more airflow, and proprietary Dust Removal technology keeps them clear of dust.
Note that the innovative TriFrozr Thermal Design provides independent control of each fan, depending on the temperature of different components: GPU, PWM controller and power circuit elements.
Since the main cooling system does not come into contact with power switches and memory chips, a single black aluminum plate heatsink is provided for them. There are special slots in the locations of high elements.
Ten screws must be removed to remove the rear stiffening plate. As you can see, the front and back plates provide a secure fixation of the printed circuit board. This design protects it from mechanical damage and deflection under a massive cooling system. Contact with power elements is carried out through thermal pads.
Now let’s proceed directly to testing the cooling system of the MSI Radeon R9 290X LIGHTNING. According to the manufacturer, it is capable of dissipating up to 500 watts of thermal power.
With automatic fan speed control, at maximum load, the graphics core heated up to 74 degrees, and the fans, according to the monitoring readings, worked at 61% of their maximum power, which is approximately 2402 rpm. In this mode, the noise level, according to subjective sensations, is a little higher than average, but it is quite acceptable for continuous use.
After forcing the fans to maximum speed, the GPU temperature dropped to 72 degrees, with the fans, according to the monitoring readings, spinning up to 3649 rpm. The resulting noise level can subjectively be described as «high»; it is hard to call it comfortable for long-term work at the computer.
Thanks to energy-saving technologies, when there is no load, the frequencies of the graphics core and memory are automatically reduced. This made it possible to significantly reduce the power consumption and heat dissipation of the video card as a whole. In this mode, the GPU temperature did not exceed 41 degrees, and the noise level was very low.
Summing up, we would like to note that the branded TriFrozr cooler is a high-tech, high-quality cooling system with a moderate noise level under automatic control. Recall that the «reference» cooler under automatic control lets the GPU run at 94°C, so the TriFrozr's performance is extremely good and gives hope for solid overclocking potential.
AMD Radeon R9 290X video card overview
The reference version of the Radeon R9 290X outwardly resembles the R9 270X. The model is based on the Hawaii GPU, a chip with a die area of 438 mm² and 6.2 billion transistors, built on the GCN (Graphics Core Next) architecture using the 28 nm process. The enlarged die made it possible to fit 44 CU blocks. The card uses a 512-bit bus, raising memory throughput to 320 GB/s. Card length is 275 mm, and the cooling system occupies two slots.
| | AMD Radeon R9 270X | AMD Radeon R9 280X | AMD Radeon HD 7970 | AMD Radeon R9 290X | NVIDIA GeForce GTX 760 | NVIDIA GeForce GTX 770 | NVIDIA GeForce GTX 780 |
| Processor name | Curacao XT | Tahiti XTL | Tahiti XT | Hawaii | GK104 | GK104 | GK110 |
| Process | 28 nm | 28 nm | 28 nm | 28 nm | 28 nm | 28 nm | 28 nm |
| Processor frequency | 1050 MHz | 1000 MHz | 925 MHz | 800/1000 MHz | 980 (1033) MHz | 1046 (1085) MHz | 863 (900) MHz |
| Number of stream processors | 1280 | 2048 | 2048 | 2816 | 1152 | 1536 | 2304 |
| Number of texture units | 80 | 128 | 128 | 176 | 96 | 128 | 192 |
| Number of ROPs | 32 | 32 | 32 | 64 | 32 | 32 | 48 |
| Memory | GDDR5 2/4 GB | GDDR5 3 GB | GDDR5 3 GB | GDDR5 4 GB | GDDR5 2 GB | GDDR5 2 GB | GDDR5 3 GB |
| Memory bus | 256 bit | 384 bit | 384 bit | 512 bit | 256 bit | 256 bit | 384 bit |
| Memory frequency | 1400 (5600) MHz | 1500 (6000) MHz | 1375 (5500) MHz | 1250 (5000) MHz | 1502 (6008) MHz | 1753 (7012) MHz | 1502 (6008) MHz |
| Maximum power consumption | 180 W | 250 W | 250 W | N/A | 170 W | 230 W | 250 W |
| Video outputs | 2x DVI, 1x HDMI, 1x DisplayPort | 2x DVI, 1x HDMI, 1x DisplayPort | 1x DVI, 1x HDMI, 2x Mini-DisplayPort | 2x DVI, 1x HDMI, 1x DisplayPort | 2x DVI, 1x HDMI, 1x DisplayPort | 2x DVI, 1x HDMI, 1x DisplayPort | 2x DVI, 1x HDMI, 1x DisplayPort |
| Supported APIs | DirectX 11.2 | DirectX 11.2 | DirectX 11.1 | DirectX 11.2 | DirectX 11 | DirectX 11 | DirectX 11 |
The Radeon R9 290X is equipped with 4 GB of GDDR5 memory at an effective frequency of 5000 MHz. Unfortunately, this figure is no record: the GeForce GTX 770, which can be called a direct competitor of this model, runs its memory at an effective 7000 MHz.
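Memory frequency alone is misleading here: the 290X's much wider bus more than makes up for its lower clock, as a quick bandwidth comparison using the effective data rates from the table above shows:

```python
def bandwidth_gb_s(bus_bits, eff_mhz):
    """Peak memory bandwidth from bus width and effective data rate."""
    return bus_bits / 8 * eff_mhz / 1000

r9_290x = bandwidth_gb_s(512, 5000)   # 320.0 GB/s
gtx_770 = bandwidth_gb_s(256, 7012)   # ~224.4 GB/s

print(r9_290x, gtx_770)
```

Despite its slower memory chips, the Radeon R9 290X ends up with roughly 40% more peak bandwidth than the GeForce GTX 770.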
The Hawaii GPU's maximum temperature limit is 95 degrees. In the Catalyst driver settings, the OverDrive menu contains a tab that lets you independently select GPU frequency settings within the graphics adapter's power limit; you can also set the turbine speed and the maximum allowable core temperature. This menu is useful for users who want to overclock the video card on their own.
The Radeon R9 290X supports two BIOS profiles: quiet and uber. In uber mode the graphics card runs at its maximum frequency of 1000 MHz, while in quiet mode the GPU frequency can be reduced depending on how well the cooler copes.
Synthetic benchmarks show a significant win for the Radeon R9 290X over the GeForce GTX 780, by about 13%; in these tests AMD's R9 290X outpaces its direct competitor among top solutions.
The video card was also tested in games at Full HD resolution with high and maximum settings. In this battle the GeForce GTX 780 won: good results for the Radeon were recorded only in Battlefield 3 and Company of Heroes 2, while in all other tested games the Radeon R9 290X lagged slightly in fps.
Radeon R9 290X cards are offered by the following manufacturers:
SAPPHIRE R9 290 4GB GDDR5
Although many graphics card manufacturers equip their models with proprietary cooling, this video card received AMD's reference cooling system. It is based on the AMD Hawaii PRO GPU, manufactured on a 28 nm process and comprising 2560 universal shader processors, 160 texture units, and 64 ROPs. Depending on load and temperature, the core clock changes dynamically, reaching a maximum of 947 MHz. The SAPPHIRE R9 290 4GB is built on a black PCB 112 mm high and 290 mm long; together with the cooler, the card occupies two slots. In the center sits the graphics chip, fitted with a protective metal frame for safe installation and removal of the cooler. Under the plastic shroud is a massive radiator with a 78 mm turbine-type fan.
The set of new interface panel ports is as follows:
- 2 x Dual-link DVI-D;
- 1 x HDMI;
- 1 x DisplayPort.
The HDMI and DisplayPort connectors are capable of supporting 4K Ultra HD (4096 x 2160) resolution. Up to six displays can be connected simultaneously in AMD Eyefinity mode using the DisplayPort interface and special adapters, or up to four monitors or TVs when all interface panel connectors are used directly.
The power subsystem of the video adapter was created according to the reference seven-phase scheme: five phases for the graphics core, one for the memory and one for the PLL. To power the card, in addition to the PCI Express 3.0 x16 slot, there are two additional PCIe connectors: 6-pin and 8-pin.
MSI R9 290X Lightning
This model has impressive dimensions and mirrors the MSI N780 Lightning. The video card is 30 cm long, and together with its cooling system occupies three slots. Instead of the standard configuration of one six-pin and one eight-pin power connector, there are two 8-pin connectors and one 6-pin. The model received a standard set of image output interfaces: DisplayPort, HDMI, and two DVI.
The cooling system consists of three fans and four heat pipes of 6 mm and 8 mm diameter. The side fans have a diameter of 92 mm, the central one 80 mm. The impellers are made using Propeller Blade technology, with a bend at the edges of the blades that increases the airflow area. The radiator consists of two sections onto which the fan frames are screwed from above.
This video card has been overclocked from the recommended frequencies of 1000/5000 to 1080/5000 MHz. Taking into account the fact that 1100 MHz in the core was difficult even on the ASUS R9290X-DC2OC-4GD5, the factory frequency of 1080 MHz for such a hot processor is a rather high value.
The Lightning offers advanced temperature monitoring: besides core temperature, you can track the memory and VRM temperatures. Voltage control functions are also expanded.
In gaming tests this card outperformed the reference model by roughly 5-7% at maximum graphics settings, and overclocking added another 11%. At the same time, overclocked GeForce GTX 780 and GTX 780 Ti competitors proved faster in games such as Assassin's Creed, Crysis 3, Metro: Last Light and Splinter Cell: Blacklist.
ASUS Radeon R9 290X DirectCU II
This video card closely follows the earlier ASUS R9290-DC2OC-4GD5; the only difference is the absence of red decorative elements on the shroud. Together with its cooling system, the accelerator occupies two slots. Card length: 290 mm.
The reference cooler has been replaced by the proprietary DirectCU II system, whose main feature is the use of two 100 mm fans of different designs. The model offers the standard set of video outputs: HDMI, DisplayPort and two DVI. Cooling is handled by a large heatsink with five copper heatpipes in direct contact with the graphics chip's surface; the outer pipes do not touch the chip, while of the central ones two are 8 mm in diameter and one is 10 mm.
The PCB carries 10 power phases, six of them for the GPU, each channel implemented with DrMOS chips. The processor is surrounded by a protective metal frame. At stock the card runs at a core clock of 1000 MHz with an effective memory frequency of 5000 MHz.
The ASUS Radeon R9 290X is equipped with a switch for toggling between two modes, and also provides contact pads (ASUS ROG Connect) for monitoring supply voltages.
Tested on the following system:
- Processor: Intel Core i5-3570K, 3400 MHz
- Motherboard: ASRock Z77 Extreme11, s1155, E-ATX
- RAM: Corsair DDR3 2×8 GB PC-17066, 2133 MHz
- Hard drive: Toshiba 500 GB, 32 MB cache, 7200 rpm, 3.5″ SATA III
- Power supply: Chieftec Nitro II BPS-850C2, 850 W
- Operating system: Windows 7 64-bit, Service Pack 1
- Monitor: HP ZR30w
During gaming tests with the cooler in «quiet» mode, the card's temperature reached 93 degrees, and the GPU ran at 1050 MHz almost the entire time.
The graphics adapter was benchmarked in several games alongside other video cards, all running at stock frequencies. The AMD Radeon R9 290X/R9 290 and the ASUS Radeon R9 290X DirectCU II were set to maximum cooling performance (Uber mode).
The ASUS Radeon R9 290X DirectCU II performed well in the benchmarks: most modern games run without stutters or freezes at medium to maximum graphics settings, and the proprietary cooler does an excellent job, with negligible noise under load. Overall the card's characteristics are well balanced: the Radeon R9 290X beats the GeForce GTX 780 across the board while costing less, making it possible to build a good gaming system without extra expense.
AMD Radeon R9 290X
Reach Hawaii! New heights of speed and functionality await
- Part 1 — theory and architecture
- Part 2 — Practical acquaintance
- Features of the video card
- Stand configuration, list of test instruments
- The results of synthetic tests
- Part 3 — game test results (performance)
Just imagine: you are on the seashore, or rather the ocean... White sand, the sound of the surf, the foamy tide... Somewhere in the distance, huge castles of cumulus clouds. And not a soul around... Your gaze glides over the sand and notices the industrious crabs, which, in the short time while the waves recede at low tide, selflessly produce thousands of sand balls. Your eyes settle on two crabs a couple of meters apart. One, greenish in color, makes big balls but does not throw them into its burrow; it sets each aside, and after finishing a big one it quickly sculpts small balls and tosses them in one by one. The second, reddish in color, makes balls large and small, though its large ones are smaller than its greenish relative's. Tired out, it made one very large ball and, with the last of its strength, pushed it into its hole. Meanwhile the greenish crab simply took an equally large ball it had made earlier and sent it into its burrow with a flick of the claw.
Watching this unpretentious scene, I suddenly caught myself thinking that it is a perfect analogy for the situation with 3D accelerators. One has only to imagine that the burrows are the market for those same video cards. Which crab is greenish and which is reddish, the reader can guess for himself. And that is how it really is...
Today, in fact, we will talk about one «very large» ball of the «reddish crab». The Hawaii code name and the beach analogy that came to mind are no coincidence... And the «reddish crab» did not toil in vain: a very powerful and solid product has been released, smashing its rival, the GeForce GTX 780, to smithereens. That is what this article is about.
Part 1: Theory and Architecture
We have already published a detailed theoretical article about AMD's new line of graphics solutions, consisting of the Radeon R7 and R9 families. At that time, however, the only officially announced video cards turned out to be renamed solutions of the previous generation: although attractively priced, they are technically the same well-known cards of the Radeon HD 7000 series, and almost all of the models presented then have twins from the recent past.
And now AMD has finally announced the senior solution of its line, the Radeon R9 290X, notable for its new graphics chip, codenamed Hawaii. It was presented to the trade press on Oahu, one of the Hawaiian Islands, famed for their wonderful climate and luxurious conditions for relaxing on the shores of the Pacific Ocean.
But AMD's employees could rest only after announcing the new solutions; before the release of the new top-end GPU they had to work on it in earnest. After all, the power consumption of the previous top-end Tahiti chip (for some reason we were not taken to Tahiti, by the way), which formed the basis of the Radeon HD 7970 video card, was already close to the limit achievable with the usual set of additional power connectors: one 6-pin and one 8-pin.
The energy efficiency of Nvidia's current-generation top chips is clearly higher than Tahiti's, which let AMD's competitor release not only the GeForce GTX 780 but also the GTX Titan, both outperforming the Radeon HD 7970 GHz. AMD needed to respond, if not to the Titan, which is overpriced and aimed at the rare enthusiast, then at least to the GTX 780, which has become quite popular among well-off PC gamers.
In Hawaii, therefore, AMD's developers had to increase the number of execution units significantly, while there was almost no headroom left for power consumption. More precisely, it had to be kept within certain limits: allowed to grow, but far less than performance did. In other words, the main task in developing the new GPU was to raise the very energy efficiency on which everything now depends. Did they pull it off? Finding out is the main goal of this article.
Recall which models make up the new AMD Radeon families. The lineup now contains several series: R9 and R7 (a budget R5 series is also expected later, but it is of little interest to gamers). The new line includes the following models, covering most market segments:
So, the R7 250 and R7 260X are aimed at the $90-$140 price range (US market prices), the R9 270X sells for $200 and the R9 280X for $300. As for the flagship of the line, the R9 290X, its recommended US price is only $549, a fairly strong move by AMD given that the GeForce GTX 780 costs more.
Let's see how AMD itself positions its new top-level solution. For this the company traditionally takes the Fire Strike benchmark from the latest version of Futuremark 3DMark, in two variants: Performance and Extreme. Because this benchmark suits modern AMD graphics cards well, the score the Radeon R9 290X achieves in it (we will discuss its two operating modes later) lets it significantly outperform its main competitor, the GeForce GTX 780.
See detailed information about AMD's earlier solutions:
- [08.10.13] AMD Radeon R7 and R9 — an updated line of video cards: new families so far without their flagship
- [19.04.13] AMD Radeon HD 7790: a senior representative of the middle class of accelerators
- [22.12.11] AMD Radeon HD 7970: a new single-processor 3D graphics leader based on a new GPU codenamed «Tahiti»
Radeon R9 290X graphics accelerator
- Chip code name: «Hawaii»
- Manufacturing technology: 28 nm
- 6.2 billion transistors (Tahiti in the Radeon HD 7970 has 4.3 billion)
- Unified architecture with an array of common processors for streaming processing of multiple types of data: vertices, pixels, etc.
- Hardware support for DirectX 11.2, including Shader Model 5.0
- 4 geometry processors
- 512-bit memory bus: eight 64-bit wide controllers, supporting GDDR5 memory
- Core clock up to 1000 MHz (dynamic)
- 44 GCN Compute Units, including 176 SIMD cores, consisting of a total of 2816 floating point ALUs (integer and float formats supported, with FP32 and FP64 precision)
- 176 texture units, with support for trilinear and anisotropic filtering for all texture formats
- 64 ROPs with support for full-screen anti-aliasing modes and programmable sampling of more than 16 samples per pixel, including with FP16 or FP32 framebuffer formats. Peak rate of up to 64 samples per clock, or 256 samples per clock in Z-only (no color) mode
- Integrated support for up to six monitors connected via DVI, HDMI and DisplayPort
Radeon R9 290X graphics card specifications
- Core frequency: up to 1000 MHz
- Number of universal processors: 2816
- Number of ROP units: 64
- Effective memory frequency: 5000 MHz (4×1250 MHz)
- Memory type: GDDR5
- Memory capacity: 4 gigabytes
- Memory bandwidth: 320 gigabytes per second.
- Compute performance (FP32) 5.6 teraflops
- Theoretical maximum fill rate: up to 64 gigapixels per second.
- Theoretical texture sampling rate: up to 176 gigatexels per second.
- PCI Express 3.0 bus
- Two Dual Link DVI, one HDMI, one DisplayPort
- Power consumption up to 275 W
- One 8-pin and one 6-pin power connector;
- Dual slot design
- US MSRP: $549 (19,990 rubles for Russia)
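The spec-sheet figures above can be cross-checked from the raw unit counts. A minimal sketch of the arithmetic, assuming peak theoretical rates at the full 1000 MHz boost clock and one operation per unit per clock (two FLOPs per ALU, since each can issue a fused multiply-add):

```python
# Sanity-check the spec-sheet numbers from the raw unit counts.
# All figures are peak theoretical rates at the 1000 MHz boost clock.

def hawaii_peak_rates(core_mhz=1000, shaders=2816, tmus=176, rops=64,
                      bus_bits=512, mem_effective_mhz=5000):
    gflops = shaders * 2 * core_mhz / 1000           # 2 FLOPs per ALU per clock (FMA)
    gtexels = tmus * core_mhz / 1000                 # one texel per TMU per clock
    gpixels = rops * core_mhz / 1000                 # one pixel per ROP per clock
    bandwidth_gbs = bus_bits / 8 * mem_effective_mhz / 1000  # bits -> bytes per transfer
    return gflops, gtexels, gpixels, bandwidth_gbs

gflops, gtexels, gpixels, bw = hawaii_peak_rates()
print(gflops)   # 5632.0 GFLOPS, i.e. the quoted 5.6 TFLOPS
print(gtexels)  # 176.0 Gtexels/s
print(gpixels)  # 64.0 Gpixels/s
print(bw)       # 320.0 GB/s
```

All four results match the official specifications line for line.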
The novelty's name makes it clear that AMD's video card naming system has changed, and in our opinion the new one is not ideal. Its introduction is partly justified by the fact that a similar scheme has long been used in the company's own APUs (the A8 and A10 families, for example) and by other manufacturers (Intel's Core i5 and i7 follow a similar naming system), but for video cards the previous system was clearly more logical and understandable. One wonders what made AMD change it right now, when it still had at least a «Radeon HD 9000» line in reserve and could simply have replaced the «HD» prefix with something else.
The split between the R7 and R9 families is not entirely clear to us either: why does the 260X still belong to R7 while the 270X is already R9? The Radeon R9 290X reviewed here is somewhat more logical: it belongs to the top R9 family and carries the highest number in the series, 290. But why start the leapfrog of «X» suffixes? Why not make do with numbers, as in the previous family? If three digits were not enough and numbers like 285 and 295 were unappealing, four-digit names such as R9 2950 and R9 2970 were an option. But then the system would hardly differ from the old one, and the marketers need to justify their jobs somehow. Well, the name is a minor matter as long as the product is good and justifies its price.
And there are no problems there: the recommended price of the Radeon R9 290X is lower than that of the competitor's top solution in the same segment. The release of the Radeon R9 290X clearly targets the Nvidia GeForce GTX 780 based on the GK110 chip, the competitor's top video card (the GeForce GTX Titan is not counted, as that model has always been a separate, purely image-oriented solution), which so far carries a slightly higher recommended price. This is unlikely to last long, however: an even faster GeForce version may appear, and prices on Nvidia's current top models may be cut.
The new AMD model we are reviewing today carries four gigabytes of GDDR5 memory. With Hawaii's 512-bit memory bus, 2 GB would theoretically have been possible, but that is already too little for a top-end solution, especially since the Radeon HD 7970 featured 3 GB and modern titles like Battlefield 4 already recommend at least 3 GB of VRAM. Four gigabytes is definitely enough for any modern game at the highest settings and resolutions.
Power consumption is a trickier question. Although on paper the new model's consumption has not grown much compared with the Radeon HD 7970 GHz, there are nuances. Like some previous top solutions, the AMD Radeon R9 290X has a special switch that selects one of two BIOS firmwares; it sits on the edge of the card next to the mounting plate with the video outputs. Naturally, the PC must be restarted after switching for the change to take effect. Every Radeon R9 290X is factory-flashed with two BIOS versions, and the two modes differ noticeably in power consumption.
«Quiet Mode»: switch position one, closest to the mounting plate. This mode is intended for players concerned about system noise, for example those playing with headphones in a room that must stay quiet, on a PC with quiet cooling.
«Uber Mode» (super, or normal, mode): switch position two, farthest from the mounting plate. This mode is designed for maximum performance in games, benchmarks and CrossFire systems. As the names suggest, quiet mode trades slightly lower performance for less cooler noise, while super mode delivers the maximum possible performance at the cost of higher power consumption and fan noise. It is good that users have the choice and are free to use either mode as they see fit.
The new Hawaii graphics chip at the heart of the AMD Radeon R9 290X graphics card is based on the already known Graphics Core Next (GCN) architecture, which has been slightly modified in terms of computing capabilities and to fully support all the features of DirectX 11.2, as was previously done in the Bonaire chip (Radeon HD 7790), which also became the basis for the Radeon R7 260X. The architectural changes in Bonaire and Hawaii relate to improvements in computing capabilities (support for more concurrently executing threads) and a new version of AMD PowerTune technology, which we will talk about more below.
New DirectX 11.2 features include tiled resources, which rely on the Hawaii GPU's virtual-memory hardware for partially resident textures (PRT). With virtual video memory it is easy to provide efficient hardware support for algorithms that let applications use huge amounts of texture data and stream it into video memory. PRT allows more efficient use of video memory in such tasks, and similar techniques are already used in some game engines.
We have already described PRT in the material devoted to the release of the Radeon HD 7970, but in Bonaire and Hawaii these possibilities have been expanded. These video chips support all additional features that were added in DirectX 11.2, mainly related to level of detail (LOD) and texture filtering algorithms.
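As a rough illustration of the idea behind partially resident textures, the sketch below tracks tile residency in a plain dictionary. The names (`TileTable`, `stream_in`) are hypothetical and do not correspond to the actual Direct3D 11.2 tiled-resources API; the point is only that a sample can "miss" a non-resident tile and trigger streaming:

```python
# Conceptual sketch of partially resident texturing: only the tiles the
# renderer actually samples are backed by video memory. All names here
# are illustrative, not a real graphics API.

class TileTable:
    def __init__(self, tiles_x, tiles_y):
        self.resident = {}            # (tx, ty) -> backing page address
        self.shape = (tiles_x, tiles_y)

    def sample(self, tx, ty):
        """Return the backing page, or None to signal a residency miss
        (a real engine would fall back to a lower mip level and stream
        the missing tile in for a later frame)."""
        return self.resident.get((tx, ty))

    def stream_in(self, tx, ty, page):
        """Back one tile with video memory after it has been requested."""
        self.resident[(tx, ty)] = page

table = TileTable(64, 64)             # a 64x64-tile texture, mostly non-resident
assert table.sample(3, 7) is None     # miss: tile not yet streamed in
table.stream_in(3, 7, page=0x1000)
assert table.sample(3, 7) == 0x1000   # hit after streaming
```

The hardware analogue is that the GPU's virtual-memory page tables resolve the lookup per access, so the application can bind a huge sparse texture while committing only a fraction of it.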
Even though the GCN capabilities have been expanded, AMD’s main focus in designing the new top-end GPU was to improve the chip’s energy efficiency, as Tahiti was already consuming too much power and Hawaii included more compute units. Let’s see what AMD engineers managed to do to put a competitive product on the market:
The new graphics processor is logically divided into four parts (Shader Engine), each of which contains 11 enlarged computing units (Compute Unit), including texture units, one geometric processor and rasterizer, as well as several ROP units. In other words, the block diagram of the most modern AMD chip has become even more similar to the block diagram of Nvidia chips, which also have a similar organization.
In total, the Hawaii graphics chip includes: 44 Compute Units containing 2816 stream processors, 64 ROPs and 176 TMUs. The GPU in question has a 512-bit memory bus consisting of eight 64-bit controllers, as well as 1 MB of L2 cache. It is produced on the same 28 nm process technology as Tahiti, but already contains 6.2 billion transistors (Tahiti has 4.3 billion).
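The headline unit counts follow directly from this hierarchy, as a quick arithmetic check shows (4 Shader Engines of 11 CUs each, with 64 ALUs and 4 texture units per CU):

```python
# Derive Hawaii's totals from its block hierarchy.
engines, cus_per_engine = 4, 11
alus_per_cu, tmus_per_cu = 64, 4      # 4 SIMD units x 16 lanes per CU

cus = engines * cus_per_engine
print(cus)                 # 44 Compute Units
print(cus * alus_per_cu)   # 2816 stream processors
print(cus * tmus_per_cu)   # 176 texture units
```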
Consider the block diagram of the shader engine, the large-scale building block of the Hawaii GPU; the chip contains four such engines:
Each Shader Engine includes one geometry processor and one rasterizer, which are capable of processing one geometry primitive per clock. It looks like Hawaii’s geometric performance has not only improved, but should be well balanced compared to AMD’s previous GPUs.
A GCN-architecture shader engine can contain up to four enlarged Render Back-end (RB) blocks, each including four ROPs. The number of Compute Units per shader engine can also vary; here there are 11, although the instruction and constant caches are shared by groups of four Compute Units. It would therefore have been more logical to include 12 rather than 11 CUs per Shader Engine, but apparently that many no longer fit within Hawaii's power consumption limits.
A GCN compute unit includes various functional blocks: 16 texture fetch modules, four texture filtering modules, a branch prediction unit, a scheduler, computational units (four vector and one scalar), 16 KB of L1 cache, vector and scalar register files, and 64 KB of shared memory.
Since there are four shader engines in the Hawaii GPU, it has four geometry processing units and rasterization engines in total. Accordingly, AMD’s new top-end GPU can process up to four geometric primitives per clock. In addition, geometry data buffering has been improved in Hawaii and caches for geometric primitive parameters have been increased. All together, this provides a significant increase in performance with large volumes of calculations in geometric shaders and the active use of tessellation.
Some changes have also been made to the computing capabilities of this processor, graphics-oriented though it is. The chip includes two DMA engines that make full use of the PCI Express 3.0 bus, with a claimed bidirectional bandwidth of 16 GB/s. Relatively new is asynchronous compute, carried out by eight (in Hawaii's case) Asynchronous Compute Engines (ACE).
ACE blocks work in parallel with the GPU and each of them is capable of managing eight instruction streams. Such an organization provides independent scheduling and operation in a multitasking environment, access to data in global memory and L2 cache, as well as fast context switching. This is especially important in computing tasks, as well as in gaming applications when using the GPU for both graphics and general computing. Also, this innovation could theoretically be an advantage when using low-level access to GPU capabilities using APIs such as Mantle.
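A toy model of this queue organization: eight ACEs, each managing eight independent command streams, each able to dispatch ready work without serializing behind the others. This is purely a scheduling sketch under assumed names, not driver code:

```python
# Illustrative model of 8 ACEs x 8 command queues (64 streams total).
# Each ACE independently pops one ready kernel from each of its queues
# per pass, mimicking fast context switching between streams.
from collections import deque

ACES, QUEUES_PER_ACE = 8, 8
queues = [[deque() for _ in range(QUEUES_PER_ACE)] for _ in range(ACES)]

def submit(ace, queue, kernel):
    """Enqueue a compute kernel on one of an ACE's command streams."""
    queues[ace][queue].append(kernel)

def dispatch_round():
    """One scheduling pass across all ACEs; returns launched kernels."""
    launched = []
    for ace in queues:
        for q in ace:
            if q:
                launched.append(q.popleft())
    return launched

submit(0, 0, "physics")
submit(0, 1, "audio_fx")
submit(7, 3, "postprocess")
print(dispatch_round())   # ['physics', 'audio_fx', 'postprocess']
```

The key property the sketch captures is independence: work on one queue never has to wait for an unrelated queue, which is what makes mixing graphics and general compute on the same GPU practical.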
Let’s go back to Hawaii’s features that apply to graphical computing. Due to the increase in resolution requirements with the expected spread of UltraHD monitors, it becomes necessary to increase the computing power of raster operations units — ROP. The Hawaii chip includes 16 Render Back End (RBE) blocks, which is twice as many as Tahiti. Sixteen RBEs contain 64 ROPs, which are capable of processing up to 64 pixels per clock, and this can be very useful in some cases.
As for the memory subsystem, Hawaii has 1 MB of L2 cache, divided into 16 partitions of 64 KB. AMD claims a 33% increase in cache capacity and a one-third increase in internal throughput; total L2/L1 cache bandwidth is stated as 1 TB/s.
Memory is accessed via eight 64-bit controllers, which together make up a 512-bit bus. The memory chips on the Radeon R9 290X run at an effective 5.0 GHz for a total bandwidth of 320 GB/s, over 20% higher than the Radeon HD 7970. At the same time, the die area occupied by the memory controllers was reduced by 20% compared with the 384-bit controller in Tahiti.
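The bandwidth claim is easy to verify: bus width in bits divided by 8 gives bytes per transfer, times the effective data rate. A small sketch, assuming the launch memory speeds of 5.0 GT/s for Hawaii and 5.5 GT/s for the original HD 7970:

```python
# Memory bandwidth: bus width (bits) / 8 x effective data rate (GT/s).
def mem_bw_gbs(bus_bits, effective_gtps):
    return bus_bits / 8 * effective_gtps

hawaii = mem_bw_gbs(512, 5.0)    # Radeon R9 290X
tahiti = mem_bw_gbs(384, 5.5)    # Radeon HD 7970 (launch spec)
print(hawaii)                            # 320.0 GB/s
print(tahiti)                            # 264.0 GB/s
print(f"{hawaii / tahiti - 1:.0%}")      # 21%, i.e. "over 20% higher"
```

Note that the wider, slower-clocked bus wins on bandwidth while using cheaper memory chips, which is the trade-off the text describes.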
Low-level graphics API Mantle
We already wrote about this API in the article on the new family of AMD video cards; here we will only briefly recap, adding a few thoughts. The introduction of a new graphics API, dubbed Mantle, was quite unexpected: AMD stepped into the sphere of interest of Microsoft and its DirectX and decided on a certain... let's say, confrontation. Of course, the move was prompted by the fact that AMD supplies the GPUs for the next generation of game consoles from Sony, Microsoft and Nintendo, and it wanted to extract a tangible advantage from this.
AMD decided to release this API largely under the influence of DICE and EA, whose Frostbite engine underpins Battlefield and several other franchises. The engineers at DICE behind Frostbite consider the PC a great gaming platform and a staple for the studio. They have long worked with AMD on developing and implementing new technologies in Frostbite 3, the company's new engine, which underlies more than 15 games across series such as Battlefield, Need for Speed, Star Wars, Mass Effect, Command & Conquer, Dragon Age and Mirror's Edge.
No wonder AMD jumped at the opportunity for deep Frostbite optimization on its GPUs. The engine is very modern and supports all the important DirectX 11 (even 11.1) features, but the developers wanted to exploit PC hardware fully, move beyond the limitations of DirectX and OpenGL, and use the CPU and GPU more efficiently, since some hardware functionality exceeding the DirectX and OpenGL specifications goes unused by developers.
The Mantle graphics API exposes the full hardware capabilities of AMD video cards, going beyond current software limits with a thinner software layer between the game engine and GPU resources, similar to what is done on game consoles. And since all the upcoming «desktop»-format consoles (above all the PlayStation 4 and Xbox One) are built on AMD graphics with the same GCN architecture familiar from the PC, AMD and game developers gain an interesting opportunity: a dedicated graphics API that lets PC game engines be programmed in the same style as on consoles, with minimal API overhead on engine code.
According to preliminary data, Mantle offers up to a ninefold advantage in draw-call execution time over other graphics APIs, reducing the load on the CPU. Such a multiple is achievable only under artificial conditions, but some advantage should remain under typical 3D-game conditions.
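To see why per-call overhead matters, here is a toy model of CPU frame cost: total time scales with the number of draw calls times the per-call cost, so cutting per-call cost ninefold (the figure quoted above for synthetic tests) frees most of that budget. The 2 µs per-call cost and 10,000-call scene are illustrative assumptions, not measured figures:

```python
# Toy model: CPU time spent issuing draws per frame.
def cpu_frame_ms(draw_calls, per_call_us):
    return draw_calls * per_call_us / 1000.0

calls = 10_000                           # a heavy scene (assumed)
d3d_ms = cpu_frame_ms(calls, 2.0)        # assumed 2 us/call overhead
mantle_ms = cpu_frame_ms(calls, 2.0 / 9) # the quoted 9x reduction
print(d3d_ms)               # 20.0 ms of CPU time just issuing draws
print(round(mantle_ms, 2))  # 2.22 ms under the same workload
```

Even if real games see a much smaller multiplier, any reduction directly relieves CPU-limited scenes, which is the whole pitch of a low-level API.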
This low-level, high-performance graphics API was developed at AMD with significant input from leading game developers, above all DICE, and the soon-to-be-released Battlefield 4 is the first project to use Mantle. Other developers will be able to adopt the API later, though exactly when is not yet known.
The release version of Battlefield 4 will only support DirectX 11.1, and support for the Mantle API is scheduled for December with a free update that is further optimized for AMD Radeon graphics cards. On PC systems with GCN graphics cards, the Frostbite 3 engine will use Mantle, which will reduce the load on the CPU by parallelizing work on eight processing cores, and will introduce special low-level performance optimizations with full access to the GCN hardware capabilities.
The public is left with more questions than answers about Mantle. For example, it is not very clear how the low-level Mantle driver, with its direct access to GPU resources, will operate under Windows alongside DirectX, which normally manages those resources itself, or how the resources will be shared between a Mantle-based game and the operating system. All questions are expected to be answered in mid-November at the AMD Developer Summit, which should reveal the technical details of the Mantle implementation and a list of partners, and even show demos.
But we already know something. Now that some time has passed since AMD announced the initiative with their Mantle graphics API, something has already become clear. Although there were initial expectations among enthusiasts that future generation consoles would also support Mantle, this will not be a reality simply because it is not necessary and not beneficial for console developers.
So, Microsoft has its own graphics API and this company has already confirmed that their Xbox One will use exclusively DirectX 11.x, close in capabilities to DirectX 11.2, also supported by modern AMD video chips. Other graphics APIs, such as OpenGL and Mantle, will simply not be available on Xbox One — and this is the official position of Microsoft. Probably the same applies to the Sony PlayStation 4, although representatives of this company have not yet officially announced anything about this.
In addition, according to some reports, Mantle will not be available to game developers other than DICE for several more months. Putting all the available information together, the prospects for Mantle currently look vague indeed. AMD, for its part, claims that Mantle was never intended for consoles, that it is merely a low-level API «similar» to console ones. How it is similar, if the APIs are still different, is not very clear: perhaps only in its «low» level and closeness to the hardware, which not every developer needs and which requires additional development time.
As a result, in the absence of Mantle support on consoles, this graphics API can only be used on the PC, which reduces interest in it. Many even remember such graphical APIs of the distant past as Glide. And although the difference with Mantle is great, there is a good chance that without support on consoles and on two-thirds of dedicated GPUs (approximately this share has been occupied by corresponding solutions from Nvidia for several years), this API will not become really popular. It is likely to be used by individual game developers who will show interest in low-level GPU programming and receive appropriate support from AMD.
The main question is how close Mantle is to the low-level console APIs and whether it actually reduces the cost of development or porting. It also remains unclear how great the real advantage of moving to low-level GPU programming is, and how many features of graphics chips are not disclosed in the existing popular APIs that can be used with Mantle.
TrueAudio sound processing technology
We have already discussed this technology in as much detail as possible in the theoretical article on AMD's new lineup. With the release of the Radeon R7 and R9 series, the company introduced AMD TrueAudio, a programmable audio engine supported only on the AMD Radeon R7 260X and R9 290(X). The Bonaire and Hawaii chips are the most recent in terms of technology: they use the GCN 1.1 architecture and bring other innovations, including TrueAudio support.
TrueAudio is a programmable audio engine embedded in AMD's GPUs, first in the Bonaire chip underlying the Radeon R7 260X and now in Hawaii. It guarantees real-time processing of audio tasks on a system with a compatible GPU, regardless of the installed CPU. To this end, several Tensilica HiFi EP Audio DSP cores, together with their supporting logic, are integrated into the Hawaii and Bonaire chips:
TrueAudio's capabilities are accessed through popular audio-processing libraries, whose developers can tap the resources of the built-in audio engine via the special AMD TrueAudio API. For new technologies like this, partnerships with developers of audio engines and sound libraries are the crucial issue. AMD works closely with many companies known for their work in this area: game developers (Eidos Interactive, Creative Assembly, Xaviant, Airtight Games), audio middleware developers (FMOD, Audiokinetic), audio algorithm developers (GenAudio, McDSP), and others.
TrueAudio is interesting given the stagnation in PC audio-processing hardware, though its immediate relevance is questionable. We doubt game developers will rush to integrate the technology into their projects, given its extremely limited hardware base (at present TrueAudio is supported on only three video cards: the Radeon HD 7790, R7 260X and R9 290X), without additional motivation from AMD. Still, we welcome any innovation in complex audio processing and hope the technology spreads.
Improved PowerTune power management and overclocking settings
Among the improvements in the AMD Radeon R9 290X is the PowerTune power-management technology. We already described these improvements in the Radeon HD 7790 review: for more efficient power management, the latest AMD graphics chips support multiple states with different frequencies and voltages, allowing higher clock speeds than before. The GPU always runs at the optimal voltage and frequency for the current load and power draw, which is what drives the switching between states.
The Hawaii chip integrates a second-generation serial VID interface, SVI2, present in all recent GPUs and APUs, including Hawaii and Bonaire as well as all Socket FM2 APUs. The regulator's step is 6.25 mV, with 255 possible values between 0.00 V and 1.55 V, and it can manage multiple power rails.
With the new algorithm, known since Bonaire, PowerTune no longer has to drop the frequency abruptly when the consumption limit is exceeded, and voltage falls along with frequency. Transitions between states are very fast: to avoid exceeding the set limit even briefly, the GPU switches PowerTune states 100 times per second. As a result Hawaii simply has no single operating frequency, only an average over a period of time. This approach squeezes all the juice out of the hardware, improves energy efficiency and reduces cooler noise.
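Since only the time-weighted average clock is meaningful under such a scheme, it can be modeled in a few lines; the state residencies below are made up for illustration of a thermally limited workload:

```python
# With ~100 DVFS state switches per second, the reported "clock" is a
# time-weighted average over the sampling window.
def average_clock(states):
    """states: list of (mhz, fraction_of_time) pairs summing to 1.0."""
    return sum(mhz * frac for mhz, frac in states)

# e.g. 75% of samples at the 1000 MHz ceiling, 25% throttled to 900 MHz
print(average_clock([(1000, 0.75), (900, 0.25)]))   # 975.0 MHz
```

This is why reviews report figures like "ran at 1050 MHz almost the entire time" rather than a single fixed frequency.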
Accordingly, new features have appeared in the Catalyst Control Center driver settings in the OverDrive tab — it has been completely redesigned in order to get the most out of the innovations in PowerTune for R9 290 series solutions.
The first thing you notice is that the power limit (Power Limit) and GPU clock are now linked together in a single power-and-heat diagram. Because consumption and performance are directly related in Hawaii's new PowerTune algorithm, this interface makes overclocking more intuitive and straightforward.
It also reflects the fully dynamic GPU clock control introduced with the R9 290 series. Overclocking is now expressed by raising the GPU Clock value by a percentage; specifying an absolute frequency, as on previous solutions, is no longer possible.
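The new percentage-based control is easy to translate back into megahertz; a quick sketch (function name and the example values are illustrative):

```python
# OverDrive now takes a percentage offset rather than an absolute clock.
# The resulting target frequency is simply the base clock scaled up.
def offset_clock(base_mhz: float, percent: float) -> float:
    return round(base_mhz * (1 + percent / 100), 1)

print(offset_clock(1000, 10))  # 1100.0 MHz from a +10% slider setting
```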
The second major change in the new OverDrive interface is fan speed control, which has also been completely redesigned. In previous generations the OverDrive tab only let the user set a fixed fan speed that was maintained constantly. The new setting is called "Maximum Fan Speed" and sets an upper limit; the actual fan speed varies with GPU load and temperature rather than staying fixed.
By default, the fan speed on the Radeon R9 290X depends on the settings of the loaded BIOS firmware; manually changing the maximum fan speed lets you pick any other value. When overclocking, it is worth raising not only the power and frequency settings but also the fan speed limit, otherwise peak performance will be capped by GPU temperature and cooling.
AMD CrossFire Technology Changes
One of the most interesting hardware innovations in the AMD Radeon R9 290 series is support for AMD CrossFire without the need to connect the video cards with special bridges. Instead of dedicated communication lines, the GPUs talk to each other over the PCI Express bus using a hardware DMA engine. Performance and image quality are exactly the same as with bridges. This solution is much more convenient, and AMD claims it has encountered no compatibility problems across different motherboards.
Importantly, for maximum performance in AMD CrossFire mode all Radeon R9 290X cards should have their BIOS switch set to «Uber Mode», and every card must be cooled well; otherwise the new PowerTune technology will lower GPU clock speeds and performance will drop.
CrossFire technology scales excellently in multi-chip systems with the R9 290X, at least in terms of average frame rate (CrossFire still has issues with frame-time smoothness, which we examined earlier). The following chart shows the comparative performance of a single AMD Radeon R9 290X and two such cards rendering together via AMD CrossFire.
All the games shown in the diagram provide an excellent increase in average frame rate, up to a twofold increase when a second video card is connected. In the worst case, these applications show 80% CrossFire efficiency, and the average is 87%.
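The scaling figures above reduce to a simple ratio: the actual multi-GPU frame rate divided by the ideal N-times-single rate. The frame-rate values in this sketch are hypothetical, chosen only to reproduce the quoted 87% average:

```python
# CrossFire scaling efficiency: measured multi-GPU fps versus the ideal
# of N identical cards each contributing a full single-card fps.
def crossfire_efficiency(single_fps: float, multi_fps: float, n_gpus: int) -> float:
    return round(multi_fps / (single_fps * n_gpus), 3)

# Hypothetical example: 60 fps on one card, 104.4 fps on two.
print(crossfire_efficiency(60.0, 104.4, 2))  # 0.87 -> the quoted average
```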
Adding a third AMD Radeon R9 290X to a CrossFire system reduces scaling efficiency further, but three cards still deliver a 2.6x performance boost over a single board, which is also quite good.
AMD Eyefinity Technology and UltraHD Resolution Support
AMD is one of the leaders in display output: it was among the first to introduce DVI Dual Link support for 2560×1600 monitors, DisplayPort support, output to three or more monitors from a single GPU (Eyefinity technology), HDMI output at 4K resolution, and so on.
4K resolution, also known as Ultra HD, is 3840×2160 pixels, exactly four times Full HD (1920×1080), and it matters a great deal to the industry. The problem remains the low availability of Ultra HD monitors and TVs: 4K TVs are currently sold only as very large, expensive models, and matching monitors are extremely rare and just as costly. But the situation should change, with analysts predicting a bright future for Ultra HD devices.
AMD provides two options for connecting Ultra HD displays: TVs that support only 30 Hz and below at 3840×2160, connected via HDMI or DisplayPort, and monitors whose image is driven as two halves of 1920×2160 each at 60 Hz. The second type of monitor is supported via the DisplayPort 1.2 MST hubs that have recently gone on sale.
To support these split monitors, the new VESA DisplayID 1.3 standard has been introduced, which describes additional display capabilities. Under the new standard the image for such monitors is «glued» together automatically, provided both the monitor and the driver support it. That is planned for the future; for now these tiled 4K monitors require manual configuration, although AMD says the latest Catalyst drivers already auto-configure the most popular models.
In addition, AMD Radeon graphics cards will also support a third type of Ultra HD display, driven as a single stream at ultra-high resolution and a 60 Hz refresh rate. The Radeon R9 290X delivers enough 3D performance for multi-monitor configurations, which is essential at the highest game settings and rendering resolutions on such systems. The AMD Radeon R9 290X also has an advantage over the Nvidia GeForce GTX 780 in video memory capacity, which matters at resolutions like 5760×1080 and 4K.
The AMD Radeon R9 290X graphics card supports UltraHD resolutions over both HDMI 1.4b (with a low refresh rate not exceeding 30 Hz) and DisplayPort 1.2. Moreover, the performance of the new solution makes it possible to play at maximum settings in this resolution, getting an acceptable frame rate in almost any game.
The ability to use multiple monitors is also very important for gaming enthusiasts. Eyefinity technology has been updated in the Radeon R9 series, and the new Radeon R9 290X supports configurations of up to six displays. When running with AMD Eyefinity, the AMD Radeon R9 series supports up to three HDMI/DVI displays.
This feature requires a set of three identical displays with identical timings; output is configured at system startup, and hot-plugging is not supported for a third HDMI/DVI connection. Connecting more than three displays to the AMD Radeon R9 290X requires either DisplayPort-enabled monitors or certified DisplayPort adapters.
Theoretical Performance Evaluation
First, let's look at theoretical performance and try to figure out how much faster the new Radeon R9 290X should be than the previous top-end Radeon HD 7970 GHz. For now we ignore any efficiency gains from the small architectural changes in GCN; if we assume all the blocks in the R9 290X and HD 7970 are identical, we get the following picture:
With a modest difference in die area and theoretically almost the same power consumption (not shown in the table), peak geometry throughput has nearly doubled, compute and texture performance are up 30%, video memory bandwidth is up 20%, and fill rate is up by as much as 90%! The last figure will matter a great deal given the expected spread of UltraHD resolution in the near future, since the number of pixels on screen will grow markedly.
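The roughly 90% fill-rate jump follows directly from the ROP counts and clocks of the two chips (64 ROPs at up to 1000 MHz on Hawaii versus 32 ROPs at 1050 MHz on the Radeon HD 7970 GHz Edition); a quick check:

```python
# Peak pixel fill rate = ROP count x clock (GPixels/s when clock is in MHz
# and divided by 1000).
def pixel_fillrate_gpix(rops: int, clock_mhz: int) -> float:
    return rops * clock_mhz / 1000

r9_290x = pixel_fillrate_gpix(64, 1000)    # 64.0 GPix/s
hd7970ghz = pixel_fillrate_gpix(32, 1050)  # 33.6 GPix/s
print(round(r9_290x / hd7970ghz - 1, 2))   # 0.9 -> about a 90% increase
```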
All of these improvements raise the effective performance per square millimeter of die area. It would be interesting to know how power efficiency has changed, but AMD does not like to specify TDP for its modern top-end solutions, and the official figure of 275 W for the new board is questionable. We can only hope that energy efficiency has not deteriorated. Performance, though, should definitely improve by at least 20-30% over the Radeon HD 7970, and in some cases by even more.
As if to showcase the increased capabilities, especially in fill rate, AMD cites the average frame rates achieved in Battlefield 4, which launches in a matter of days. Battlefield 4 is the sequel to the popular Battlefield series developed by DICE and is perhaps the most anticipated game of the year.
It is important to us that Battlefield 4 and its developer DICE are part of the AMD Gaming Evolved Partner Program, and therefore there will definitely be no problems with optimizing Battlefield 4 for GCN architecture GPUs. What’s more, the new Frostbite 3 game engine on which Battlefield 4 is based takes advantage of many of AMD’s most advanced video chip capabilities, and a Mantle API-enabled version is expected in December. In the meantime, let’s look at the performance in the normal version of the game:
As you can see, even in «quiet» mode the Radeon R9 290X is clearly ahead of the competing GeForce GTX 780 at both resolutions. There is, however, a theoretical possibility that at such high resolutions the Nvidia card is hampered by its smaller amount of video memory. A larger frame buffer is certainly another advantage of AMD's new product, but it would be interesting to see a comparison at a lower resolution, where memory is not the deciding factor.
We will test the performance of the new solution in our game suite in the third part of the article. Summing up the theoretical part, the AMD Radeon R9 290X should be one of the fastest single-chip 3D accelerators overall and a rather attractive purchase in the enthusiast segment. AMD's newcomer offers new features and excellent performance for its price.
Conclusions on the theoretical part
So, at the end of October 2013, AMD offered the market a model of the Radeon R9 290X video card with a very competitive price and features. Based on the above theoretical characteristics and the recommended price, even without testing in games, we can confidently say that the presented top model of the AMD video card has an excellent ratio of price, performance and functionality.
The functionality of the newcomer is further enhanced by two very interesting AMD initiatives: the TrueAudio sound DSP engine built into its modern chips, and the new low-level graphics API Mantle. Their development was made possible largely because AMD supplies the graphics for all next-generation game consoles. And although the prospects of these initiatives in PC games are still vague and they have not yet gained much traction among developers, this is just the beginning; with the right promotion from AMD, they can succeed.
In the previous article we noted that AMD had not yet played its trump card, the top model of the new line known as the Radeon R9 290X. Today we have seen how this solution, based on the latest Hawaii GPU, has become a powerful locomotive that should pull along both the new technologies, Mantle and TrueAudio, and the company's entire current product line. High-end graphics cards are the products that help sell everything else, and the Radeon R9 290X should do well in this role. The only controversial point is the likely high power consumption of the newcomer, but if its power circuitry and cooling cope with the task, this is not too big a problem.
Having covered the characteristics and capabilities of the Radeon R9 290X in the theoretical part, it is time to move on to practice. The next part of the material examines the rendering speed of AMD's new top-end video card in our familiar set of synthetic tests. It will be very interesting to compare the new upper-segment product with the top video cards of the Radeon HD 7000 line, as well as with competing video cards from Nvidia.
AMD Radeon R9 290X — Part 2: Video cards and synthetic benchmarks →
MSI R9 290X LIGHTNING
graphics card review
The idea of rational purchasing is still on the agenda, but the manufacturer has something to offer even the most demanding enthusiasts. We have already looked at the capabilities of AMD's flagship graphics card, the Radeon R9 290X. Its performance is pleasantly surprising, but the reference model clearly needs some work. Today we look at more than just another custom version of the adapter: the MSI R9 290X LIGHTNING claims to be the fastest and best-equipped graphics card in its class.
LIGHTNING devices appear much later than the reference models: developing a fundamentally new board and layout takes a lot of time. The results justify it, though. The manufacturer's branded «lightning bolts» are often not just a ray of light against the background of uniform reference designs, but showcases that reveal the ultimate potential of a given line. Let's see whether the MSI R9 290X LIGHTNING lives up to such a high mission.
As expected, the video card received higher clock speeds. The graphics processor runs at 1080 MHz instead of the standard 1000 MHz. The adapter carries 4 GB of GDDR5 memory operating at the recommended 5000 MHz effective. At first glance the memory frequency is not high, but don't forget the 512-bit bus, which substantially widens the GPU-to-memory channel.
The new card's external similarity to the MSI N780 LIGHTNING, which we reviewed earlier, is obvious, but there are differences. First of all, the Hawaii-based device is noticeably larger than the model with the NVIDIA chip: the AMD-based adapter uses a bigger cooler.
Given the rather hot temper of the Radeon R9 290X, it's not surprising that the original TriFrozr cooler is used for the card. It is a very large system of non-trivial design.
The large heatsink block consists of an array of small aluminum fins held together by five heat pipes, two of 8 mm diameter and three of 6 mm. The pipes are soldered to the fins to improve contact between the elements. The cooler rests on a massive copper base which, like the heat pipes, has a nickel-plated finish. The pipes run the full length of the heatsink block, putting the entire fin array to work.
Three fans blow across the heatsink: two of 90 mm diameter and one of 80 mm. All of them use Propeller Blade technology; the manufacturer claims that the special impeller shape with beveled blade edges increases airflow without raising the rotation speed.
Two independent channels control the fans: the two side fans are driven in parallel, while the central one is regulated separately. According to MSI, this version of the TriFrozr cooler can dissipate up to 500 W.
The manufacturer paid a lot of attention to the external design of the device. The top of the cooler is covered with a decorative anodized aluminum cover. Bright yellow center fan, shroud screw fastening, round metal plates with concentric notches on the side fans. Everything is done to make the device memorable even outwardly.
The MSI R9 290X LIGHTNING uses an original PCB, reworked beyond recognition. From above, the board is covered with a massive heatsink plate that provides additional passive cooling for the VRM elements and memory chips. Note that the chips not covered by the plate contact the main heatsink through thermal pads.
The power regulator is 15-phase, 10 of which are used for the GPU. The circuit uses elements that comply with the Military Class 4 specification.
In particular, expensive and efficient Copper MOS power switches, SFC chokes, Dark Solid Caps and low profile tantalum core capacitors are involved here.
Interestingly, MSI abandoned the GPU Reactor here, the miniature daughterboard with a set of capacitors that improves voltage stability in the GPU circuit; the three-slot format forced an alternative. The developers' answer is a whole bandolier of 24 tantalum-core capacitors, 12 on each side of the PCB, which reduces ripple under load.
A shortened version of the heatsink plate is also supplied with the video card. It is relevant when a water block is fitted to the adapter or a liquid-nitrogen session is planned: the standard plate would interfere with mounting a large heat sink, while the compact version covers only the VRM elements.
The Enhanced Power Design uses three separate controllers to control GPU, memory, and PLL/PCI-E power phases instead of the reference model’s one. To power the RAM chips, a separate channel is allocated, coming from an additional 12 V line. In this case, the circuit is designed in such a way as to reduce the impact on the electrical parameters of the motherboard during overclocking of the video card chip / memory.
After a large part of SK Hynix’s production capacity in China was damaged by a fire in the fall of 2013, graphics card manufacturers have experienced a serious shortage of high-speed, high-capacity memory chips. In some cases, even top-end devices had to use Elpida chips, which work fine at stock frequencies, but, as a rule, have much more modest overclocking potential. What this means for enthusiasts, no need to explain. At one time, MSI also faced a similar problem — Elpida chips were used on LIGHTNING series devices for lack of alternatives. Later, chips with great potential were used for the same models, but for the owners of the devices of the first wave, this was little consolation. Now the situation with the availability of fast memory chips has improved, but MSI is approaching this issue with even greater scrupulousness.
In the case of the MSI R9 290X LIGHTNING, the manufacturer selected the optimal set of chips by consulting directly with the GPU developers. As a result, preference was given to Samsung K4G20325FD-FC03, which should provide a good margin of safety. All 16 microcircuits are installed on the front side of the printed circuit board.
Twin BIOS technology means two firmware chips are present. In standard mode you get the recommended clock speeds and the ability to raise the Power Limit by 20%. The second chip holds a BIOS that removes some of the restrictions that could get in the way during extreme overclocking. A miniature toggle switch at the top edge of the PCB selects between the chips.
An anodized aluminum plate is attached to the back of the PCB. It is not used for cooling, but it protects against mechanical damage when working on an open bench, which is effectively the natural habitat of the MSI R9 290X LIGHTNING, so the rear cover will definitely not hurt. The plate is fastened with 10 screws, stiffening the whole structure along the way; for a device weighing 1560 grams, such precautions do not seem redundant.
On the back side of the PCB there is a line of LEDs that reflect the current loading of the power phases.
There are three auxiliary power connectors: two 8-pin and one 6-pin.
The MSI R9 290X LIGHTNING has an interesting power supply indication system. If no additional power lines are connected to the adapter, or only one of them is connected, the red LED on the reverse side lights up. If two 8-pin cables are connected, the blue indicator lights up. In this mode, you can already use the video card. However, if serious overclocking of the adapter is expected, a third 6-pin connector must be connected. In this case, the orange LED lights up, indicating the readiness of the device for frequency experiments.
Illumination lovers will love this new MSI product. In addition to the already mentioned light elements, on the upper edge there is an indicator of the load on the GPU, made in the form of the inscription «LIGHTNING». In rest mode, it glows green. It changes to blue at medium load, and red at maximum.
If all three power lines are connected to the video card, the GPU load indicator is always red, regardless of the current state of the device.
MSI keeps up a good tradition with its gentleman's 3×3 OC Kit for the LIGHTNING series, meaning support for three useful functions. Triple Overvoltage lets you change the voltage of the GPU, memory and bus. Triple Temperature Monitor lets you watch the temperature of the GPU, power regulator and memory chips. Triple Voltage Monitor provides contact groups for measuring the voltages of the card's main components with a multimeter.
The MSI Afterburner app handles temperature monitoring and voltage control. For measurements with an external device, three connectors sit at the side edge of the adapter, and separate adapters are provided for connecting the probes: a systematic approach. This method is much more convenient when voltages need to be tracked throughout a benchmark session.
The video card is about 300 mm long, and not every case will accept such an adapter. The device is quite large, so first make sure there is enough room between the mounting panel and the drive cage. Also keep the three-slot format in mind if you plan to run multiple video cards in CrossFire mode.
The video interface panel contains four video outputs: two DVI (DVI-I and DVI-D), as well as HDMI and DisplayPort. On the mounting plate there is a grill for the output of heated air.
The review sample was provided by MSI.
The package includes a manual, a software CD, two 6-pin-to-8-pin power adapters, and a dual-Molex-to-6-pin 12 V adapter. The delivery also includes three adapters for connecting multimeter probes.
When considering any alternative version of the Radeon R9 290X, the first thing to evaluate is the cooling system: the reference cooler is disappointing, so any improvement here is very valuable.
At idle, the GPU temperature stays at 37°C. The fans run at just over 900 rpm; this figure refers to the side fans, whose speed is what the monitoring utilities report. The central fan idles at 30% of its maximum speed.
Some colleagues noticed an increased noise level at idle during their first tests of the MSI R9 290X LIGHTNING. The nuance is that in the first firmware versions the base speed of the central fan was 45%, not 30%, and in that case the noise is indeed somewhat higher than expected. According to the manufacturer, however, retail units carry updated firmware that has already corrected the base fan speed.
After the load tests, the GPU temperature on the open bench rose to only 71°C. Against the 95°C of the reference model, this result is impressive. The side fans sped up to 2000 rpm and the central one to 50-55%. Under load the noise level is average: the card is much quieter than the reference model, but definitely not silent.
Given the high efficiency of the cooling system, silence lovers may well try to find the optimal fan speed. You can control the side ones using the same MSI Afterburner, but to adjust the central one you will need a special application — MSI Fan Control. The utility allows you to use the automatic mode or explicitly set the required value. You can find the most comfortable combination experimentally.
During our experiments, the GPU clock was raised to 1174 MHz, a very good result.
The memory chips also pleased: their effective frequency was raised to 6364 MHz without consequences. Given the 512-bit bus, bandwidth rises to 407 GB/s.
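Both bandwidth figures in the review check out from the bus width and effective memory clock alone; a quick verification:

```python
# Memory bandwidth (GB/s) = bus width in bytes x effective clock in GHz,
# i.e. bus_bits / 8 * eff_clock_mhz / 1000.
def mem_bandwidth_gbs(bus_bits: int, eff_clock_mhz: float) -> float:
    return bus_bits / 8 * eff_clock_mhz / 1000

print(mem_bandwidth_gbs(512, 5000))         # 320.0 GB/s at stock
print(round(mem_bandwidth_gbs(512, 6364)))  # 407 GB/s after overclocking
```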
The full potential of the card can only be revealed with extreme overclocking under liquid nitrogen. The MSI R9 290X LIGHTNING has a large technical reserve and is ready for such conditions.
The factory overclock lets the card outperform the reference model by 4-8% in games and synthetic applications.
In general, the increase is relatively small. But using the MSI R9 290X LIGHTNING at stock settings is like driving a lemon-yellow Lamborghini down a country road: everything about the device invites more aggressive «driving».
After overclocking, performance increases by another 10-12%, and every additional percent matters in this class. The overclocked card can in most cases approach or even outpace the GeForce GTX 780 Ti, currently the fastest and most expensive single-chip video card. As you can see, the «red» camp now has a worthy alternative.
Video Review MSI R9 290X LIGHTNING