AMD R9 390 CrossFire vs. SLI GTX 970 Benchmark, Ft. Devil 13 Dual-Core 390 | GamersNexus

PowerColor Devil 13 Dual-Core R9 390 Specs

Model AXR9 390 II 16GBD5
GPU R9 390 x2
Core Clock (GPU) 1000MHz
Stream Processors 5120 total stream processors
Memory Config 2x 8GB GDDR5 banks
Memory Interface 2x 512-bit
Memory Speed (GPU) 1350MHz
Price $600

PowerColor’s Devil 13 video card is a three-slot, five-pound behemoth. It’s a single-card solution branded as “dual-core,” owing to the two R9 390 GPUs on its PCB. The weight comes primarily from a rigid backplate for structural support, paired with a thick alloy heatsink under the faceplate. PowerColor goes heavy on the metals for this card, and that’s something we’ll look at more closely in the forthcoming, standalone review – sag is definitely a concern. The card runs for $600 – similarly priced to dual GTX 970s (ranging ~$620-$660) – but carries a sizable $100 MIR right now.

The Devil 13 uses a three-fan array for cooling, each fan using a five-fin scoop design. The power block is the most impressive part – 32 pins of power (4×8-pin headers). Granted, that’s the same total you’d find across most dual-card configurations; it just looks impressive on one card.

PowerColor’s Devil 13 is equipped with 16GB of GDDR5 memory, but that’s not “stackable” memory for any of the games we tested; each GPU has access to its own 8GB GDDR5 pool, and they cannot share memory pools to exceed 8GB in DirectX 11.

At its stock speeds, the Devil 13 runs a core clock of 1000MHz and memory clock of 1350MHz. Overclocking won’t be tested until the review.

EVGA GTX 970 SuperSC Specs

                   EVGA 970 Hybrid    EVGA 970 SSC       MSI GTX 970 Gaming   GTX 970 Stock
Base Clock (GPU)   1140MHz            1190MHz            1140MHz              1050MHz
Boost Clock (GPU)  1279MHz            1342MHz            1279MHz              1178MHz
Memory Clock       7010MHz            7010MHz            7010MHz              7000MHz
Mem Spec           4GB GDDR5 256-bit  4GB GDDR5 256-bit  4GB GDDR5 256-bit    4GB GDDR5 256-bit
Price              $400               $350               $350                 $310

From our previous article:

Considerations of SLI

There are two primary scenarios where SLI or CrossFire are used: A later upgrade when half the configuration is already owned and a day-one, brand new build. In the event of the first scenario – where the user already owns one GTX 970 and is considering a second – the value considerations are different and will be discussed only in the conclusion. Also in that scenario, we’d always recommend buying that second card (if truly desired) before it exits production. Almost every time a card exits official production, prices spike on retailers and second-hand markets. It’s almost always better to buy a newer, single card once that happens, as the spiked prices are hardly sane or good value.

SLI and CrossFire are also historically prone to micro-stuttering as a result of their dual-processing technique (normally AFR, or alternate frame rendering). GPUs render alternating frames when using the AFR technique; more explicitly, GPU A will render all odd frames (1, 3, 5, 7) while GPU B renders all even frames (0, 2, 4, 6). Micro-stutter can be so extreme in some games and driver sets that SLI becomes undesirable, even if average FPS is improved over single-card configurations. In these situations, disabling one of the two GPUs (but leaving the GPU physically installed) will reduce or eliminate micro-stutter, but then only half the investment is actually doing work – certainly an unwanted situation.
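As a toy illustration of the AFR split described above (purely illustrative; real drivers schedule this internally):

```python
def afr_schedule(frame_count):
    """Illustrative alternate frame rendering (AFR) split: GPU B takes the
    even frames (0, 2, 4...), GPU A takes the odd frames (1, 3, 5...)."""
    schedule = {"GPU A": [], "GPU B": []}
    for frame in range(frame_count):
        gpu = "GPU B" if frame % 2 == 0 else "GPU A"
        schedule[gpu].append(frame)
    return schedule

print(afr_schedule(8))
# {'GPU A': [1, 3, 5, 7], 'GPU B': [0, 2, 4, 6]}
```

Because each GPU only ever sees every other frame, any mismatch in per-frame render time between the two shows up directly as uneven frame pacing on screen.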

Micro-stutter is observable as a result of disparate frame-time gaps, where the time between frame renders is inconsistent enough that the user can perceive a jarring difference – e.g. jumping from a 16ms render to a 30ms or 40ms render time (or worse). Adaptive synchronization technologies have helped to mitigate this phenomenon. Monitors supporting G-Sync and FreeSync are of particular importance for consideration when running SLI or CrossFire. The GPU connected to the display manages the sync technology. NVidia SLI setups fully support G-Sync. AMD CrossFire setups, as of driver version 15.7 from July 2015, also fully support FreeSync.
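A minimal sketch of how such frame-time spikes could be flagged from logged per-frame render times (the threshold values here are arbitrary assumptions for illustration, not our actual analysis methodology):

```python
def stutter_frames(frametimes_ms, budget_ms=16.7, spike_factor=2.0):
    """Return indices of frames whose render time jumps well past the
    steady-state budget, e.g. a ~16ms cadence broken by 30-40ms frames."""
    threshold = budget_ms * spike_factor
    return [i for i, ft in enumerate(frametimes_ms) if ft >= threshold]

times = [16.7, 16.5, 16.8, 39.0, 16.6, 34.1]
print(stutter_frames(times))  # [3, 5] -> two perceptible hitches
```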

Another performance consideration for multi-GPU setups – in a similar vein to micro-stutter – is that 1% and 0.1% low frame performance can sometimes be worse than on single-card setups. This is another point that could potentially favor a single, higher-end GPU, but recent optimizations made to drivers and games may reduce the impact to manageable territory – we’ll look at that below.

No overclocking was applied during these tests, which does mean that the slower of the two cards (the 970 Hybrid, at 1140MHz) will marginally impact the overall performance. SLI overclocking is being done in one of our next tests and has been reserved for that content. We’ve already tested this 1140MHz vs. 1190MHz differential in a separate piece, if you’re curious about what kind of delta that produces. We mostly saw differences centered around 1.87% to 2.5%.

Test Methodology

We tested using our 2015 multi-GPU test bench. Our thanks to supporting hardware vendors for supplying some of the test components.

The latest AMD drivers (15.12) were used for testing. NVidia’s 361.43 drivers were used for testing the latest games. Game settings were manually controlled for the DUT. All games were run at presets defined in their respective charts. We disable brand-supported technologies in games, like The Witcher 3’s HairWorks and HBAO. All other game settings are defined in respective game benchmarks, which we publish separately from GPU reviews. Our test courses, in the event manual testing is executed, are also uploaded within that content. This allows others to replicate our results by studying our bench courses.

Each game was tested for 30 seconds in an identical scenario, then repeated three times for parity. The results in the tables are averages of these three runs.

Z97 Bench:

GN Test Bench 2015   Name                            Courtesy Of      Cost
Video Card           This is what we’re testing!     –                –
CPU                  Intel i7-4790K CPU              CyberPower       $340
Memory               32GB 2133MHz HyperX Savage RAM  Kingston Tech.   $300
Motherboard          Gigabyte Z97X Gaming G1         GamersNexus      $285
Power Supply         NZXT 1200W HALE90 V2            NZXT             $300
SSD                  HyperX Predator PCI-e SSD       Kingston Tech.   TBD
Case                 Top Deck Tech Station           GamersNexus      $250
CPU Cooler           Be Quiet! Dark Rock 3           Be Quiet!        ~$60

X99 Bench:

GN Test Bench 2015   Name                            Courtesy Of      Cost
Video Card           This is what we’re testing!     –                –
CPU                  Intel i7-5930K CPU              iBUYPOWER        $580
Memory               Kingston 16GB DDR4 Predator     Kingston Tech.   $245
Motherboard          EVGA X99 Classified             GamersNexus      $365
Power Supply         NZXT 1200W HALE90 V2            NZXT             $300
SSD                  HyperX Savage SSD               Kingston Tech.   $130
Case                 Top Deck Tech Station           GamersNexus      $250
CPU Cooler           NZXT Kraken X41 CLC             NZXT             $110

Average FPS, 1% low, and 0.1% low times are measured. We do not measure maximum or minimum FPS results as we consider these numbers to be pure outliers. Instead, we take an average of the lowest 1% of results (1% low) to show real-world, noticeable dips; we then take an average of the lowest 0.1% of results for severe spikes.
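As a rough sketch of the percentile-low computation described above (our actual logging and analysis tooling differs; this only mirrors the written description):

```python
def fps_metrics(frametimes_ms):
    """Average FPS plus 1% low and 0.1% low, computed as the average FPS of
    the slowest 1% / 0.1% of frames, per the methodology described above."""
    fps = sorted(1000.0 / ft for ft in frametimes_ms)  # ascending: slowest first
    def low_avg(fraction):
        n = max(1, int(len(fps) * fraction))
        return sum(fps[:n]) / n
    avg = sum(fps) / len(fps)
    return avg, low_avg(0.01), low_avg(0.001)

# 990 smooth 10ms frames plus 10 slow 50ms frames:
avg, low1, low01 = fps_metrics([10.0] * 990 + [50.0] * 10)
print(round(avg, 1), low1, low01)  # 99.2 20.0 20.0
```

Note how a handful of slow frames barely moves the average but drags the 1% and 0.1% lows far below it – exactly why we report these metrics instead of minimum FPS.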

Overclocking was performed incrementally using MSI Afterburner. Parity of overclocks was checked using GPU-Z. Overclocks were applied and tested for five minutes at a time and, if the test passed, would be incremented to the next step. Once a failure was provoked or instability found — either through flickering / artifacts or through a driver failure — we stepped down the OC and ran a 30-minute endurance test using 3DMark’s FireStrike Extreme on loop (GFX test 2).

Thermals and power draw were both measured using our secondary test bench, which we reserve for this purpose. Thermals are measured using AIDA64. We execute an in-house automated script to ensure identical start and end times for the test. 3DMark FireStrike Extreme (GFX test 2) is executed on loop for 25 minutes and logged. Parity is checked with GPU-Z.

Thermals, power, and overclocking were all conducted on the Z97 bench above.

Fallout 4 Benchmark – 970 SLI vs. R9 390 CrossFire, 980 Ti, 980, 390X

Fallout 4 was unexpectedly brutal in this particular benchmark. The SLI 970 configuration handily stomps the CF 390 FPS by 36.5%, which seems to be a result of non-existent CrossFire scaling in Fallout 4. This issue presents itself to some degree in a few other games, but is most notable in Fallout 4. Until a point at which scaling is properly supported on this configuration (Devil 13 with 2x R9 390s), it appears that Fallout 4 runs most efficiently on just about any other configuration.

Metro: Last Light Benchmark – 970 SLI vs. R9 390 CrossFire, 980 Ti, 980, 390X

At the lower two resolutions – 1080p and 1440p – the SLI GTX 970s marginally outperform the dual R9 390 configuration. 1080p fronts a ~1FPS difference, or 0.87% delta. 1440p is, again, 1FPS different (1.13% delta). These measurements are outside of margin of error and were a combination of multiple test passes, validated as accurate. At 4K, the R9 390 setup pulls ahead by 3.51% (58FPS vs. 56FPS AVG on the 970s). This is similar to what we’ve seen on most of AMD’s cards, which is generally an upward performance trend as resolution increases.

At all three resolutions, performance is close enough to be inconsequential and unnoticeable to the end-user. There is effectively no perceptible difference in AVG FPS metrics. Even the 1% low and 0.1% low frametimes are respectable across the board here – respectable, but inconsequential for comparative purposes.
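For reference, the deltas quoted in these comparisons are relative changes against the comparison baseline's average FPS; recomputing from the rounded chart values lands close to, but not exactly on, the in-text figures, which come from unrounded test data. A quick helper:

```python
def pct_delta(a, b):
    """Percent advantage of value a over baseline b."""
    return (a - b) / b * 100.0

# 4K Metro: Last Light, rounded chart averages (58FPS vs 56FPS):
print(round(pct_delta(58, 56), 2))  # 3.57 -- the in-text 3.51% uses unrounded data
```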

Shadow of Mordor Benchmark – 970 SLI vs. R9 390 CrossFire, 980 Ti, 980, 390X

The CF 390s finally pull ahead in Shadow of Mordor, leading the SLI 970s by 4.7% at 1080p (130 vs. 124FPS AVG) and 4.1% at 1440p (100FPS vs. 96FPS AVG). As with the previous test, 4K produces a slightly larger lead for the AMD solution (8% – 61FPS vs. 56.3FPS AVG). Again, not hugely noticeable. The 8% delta begins to enter a realm of possible detection by users, but isn’t quite there yet. The average user will not detect the FPS difference between these two solutions.

As for frametimes, 0.1% metrics run poorly on both the SLI and CrossFire configurations when compared against neighboring single-card alternatives. The 2x R9 390s run a bit worse. A 51.3FPS 0.1% low against an average throughput of 130FPS (1080p) will be perceived as the occasional ‘dip,’ ‘stutter,’ or ‘lag’ (or other colloquialism) on rare occasion.

Assassin’s Creed Syndicate Benchmark – 970 SLI vs. 980 Ti, 970, 980, 390X

We technically have a line for the PCS Devil 13 above – but in reality, the card was more of a “DNF.” We were only able to get Assassin’s Creed: Syndicate to survive for long enough to execute one test pass per game launch. The dual R9 390 setup experienced such frequent crashes in ACS that we could not operate the 1440p/ultra settings for longer than one minute in-game.

The data was included above because it was confidently collected, but from a user standpoint, ACS would have been unplayable in our tested configuration for the Devil 13. We have reached out to appropriate parties and are researching this issue. This seems most likely to be some sort of driver or game optimization issue – not particularly surprising, as ACS has experienced SLI / CF scalability issues several times in the past.

COD: Black Ops III Benchmark – 970 SLI vs. 980 Ti, 970, 980, 390X

We’ve done extensive testing with Black Ops III, including a graphics optimization guide for performance tuning. The game has since released major patches which affected performance on nVidia and AMD devices, but most heavily on the AMD side. Our above chart includes the most up-to-date patch for benchmarking.

At 1440p, the CrossFire R9 390s win out over the SLI GTX 970s by a measurable 10.24% (154FPS vs. 139FPS). At such framerates it becomes debatable how useful an extra 15 frames realistically is, but users hoping to saturate a 144Hz display would be advantaged by the 2x R9 390s in this case. To be fair, performance tuning on the 970s could produce a similar FPS – but for maxed settings, the 390s punch a bit higher. The GTX 970s have superior 1% lows, but the gap between 65.3 and 72 1% low FPS is effectively imperceptible at the end of the day.

4K punishes the GTX 970s in the 0.1% low department, something we saw reflected across two re-tests (nine total test passes) for validation.

The Witcher 3 Benchmark – 970 SLI vs. 980 Ti, 970, 980, 390X

The Witcher 3 favors the SLI GTX 970s, producing a large 22.2% advantage at 1440p and a 40% advantage at 4K resolution. The CF 390s exhibit significantly worse 0.1% and 1% low frametimes (more than 100% worse than the 970s), something we saw ‘clinically’ as framedrops while testing.

GTA V Benchmark – 970 SLI vs. 980 Ti, 970, 980, 390X

GTA V had some anomalous issues with the CrossFire configuration during testing. Very rarely – about once per four-minute test period – we observed a significant freeze in performance which endured for about 1-1.5 seconds. This is reflected abysmally in low frametimes for some tests, depending on whether or not the issue was reproduced during the test period.

At 1080p, the SLI 970s are advantaged by a noticeable 17.86% (122FPS vs. 102FPS AVG). More noticeable is the 0.1% low metric at 77.3FPS vs. 34.7FPS for the 2x R9 390s. 1440p saw a 6.1% AVG FPS delta that favored the 970s, but severely low 0.1% output (8FPS) on the 2x 390s prohibited smooth play.

Just Cause 3 Benchmark – SLI 970 vs. 980 Ti, 980, 970, 390X

We’ve already ruled that Just Cause 3 has poor multi-GPU scaling – twice, actually – but there’s no harm in re-testing. Our experience with Just Cause 3 has been disappointing for multi-card setups. Even as SLI scaling has improved, the delta against a single card is small enough that it’d be a waste to run two cards rather than a single, more powerful one. This is especially true for the 390s, where performance seems to indicate that there’s no CF scaling at all with the Devil 13.

Performance Data Recap

MLL – AVG FPS winner: CF 390 at 1080/1440/4K. Notes: 980 Ti loses ground as resolution increases. Noticeable? No.

FO4 – AVG FPS winner: SLI 970 at 1080 (36.50% delta). Notes: CF scaling does not seem to work with the 2×390 Devil 13. Noticeable? Yes!

Mordor – AVG FPS winner: CF 390 at 1080/1440/4K. Notes: Poor 0.1% low frametimes on CF 390 (and SLI, though not as bad). Noticeable? No, though 4K begins to enter this territory.

W3 – AVG FPS winner: SLI 970 at 1440 (22.2%) and 4K (40%). Notes: Bad low frametimes on CF 390 (102% worse than SLI 970). Noticeable? Yes!

BLOPS3 – AVG FPS winner: CF 390 at 1440/4K. Notes: 970 has better low frametimes at lower resolutions, but falls hard at 4K. Noticeable? Yes – pushes into 144Hz range, desirable for competitive FPS.

ACS – AVG FPS winner: SLI 970 at 1440 (N/A – DNF; CF 390 crashed ACS). Noticeable? Yes – CF 390 DNF.

JC3 – AVG FPS winner: neither; both SLI and CF lose here. Notes: No reasonable SLI or CF scaling in JC3 at this time. Very bad value to buy multi-card for this game. Noticeable? N/A.

GTA – AVG FPS winner: SLI 970 at 1080 (17.86%) and 1440 (6.1%); CF 390 at 4K (6.3%). Notes: SLI 970 noticeably better at 1080; disparity vanishes as resolution increases. Noticeable? Yes at 1080, no at 1440/4K.

Power Draw Benchmark – SLI GTX 970s vs. CrossFire R9 390s

Here’s where we start seeing differentiating stats. The PCS Dual R9 390 card puts the peak system load up to 582.73W, compared against the SLI GTX 970 power draw of 411.72W. ~170W is certainly a noticeable difference and will impact PSU selection. The performance gain – when there is one – does not seem to correlate with the 34.4% difference in power.
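For clarity on the 34.4% figure: the measured wattages match the percent-difference convention, i.e. the gap relative to the midpoint of the two readings, rather than a percent increase over the 970s. A quick check:

```python
def percent_difference(a, b):
    """Gap between two readings, relative to their midpoint."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

cf_390_w = 582.73   # peak system draw, CrossFire R9 390s (W)
sli_970_w = 411.72  # peak system draw, SLI GTX 970s (W)

print(round(cf_390_w - sli_970_w, 2))                     # 171.01 (the ~170W gap)
print(round(percent_difference(cf_390_w, sli_970_w), 1))  # 34.4
```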

Thermal Benchmark – Devil 13 R9 390 & GTX 970 SSC

This chart and data will be discussed more heavily in our future review of the card, since thermals are more heavily impacted by coolers and AIB partners in multi-GPU situations. The cards operate about 1C apart – but that’s pretty useless for shoppers of CrossFire configurations, since the thermals will hinge entirely upon which two cards you buy. The thermal measurement is only useful in scenarios where the Devil 13 is considered. We don’t objectively measure dBA output (though may begin soon), but can subjectively state that the Devil 13 is the loudest card we’ve ever tested. It’s within ‘unpleasant’ territory when operating at full-tilt.

Conclusion: SLI GTX 970s or CrossFire R9 390s?

The short answer, as we declared in the “SLI 970s or 980 Ti?” article, is “it depends.” It is still safer to trend towards similarly-priced single-GPU configurations; such a purchase avoids buyer’s remorse in situations like those generated by Just Cause 3, where scalability may as well not exist. This stated, existing PC builds which already contain one of the two GPUs could benefit from an upgrade – and it’d be in the ~$300-$350 range (GTX 970 or R9 390), not in the $600+ range for a new build.

For a brand new system build, we’d generally advise in favor of a single-GPU for its versatility. Some specific games are so heavily advantaged by multi-GPU – Black Ops III in particular, for both the 970s and CF 390s – that it’d be worth doing, but only if that game is going to be a primary source of entertainment or competition.

Our advice is to look over the charts relevant to your gaming, read the accompanying text, and determine what’s best for you.

Do note that running two R9 390s will draw substantially more power than two 970s (and more still than a single Fury X or 980 Ti). Adjust PSU purchases appropriately.

Editorial & Testing: Steve “Lelldorianx” Burke
Video & Video Editing: Keegan “HornetSting” Gallick

AMD Radeon R9 390X vs PowerColor Devil 13 Dual-Core R9 390: What is the difference?

Why is AMD Radeon R9 390X better than PowerColor Devil 13 Dual-Core R9 390?

  • 305W lower TDP? 275W vs 580W
  • 150MHz faster memory clock speed? 1500MHz vs 1350MHz
  • 600MHz higher effective memory clock speed? 6000MHz vs 5400MHz
  • Has Double Precision Floating Point (DPFP)?
  • 29.8mm narrower? 275mm vs 304.8mm

Why is PowerColor Devil 13 Dual-Core R9 390 better than AMD Radeon R9 390X?

  • 4.33 TFLOPS higher floating-point performance? 10.24 TFLOPS vs 5.91 TFLOPS
  • 60.8 GPixel/s higher pixel rate? 128 GPixel/s vs 67.2 GPixel/s
  • 2x more VRAM? 16GB vs 8GB
  • 135 GTexels/s higher texture rate? 320 GTexels/s vs 185 GTexels/s
  • 308GB/s more memory bandwidth? 692GB/s vs 384GB/s
  • 512bit wider memory bus width? 1024bit vs 512bit
  • 2304 more shading units? 5120 vs 2816
  • 144 more texture mapping units (TMUs)? 320 vs 176


Performance

1. GPU clock speed: 1050MHz vs 1000MHz

The graphics processing unit (GPU) has a higher clock speed.

2. GPU turbo: Unknown for both cards.

When the GPU is running below its limitations, it can boost to a higher clock speed in order to give increased performance.

3. Pixel rate: 67.2 GPixel/s vs 128 GPixel/s

The number of pixels that can be rendered to the screen every second.

4. Floating-point performance: 5.91 TFLOPS vs 10.24 TFLOPS

Floating-point performance is a measurement of the raw processing power of the GPU.

5. Texture rate: 185 GTexels/s vs 320 GTexels/s

The number of textured pixels that can be rendered to the screen every second.

6. GPU memory speed: 1500MHz vs 1350MHz

The memory clock speed is one aspect that determines the memory bandwidth.

7. Shading units: 2816 vs 5120

Shading units (or stream processors) are small processors within the graphics card that are responsible for processing different aspects of the image.

8. Texture mapping units (TMUs): 176 vs 320

TMUs take textures and map them to the geometry of a 3D scene. More TMUs will typically mean that texture information is processed faster.

9. Render output units (ROPs)

The ROPs are responsible for some of the final steps of the rendering process, writing the final pixel data to memory and carrying out other tasks such as anti-aliasing to improve the look of graphics.

Memory

1. Effective memory speed: 6000MHz vs 5400MHz

The effective memory clock speed is calculated from the size and data rate of the memory. Higher clock speeds can give increased performance in games and other apps.

2. Maximum memory bandwidth: 384GB/s vs 692GB/s

This is the maximum rate at which data can be read from or stored into memory.

3. VRAM: 8GB vs 16GB

VRAM (video RAM) is the dedicated memory of a graphics card. More VRAM generally allows you to run games at higher settings, especially for things like texture resolution.

4. Memory bus width: 512bit vs 1024bit

A wider bus width means that it can carry more data per cycle. It is an important factor of memory performance, and therefore of the general performance of the graphics card.

5. Version of GDDR memory

Newer versions of GDDR memory offer improvements such as higher transfer rates that give increased performance.

6. Supports ECC memory: ✖ AMD Radeon R9 390X, ✖ PowerColor Devil 13 Dual-Core R9 390

Error-correcting code memory can detect and correct data corruption. It is used when it is essential to avoid corruption, such as scientific computing or when running a server.

Features

1. DirectX version

DirectX is used in games, with newer versions supporting better graphics.

2. OpenGL version

OpenGL is used in games, with newer versions supporting better graphics.

3. OpenCL version

Some apps use OpenCL to apply the power of the graphics processing unit (GPU) for non-graphical computing. Newer versions introduce more functionality and better performance.

4. Supports multi-display technology: ✔ AMD Radeon R9 390X, ✔ PowerColor Devil 13 Dual-Core R9 390

The graphics card supports multi-display technology. This allows you to configure multiple monitors in order to create a more immersive gaming experience, such as having a wider field of view.

5. Load GPU temperature: Unknown for both cards.

A lower load temperature means that the card produces less heat and its cooling system performs better.

6. Supports ray tracing: ✖ AMD Radeon R9 390X, ✖ PowerColor Devil 13 Dual-Core R9 390

Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows, and reflections in games.

7. Supports 3D: ✔ AMD Radeon R9 390X, ✔ PowerColor Devil 13 Dual-Core R9 390

Allows you to view in 3D (if you have a 3D display and glasses).

8. Supports DLSS: ✖ AMD Radeon R9 390X, ✖ PowerColor Devil 13 Dual-Core R9 390

DLSS (Deep Learning Super Sampling) is an upscaling technology powered by AI. It allows the graphics card to render games at a lower resolution and upscale them to a higher resolution with near-native visual quality and increased performance. DLSS is only available on select games.

9. PassMark (G3D) result: Unknown for the PowerColor Devil 13 Dual-Core R9 390.

This benchmark measures the graphics performance of a video card. Source: PassMark.

Ports

1. Has an HDMI output: ✔ AMD Radeon R9 390X, ✔ PowerColor Devil 13 Dual-Core R9 390

Devices with an HDMI or mini HDMI port can transfer high-definition video and audio to a display.

2. HDMI ports: Unknown for the PowerColor Devil 13 Dual-Core R9 390.

More HDMI ports mean that you can simultaneously connect numerous devices, such as video game consoles and set-top boxes.

3. HDMI version: HDMI 1.4 vs unknown for the PowerColor Devil 13 Dual-Core R9 390.

Newer versions of HDMI support higher bandwidth, which allows for higher resolutions and frame rates.

4. DisplayPort outputs

Allows you to connect to a display using DisplayPort.

5. DVI outputs

Allows you to connect to a display using DVI.

6. Mini DisplayPort outputs

Allows you to connect to a display using mini-DisplayPort.



R9-390X-F28M

The XFX Radeon™ R9 390 Series graphics cards put premium 4K gaming and Virtual Reality within reach. Experience smooth, true-to-life, stutter-free gameplay with ultra-high performance and resolution, thanks to the 8GB of onboard memory. A whole new dimension of gaming, for a whole new reality. Your days of gaming in mere HD are behind you. All the power you need for the most immersive 4K gaming experience and beyond, now and tomorrow. Radeon™ is Gaming.


Built for the gamer.


Radeon™ is faster. Radeon™ is immersive. Radeon™ is gaming.

For the gamer that demands the absolute best, the XFX Radeon™ R9 390 Series graphics cards put premium 4K gaming and Virtual Reality within reach. With 8GB of onboard memory, nothing can stop you.

Extreme 4K Gaming For Serious Gamers. Teleport into life-like 4K gaming realism and beyond, supercharged with DirectX® 12 support, ultra-high performance, more memory and wider bandwidth than before with more muscle thanks to AMD CrossFire multi-GPU technology for extreme gaming performance.

No Stuttering. No Tearing. Just Gaming. Enjoy lag-free, tear-free, 4K gaming on the most demanding games with AMD FreeSync technology. Now that’s smooth.

AMD’s LiquidVR Technology makes the Virtual World Real. Step into enhanced gaming realism with AMD LiquidVR technology and sustain an ultra-immersive VR presence, liquid-smooth, low-latency, plug-and-play compatible.

  • Microsoft DirectX 12 Support
  • GCN Architecture
  • Virtual Super Resolution
  • AMD LiquidVR Technology
  • AMD FreeSync Technology
  • AMD HD3D Technology
  • Display Flexibility
  • HDMI 1.4a
  • Dolby TrueHD and DTS-HD Master Audio Support
  • AMD App Acceleration
  • AMD PowerTune technology
  • PCI Express® 3.0
  • AMD CrossFire technology
  • AMD Eyefinity multi-display technology
  • Up to 6 displays supported, may require the use of DP1.2 MST Hubs.
  • 28nm Process Technology
  • Advanced GDDR5 Memory Technology
  • Enhanced Internet Applications
  • Microsoft® Windows® 10 Support
  • AMD TrueAudio Technology
  • Mantle
  • Frame Rate Target Control(FRTC)

Model Number R9-390X-F28M

Product Name AMD Radeon™ R9 390X

Product Description AMD Radeon™ R9 390X 1060MHz 8GB Dual Dissipation

UPC Number 778656068667

Specifications

Bus Type PCI-E 3.0

GPU Clock Up to 1060 MHz

Stream Processors 2816

Memory Bus 512 bit

Memory Clock 6.0GHz

Memory Size 8 GB

Memory Type GDDR5

Card Profile Dual

Thermal Solution Dual Slot DD fansink

Outputs

Dual link Support Y

Max Supported Resolution (DIGITAL) 4096 x 2160

Output — Display Port 3

Output — HDMI 1

Output — DL-DVI-D 1

Features

Display Port ready 1.2

HDMI Ready 1.4a

Requirements

External Power — 8-pins 1

External Power — 6-pins 1

Minimum Power Supply Requirement 500 watt

XFX Recommended Power Supply XFX 550W PSU

Certifications

RoHS

Package Contents

8-pin to 6-pin power cable 1

6-pin to 4-pin power cable 1

Driver Disk Installation Guide 1

Installation DVD 1

Estimated Dimensions And Weights

Card Dimension (cm) 29.5 x 14.3 x 4.2

Card Dimension (inch) 11.61 x 5.63 x 1.65

Master Carton Dimensions (cm) 43 x 49 x 35.5

Master Carton Dimensions (inch) 16.93 x 19.29 x 13.98

Master Carton Weight (Kg) 17.16 est.

Master Carton Weight (lb) 37.83 est.

Package Dimensions (cm) 34 x 24 x 8

Package Dimensions (inch) 13.39 x 9.45 x 3.15

Package Weight (Kg) 1.57 est.

Package Weight (lb) 3.46 est.

Units/Carton 10


PowerColor Launches Devil 13 R9 390 Dual Grenada Pro Graphics Card

PowerColor has once again launched a dual-GPU graphics card based on its Devil 13 design: the Devil 13 R9 390. The Devil 13 R9 390 is a dual-GPU graphics card featuring a tremendous amount of performance and a gigantic cooling system that spans three slots. The design of the card is such that it requires an array of 8-pin power connectors, and it has a minimum power supply requirement of 1000W.

The PowerColor Devil 13 R9 390 is a completely new graphics card, since it’s based on two Grenada Pro chips. The Grenada Pro, a revised SKU of the Hawaii Pro graphics chip, features lower power consumption and better thermal characteristics due to an improved ASIC design and efficient power management systems. The dual Grenada Pro chips on the Devil 13 R9 390 pack 2560 stream processors, 160 texture mapping units, and 64 raster operation units per chip. That amounts to 5120 stream processors, 320 texture mapping units, and 128 raster operation units on the entire board. The core is clocked at 1.00 GHz. The memory per chip is 8 GB GDDR5, making a total of 16 GB of onboard VRAM, running along a 512-bit x 2 bus interface and clocked at a 5.4 GHz effective memory clock (1350 MHz x 4). Each core has 345.6 GB/s of bandwidth, and the total bandwidth on the board is 691.2 GB/s.
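The effective-clock and bandwidth arithmetic above can be sanity-checked: GDDR5 moves four data words per memory clock, and bandwidth is the effective transfer rate times the bus width in bytes.

```python
def gddr5_effective_mhz(base_clock_mhz):
    """GDDR5 is quad-pumped: four transfers per memory clock."""
    return base_clock_mhz * 4

def bandwidth_gb_s(effective_mhz, bus_width_bits):
    """Bandwidth = transfer rate (MT/s) * bus width (bytes), in GB/s."""
    return effective_mhz * (bus_width_bits / 8) / 1000.0

eff = gddr5_effective_mhz(1350)     # 5400 -> the "5.4 GHz effective" figure
per_gpu = bandwidth_gb_s(eff, 512)  # 345.6 GB/s per Grenada Pro core
print(eff, per_gpu, per_gpu * 2)    # 5400 345.6 691.2
```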

In terms of power consumption, the board still requires a large amount of power to feed the dual Grenada graphics chips along with the power circuitry featured on the PCB. We are looking at no less than four 8-pin PCI-Express connectors powering the board, which puts the total wattage at 1275W, with the minimum power draw suggested around 1000W. So you are going to need a beefy power supply if you want one or two of these beasts running inside your rig. According to PowerColor, the PCB is a neat and clean design that leaves room for overclocking. Some words from PowerColor about the PCB are below:

PowerColor Devil 13 Dual Core R9 390 is built with carefully-designed Platinum Power Kit and ultra-efficient thermal design. It consists of massive 15-phase power delivery, PowerIRstage, Super Cap and Ferrite Core Choke that provides the stability and reliability for such high-end graphics solution. To support maximum performance and to qualify for the Devil 13 cooling system, 3 Double Blades Fans are attached on top of the enormous surface of aluminum fins heatsink connected with total of 10 pieces of heat pipes and 2 pieces of large die-cast panels. This superb cooling solution achieves a perfect balance between thermal solution and noise reduction. The PowerColor Devil 13 Dual Core R9 390 has the LED backlighting that glows a bright red color, pulsating slowly on the Devil 13 logo. via PowerColor


PowerColor features the same cooler it used on the Devil 13 R9 290X graphics card. Longer and much wider than the reference dual-chip card offerings, the Devil 13 R9 390 uses a triple-slot, PWM-controlled, triple-fan design with 100mm fans that circulate air through a gigantic, dense aluminum fin array fed by 10 heatpipes. The shroud is huge, with strokes of red and black across its surface, while a large Devil 13 backplate supports the card from behind and cools the additional power components. The backplate is a nice touch. The only odd thing about the Devil 13 R9 390 is that PowerColor went with the Grenada Pro GPU and not the XT. Grenada XT, which is based on the Hawaii XT design, was already featured on the Devil 13 R9 290X, so it makes little sense to use the Pro variant on the new card. Maybe we will see a Fiji XT based dual-GPU card from PowerColor in the future.

The Devil 13 R9 390 will be bundled with Razer's top-of-the-line Ouroboros ambidextrous mouse. The reference Radeon R9 390 graphics card retails for $329 US, so we might see the Devil 13 variant around $899-$999 US. OCUK is already taking pre-orders for the Devil 13 R9 390 X2 at £689.99, with an expected arrival date of the 18th of this month.

PowerColor Devil 13 R9 390 Specifications:

Graphics Card | AMD Radeon R9 Fury X2 | PowerColor Devil 13 R9 390 | AMD Radeon R9 295X2 | PowerColor Devil 13 R9 290X
GPU | Fiji XT | Grenada Pro | Hawaii XT | Hawaii XT
Stream Processors | 8192 SPs | 5120 SPs | 5632 SPs | 5632 SPs
TMUs/ROPs | 512/128 | 320/128 | 352/128 | 352/128
Core Clock | Up to 1000 MHz | 1000 MHz | 1018 MHz | 1000 MHz
Memory | 8 GB HBM (4 GB per core) | 16 GB GDDR5 (8 GB per core) | 8 GB GDDR5 (4 GB per core) | 8 GB GDDR5 (4 GB per core)
Memory Bus | 4096-bit x 2 | 512-bit x 2 | 512-bit x 2 | 512-bit x 2
Memory Clock | 1000 MHz | 1350 MHz | 1250 MHz | 1350 MHz
Memory Bandwidth | 1024 GB/s | 690 GB/s | 640 GB/s | 690 GB/s
TDP | ~375W | 500W+ | 500W+ | 500W+
Launch Price | TBD | $899 — $999 | $1499 | $1399

Radeon R9 390 [in 3 benchmarks]



Radeon R9 390


  • Interface PCIe 3.0 x16
  • Core clock speed 1000 MHz
  • Max video memory 8 GB
  • Memory type GDDR5
  • Memory clock speed 1500 MHz
  • Maximum resolution 4096×2160

Summary

AMD started Radeon R9 390 sales on 18 June 2015 at a recommended price of $329. This is a GCN 2.0 architecture desktop card based on a 28 nm manufacturing process and primarily aimed at gamers. 8 GB of GDDR5 memory clocked at 1500 MHz (6 GHz effective) is supplied, and together with the 512-bit memory interface this creates a bandwidth of 384 GB/s.

Compatibility-wise, this is a dual-slot card attached via a PCIe 3.0 x16 interface. The manufacturer's default version has a length of 275 mm. One 6-pin and one 8-pin power connector are required, and power consumption is 275 Watt.

It provides good gaming and benchmark performance at 30.80% of the leader’s, which is the NVIDIA GeForce RTX 3090 Ti.



General info


An overview of Radeon R9 390’s architecture, market segment and release date.

Place in performance rating 163
Value for money 8.16
Architecture GCN 2.0 (2013−2017)
GPU code name Grenada
Market segment Desktop
Design reference
Release date 18 June 2015
Launch price (MSRP) $329
Current price $380 (1.2x MSRP) of 49999 (A100 SXM4)

Value for money

To get the index we compare the characteristics of video cards and their relative prices.


Technical specs


Radeon R9 390’s general performance parameters such as number of shaders, GPU base clock, manufacturing process, texturing and calculation speed. These parameters indirectly speak of Radeon R9 390’s performance, but for precise assessment you have to consider its benchmark and gaming test results.

Pipelines / CUDA cores 2560 of 18432 (AD102)
Boost clock speed 1000 MHz of 2903 (Radeon Pro W6600)
Number of transistors 6,200 million of 14400 (GeForce GTX 1080 SLI Mobile)
Manufacturing process technology 28 nm of 4 (GeForce RTX 4080 Ti)
Thermal design power (TDP) 275 Watt of 900 (Tesla S2050)
Texture fill rate 160.0 of 939.8 (H200 SXM5)
Floating-point performance 5,120 gflops of 16384 (Radeon Pro Duo)
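The texture fill rate and floating-point throughput in this table follow directly from the unit counts and clock; the sketch below assumes GCN's usual 2 FLOPs (one fused multiply-add) per stream processor per clock and 1 texel per TMU per clock:

```python
# Illustrative: reproduce the R9 390's theoretical throughput figures.

def fp32_gflops(stream_processors: int, clock_mhz: float) -> float:
    """Peak FP32 GFLOPS, assuming 2 FLOPs (one FMA) per SP per clock."""
    return stream_processors * 2 * clock_mhz / 1000

def texture_fill_gtexels(tmus: int, clock_mhz: float) -> float:
    """Peak texture fill rate in GTexel/s, 1 texel per TMU per clock."""
    return tmus * clock_mhz / 1000

print(fp32_gflops(2560, 1000))          # 5120.0 GFLOPS
print(texture_fill_gtexels(160, 1000))  # 160.0 GTexel/s
```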

Compatibility, dimensions and requirements


Information on Radeon R9 390’s compatibility with other computer components. Useful when choosing a future computer configuration or upgrading an existing one. For desktop video cards these are the interface and bus (motherboard compatibility) and additional power connectors (power supply compatibility).

Bus support PCIe 3.0
Interface PCIe 3.0 x16
Length 275 mm
Width 2-slot
Supplementary power connectors 1 x 6-pin, 1 x 8-pin
Bridgeless CrossFire 1

Memory


Parameters of memory installed on Radeon R9 390: its type, size, bus, clock and resulting bandwidth. Note that GPUs integrated into processors don’t have dedicated memory and use a shared part of system RAM.

Memory type GDDR5
High bandwidth memory (HBM)
Maximum RAM amount 8 GB of 128 (Radeon Instinct MI250X)
Memory bus width 512 Bit of 8192 (Radeon Instinct MI250X)
Memory clock speed 1500 MHz of 21000 (GeForce RTX 3090 Ti)
Memory bandwidth 384 GB/s of 14400 (Radeon R7 M260)

Video outputs and ports


Types and number of video connectors present on Radeon R9 390. As a rule, this section is relevant only for desktop reference video cards, since for notebook ones the availability of certain video outputs depends on the laptop model.

Display Connectors 2x DVI, 1x HDMI, 1x DisplayPort
Eyefinity 1
Number of Eyefinity displays 6
HDMI +
DisplayPort support +

Technologies


Technological solutions and APIs supported by Radeon R9 390. You’ll probably need this information if you need some particular technology for your purposes.

AppAcceleration
CrossFire 1
Enduro
FreeSync 1
HD3D
PowerTune +
TrueAudio +
ZeroCore
VCE +
DDMA audio +

API support


APIs supported by Radeon R9 390, sometimes including their particular versions.

DirectX DirectX® 12
Shader Model 6.3
OpenGL 4.6
OpenCL 2.0
Vulkan +
Mantle +

Benchmark performance


Non-gaming benchmark performance of Radeon R9 390. Note that overall benchmark performance is measured in points in 0-100 range.


Overall score

This is our combined benchmark performance rating. We are regularly improving our combining algorithms, but if you find some perceived inconsistencies, feel free to speak up in comments section, we usually fix problems quickly.


R9 390
30.80

  • Passmark
  • 3DMark Fire Strike Graphics
  • Unigine Heaven 4.0
Passmark

This is probably the most ubiquitous benchmark, part of the Passmark PerformanceTest suite. It gives the graphics card a thorough evaluation under various loads, providing four separate benchmarks for Direct3D versions 9, 10, 11 and 12 (the last run in 4K resolution if possible), and a few more tests engaging DirectCompute capabilities.

Benchmark coverage: 26%


R9 390
9045

3DMark Fire Strike Graphics

Fire Strike is a DirectX 11 benchmark for gaming PCs. It features two separate tests displaying a fight between a humanoid and a fiery creature seemingly made of lava. Using 1920×1080 resolution, Fire Strike shows off some realistic enough graphics and is quite taxing on hardware.

Benchmark coverage: 14%


R9 390
12730

Unigine Heaven 4.0

This is an old DirectX 11 benchmark, a newer version of Unigine Heaven 3.0 with relatively small differences. It displays a fantasy medieval town sprawling over several flying islands. Despite its significant age (it was released back in 2013), the benchmark is still sometimes used.

Benchmark coverage: 1%


R9 390
1520


Mining hashrates


Cryptocurrency mining performance of Radeon R9 390. Usually measured in megahashes per second.






Bitcoin / BTC (SHA256) 593 Mh/s  
Decred / DCR (Decred) 0.95 Gh/s  
Ethereum / ETH (DaggerHashimoto) 29.26 Mh/s  
Monero / XMR (CryptoNight) 0.81 kh/s  
Zcash / ZEC (Equihash) 380.03 Sol/s  

Game benchmarks


Let’s see how good Radeon R9 390 is for gaming. Particular gaming benchmark results are measured in frames per second. Comparisons with game system requirements are included, but remember that sometimes official requirements may reflect reality inaccurately.

Average FPS
Popular games

Relative performance


Overall Radeon R9 390 performance compared to nearest competitors among desktop video cards.



NVIDIA GeForce GTX TITAN Z
101.56


NVIDIA GeForce GTX TITAN BLACK
101.01


NVIDIA GeForce GTX 1060 5 GB
100.26


AMD Radeon R9 390
100


AMD Radeon RX 580
98.64


AMD Radeon RX 5500
97.89


AMD Radeon RX 480
96.07

NVIDIA equivalent


We believe that the nearest equivalent to Radeon R9 390 from NVIDIA is GeForce GTX 1060 5 GB, which is nearly equal in speed and higher by 1 position in our rating.


GeForce GTX
1060 5 GB


Compare


Here are some closest NVIDIA rivals to Radeon R9 390:


NVIDIA GeForce GTX TITAN Z
101.56


NVIDIA GeForce GTX TITAN BLACK
101.01


NVIDIA GeForce GTX 1060 5 GB
100.26


AMD Radeon R9 390
100


NVIDIA GeForce GTX TITAN
90.16


NVIDIA GeForce GTX 780
88.99


NVIDIA T1000 8 GB
86.62

Similar GPUs

Here is our recommendation of several graphics cards that are more or less close in performance to the one reviewed.


GeForce GTX
1060 5 GB


Compare


GeForce GTX
TITAN BLACK


Compare


GeForce GTX
TITAN Z


Compare


Radeon R9
295X2


Compare


GeForce GTX
1060 3 GB


Compare


GeForce GTX
TITAN


Compare

Recommended processors

These processors are most commonly used with Radeon R9 390 according to our statistics.


Ryzen 5
3600

3.1%


Ryzen 5
2600

2.7%


FX
8350

2.6%


FX
6300

2.2%


FX
8320

2.2%


Core i5
6500

2.1%


Ryzen 5
1600

2%


Core i5
4460

2%


Core i5
6600K

1.6%


Core i3
10100F

1.6%


Radeon R9 390 8GB GDDR5 4M (2x DVI-D, DP, HDMI) – VisionTek.com


Obliterate the opposition.

With 8GB of GDDR5 memory, engine clock speeds up to 1000MHz, award-winning Graphics Core Next (GCN) architecture, and DVI-D/DVI-D/HDMI/DisplayPort outputs, the VisionTek Radeon R9 390™ Graphics Card is designed to outperform the competition. It enables you to take advantage of 1440p and up to 4K Ultra HD high resolution displays to play the most demanding games at maximum detail better than any card in its class. With support for the DirectX® 12 graphics standard, you can elevate your gaming experience with stunning 3D visual effects, realistic lighting, and lifelike imagery.

Featuring AMD’s TrueAudio technology, found only on AMD’s top-of-the-line cards, you’ll hear what you’ve been missing in startlingly realistic surround sound. Make the virtual world seem “real” with the LiquidVR-enabled VisionTek Radeon™ R9 390 Graphics Card. AMD LiquidVR™ technology capitalizes on GCN architecture to enable liquid-smooth visual performance, high frame rates and realistic head-to-headset motion response to match what you see on your VR head-mounted display (HMD) screen. Enjoy plug-and-play ease and broad compatibility for today’s evolving VR head-mounted displays and future-ready support for new VR technologies that will transform tomorrow’s gaming and entertainment.

Gaming shouldn’t be a choice between choppy gameplay and high performance. With VisionTek Radeon™ R9 390 series graphics and FreeSync™ technology, it doesn’t have to be. Transform the most demanding games into a liquid-smooth, artifact-free, 4K cinematic experience with the highest performance at virtually any frame rate. FreeSync™ works at the speed of your game for incredible responsiveness and uncompromising smoothness.

Packed with AMD technologies.

  • Advanced GDDR5 Memory Technology: The highest memory bandwidth available today enables higher GPU performance.
  • PCI Express 3.0: Get the maximum performance from your GPU when paired with the latest platforms.
  • DirectX® 12: Microsoft’s new technology enables great performance and dramatically improved GPU and CPU multiprocessing and multithreading performance — thanks to Async Shaders and Multi-threaded Command Buffer Recording — for more efficient rendering of richer and more complex scenes.
  • AMD Eyefinity Technology: Expand your territory and customize your field of vision. Connect up to four displays on a single GPU for dynamic, panoramic multi-screen gaming. You’ll get an expansive experience that’s truly out of sight.
  • 4K Ultra HD Support: Experience what you’ve been missing even at 1080P! With support for 3840 x 2160 output via the HDMI port and 4096 x 2160 via the DisplayPort, textures and other detail normally compressed for lower resolutions can now be shown at full definition.
  • AMD Crossfire™ Technology: Scale up to four GPUs with AMD CrossFire™ and amplify your system’s graphics processing capability.
  • 512-Bit Memory Bus: High-bandwidth bus interface delivers the performance needed for a better 4K experience; wider is better (compared to the competition’s 256-bit and 384-bit bus).
  • VSR (Virtual Super Resolution): Get quality that rivals 1440p, even on a 1080p display while playing your favorite games thanks to AMD’s VSR.
  • AMD LiquidVR™ Technology: AMD is making VR as comfortable as possible by lowering the motion-to-photo latency. Enhance gaming realism and maintain ultra-immersive VR presence. Enjoy liquid-smooth visual performance and ultra-high frame rates and cross over to the other side of realistic virtual environments and interaction.
  • AMD Freesync™ Technology: Maintain extreme frame rates while playing the most demanding games, without frame-tearing or video stuttering, using AMD’s open-standard dynamic refresh rate technology that automatically synchronizes your GPU output with AMD FreeSync™ technology-enabled DisplayPort monitors.

SPECIFICATIONS

Graphics Engine: Radeon R9 390
Video Memory: 8GB GDDR5
Memory Interface: 512-bit
DirectX® Support: 12
Bus Standard: PCI Express 3.0 x16
Core Speed: Up to 1000MHz
Memory Speed: 1500MHz (6 Gbps effective)
Number of Monitors Supported: Up to Four
Warranty: 2 Year Limited

SUPPORTED OUTPUTS

2 Dual link DVI-D
1 HDMI Connector (Supports Video and Audio)
1 DisplayPort Connector

SYSTEM REQUIREMENTS

  • PCI Express® based PC is required with one X16 lane graphics slot available on the motherboard
  • 750W (or greater) power supply with one 150W 8-pin PCI Express power connector and one 75W 6-pin PCI Express power connector recommended.
  • Minimum 8GB of system memory 16GB (or more) system memory recommended for AMD CrossFire™ technology
  • Installation software requires CD-ROM drive, a keyboard, a mouse, and a display
  • DVD playback requires DVD drive and a DVD
  • A display with digital input (HDMI, DisplayPort or DVI) is required
  • Blu-ray™ playback requires Blu-ray drive and a Blu-ray disc
  • Supported operating systems include Linux®, Windows® 10, Windows® 8.1 and Windows® 7
  • 64-bit operating system required
  • Not for industrial or commercial use.

 

Minimum recommended system power supply wattage is based on the specific graphics card and the typical power requirements of other system components. Your system may require more or less power. OEM and other pre-assembled PCs may have different power requirements.

Price: $359.99

Availability: Out of Stock

SKU: 900809

Review and testing of the SAPPHIRE NITRO R9 390 8G D5 video card | GECID.com


22-08-2015


SAPPHIRE Technology, widely known as one of the largest manufacturers of video cards and one of AMD's main partners, naturally could not stay away from the announcement of the new AMD Radeon R9 300 line of graphics accelerators, presenting its own versions.

One of these new products is the SAPPHIRE NITRO R9 390 8G D5, which compares favorably with the reference AMD Radeon R9 390 thanks to a factory overclock to 1010 MHz, a high-quality component base, and a number of other improvements that we will discuss in this review. As per tradition, we will start with a study of the newcomer's detailed specifications:

Model: SAPPHIRE NITRO R9 390 8G D5 (11244-00-20G)
Graphics core: AMD Radeon R9 390 (Grenada PRO)
Number of universal shader processors: 2560
Supported APIs and technologies: DirectX 12, OpenGL 4.5, AMD Mantle, AMD Eyefinity, AMD App Acceleration, AMD HD3D, AMD CrossFireX, AMD PowerPlay, AMD PowerTune, AMD ZeroCore, AMD TrueAudio, AMD Virtual Super Resolution, AMD Frame Rate Target Control, Vulkan, AMD FreeSync, OpenCL 2.0, AMD LiquidVR
Graphics core frequency: 1010 MHz
Memory frequency (effective): 1500 (6000) MHz
Memory size: 8 GB
Memory type: GDDR5
Memory bus width: 512-bit
Memory bandwidth: 384 GB/s
Bus type: PCI Express 3.0 x16
Maximum resolution: 4096×2160
Image output interfaces: 1 x DVI-D, 1 x HDMI, 3 x DisplayPort
Support for HDCP and HD video decoding: Yes
Minimum power supply unit: 750 W
Dimensions, official website (our test lab measurements), mm: 308 x 127 x 42.3 (315 x 125)
Drivers: the latest drivers can be downloaded from the SAPPHIRE website or the GPU manufacturer's website
Manufacturer website: SAPPHIRE Technology

As you can see, the factory overclock of the graphics core is only 10 MHz over the reference 1000 MHz, so you should not count on a significant performance increase compared to the reference versions. The key advantages of this novelty are its original cooling system and modified component base.

Packing and contents

The SAPPHIRE NITRO R9 390 8G D5 video card comes in a large box made of thick cardboard and decorated with original and stylish printing. In addition to a small viewing window, on the front side you can note the names of the manufacturer and device model, the amount and type of installed video memory, support for the proprietary Tri-X cooling system, and the presence of factory overclocking.

On the reverse side there is a small schematic image of the video card itself, as well as a list of its advantages:

  • Dual Ball Bearing Fans — fans with double ball bearings provide smoother running and a longer service life.
  • Copper Heatpipes — five copper heat pipes of different diameters are installed at the base of the heatsink for more efficient heat dissipation.
  • Copper / Aluminum heatsink — the heatsink itself is made of aluminum with a copper base.
  • Black Diamond Chokes — chokes that run 10% cooler and 25% more efficiently.
  • DVI-D, HDMI + 3 DP ports — the set of external interfaces includes a large number of different digital ports.

The list of system requirements for the computer where you plan to install the video card is located on one of the sides of the box. Based on the recommendations, the power supply in such a system should deliver at least 750 W and provide two 8-pin PCIe cables.

    Included with the SAPPHIRE NITRO R9 390 8G D5 graphics adapter is the standard documentation, the software CD, and the nice bonus of an HDMI cable.

    The following set of interfaces is used to display an image on the tested novelty:

    • 1 x DVI-D;
    • 1 x HDMI;
    • 3 x DisplayPort.

    Appearance and element base

    The SAPPHIRE NITRO R9 390 8G D5 model is built on an original black printed circuit board that follows the reference layout. The component base consists exclusively of high-quality parts: solid capacitors with extended service life, advanced IR3553 chips and efficient Black Diamond Chokes. This improves the stability and reliability of the graphics adapter as a whole and extends its service life.

    The novelty is powered by an enhanced 8-phase scheme: 6 phases feed the graphics core and 2 the video memory subsystem.

    The IR3567B chip is used as a digital PWM controller for the power subsystem.

    The graphics adapter under test is powered by a PCI Express x16 slot and two 8-pin PCIe connectors located on the side of the board. Note that the cooler does not make it difficult to disconnect the PCIe cables.

    The reverse side of the printed circuit board is almost completely devoid of significant components. We also note the complete absence of connectors for AMD CrossFireX bridges, since this technology is implemented at the software level. Also, do not forget that starting with the AMD Catalyst 15.7 WHQL driver, the new product can be paired with the AMD Radeon R9 290.

    GPU-Z 0.8.5 identifies the graphics processor as AMD Hawaii. It is manufactured on the 28nm process technology and consists of 2560 universal shader pipelines, 64 rasterization units and 160 texture units. The GPU frequency has been raised to 1010 MHz from the recommended 1000 MHz.

    The memory of the SAPPHIRE NITRO R9 390 8G D5, with a total capacity of 8 GB, is made up of 16 SK hynix H5GC4h34AJR-T2C chips of 4 Gb each, operating at the recommended effective frequency of 6000 MHz. Data exchange between the GPU and memory is carried out over a 512-bit bus capable of passing 384 GB of information per second.
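The capacity and bandwidth claims in this paragraph are easy to sanity-check; the sketch below only restates the article's numbers (16 chips of 4 Gb each, a 512-bit bus, 6000 MT/s effective):

```python
# Illustrative: verify the SAPPHIRE card's memory capacity and bandwidth.
chips = 16
gigabits_per_chip = 4                  # 4 Gb (gigabits) per SK hynix chip
total_gb = chips * gigabits_per_chip / 8          # bits -> bytes: 8 GB total

bus_bits = 512
effective_mts = 6000                   # 1500 MHz GDDR5, quad-pumped
bandwidth_gbs = bus_bits / 8 * effective_mts / 1000   # 384.0 GB/s

print(total_gb, bandwidth_gbs)
```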

    Cooling System

    The graphics card with the Tri-X cooling system installed occupies two expansion slots and has a total length of 308mm according to the official website (315mm as measured by our test lab).

    The cooler consists of a massive two-section heatsink, which uses 55 and 70 transverse aluminum fins, and three 86mm axial fans mounted on a plastic shroud.

    The fans themselves are manufactured by FirstDO and are marked «FDC10h22D9-C». Their nominal operating voltage is 12 V at a current of 0.35 A, which gives a power of 4.2 W per fan.
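The quoted fan power follows directly from P = V x I using the label ratings given above; a one-line check:

```python
# Illustrative: fan power from its nameplate ratings (P = V * I).
voltage_v = 12.0    # nominal rail voltage
current_a = 0.35    # rated current draw per fan
power_w = voltage_v * current_a   # about 4.2 W per fan
print(power_w)
```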

    Five copper heat pipes of different diameters are used to distribute heat evenly over the entire area of the heatsink: one 10 mm, two 8 mm and two 6 mm. Note that the heat pipes are not coated with a layer of nickel, a plating intended to prevent the gradual drop in cooling efficiency caused by oxidation of the metal.

    They are fixed without solder, held by a tight press fit between the heatsink fins.

    The main cooling system contacts the power elements of the power subsystem and the video memory chips directly through a thermal interface, which should have a positive effect on overclocking potential.

    With automatic fan control at maximum load, the graphics core heated up to 74°C, and the cooler, judging by the monitoring readings, worked at 50% of its maximum capacity. The noise was subjectively very quiet and barely perceptible.

    In maximum fan speed mode, the GPU temperature dropped to 64°C. At the same time, the noise level quite logically rose above average and became uncomfortable for prolonged work near the PC.

    When there is no load, the graphics core and memory frequencies are automatically reduced to reduce their power consumption and heat dissipation. In this mode, the GPU temperature did not exceed 44°C, since the fans stopped spinning altogether and the cooling system worked in a completely passive mode.

    In general, the cooler of the tested model performed very well, demonstrating high cooling efficiency and a very low noise level in everyday use.
2GeForce GTX 260GeForce GTS 250GeForce GTS 240GeForce GT 240GeForce GT 230GeForce GT 220GeForce 210Geforce 205GeForce GTS 150GeForce GT 130GeForce GT 120GeForce G100GeForce 9800 GTX+GeForce 9800 GTXGeForce 9800 GTSGeForce 9800 GTGeForce 9800 GX2GeForce 9600 GTGeForce 9600 GSO (G94)GeForce 9600 GSOGeForce 9500 GTGeForce 9500 GSGeForce 9400 GTGeForce 9400GeForce 9300GeForce 8800 ULTRAGeForce 8800 GTXGeForce 8800 GTS Rev2GeForce 8800 GTSGeForce 8800 GTGeForce 8800 GS 768MBGeForce 8800 GS 384MBGeForce 8600 GTSGeForce 8600 GTGeForce 8600 GSGeForce 8500 GT DDR3GeForce 8500 GT DDR2GeForce 8400 GSGeForce 8300GeForce 8200GeForce 8100GeForce 7950 GX2GeForce 7950 GTGeForce 7900 GTXGeForce 7900 GTOGeForce 7900 GTGeForce 7900 GSGeForce 7800 GTX 512MBGeForce 7800 GTXGeForce 7800 GTGeForce 7800 GS AGPGeForce 7800 GSGeForce 7600 GT Rev.2GeForce 7600 GTGeForce 7600 GS 256MBGeForce 7600 GS 512MBGeForce 7300 GT Ver2GeForce 7300 GTGeForce 7300 GSGeForce 7300 LEGeForce 7300 SEGeForce 7200 GSGeForce 7100 GS TC 128 (512)GeForce 6800 Ultra 512MBGeForce 6800 UltraGeForce 6800 GT 256MBGeForce 6800 GT 128MBGeForce 6800 GTOGeForce 6800 256MB PCI-EGeForce 6800 128MB PCI-EGeForce 6800 LE PCI-EGeForce 6800 256MB AGPGeForce 6800 128MB AGPGeForce 6800 LE AGPGeForce 6800 GS AGPGeForce 6800 GS PCI-EGeForce 6800 XTGeForce 6600 GT PCI-EGeForce 6600 GT AGPGeForce 6600 DDR2GeForce 6600 PCI-EGeForce 6600 AGPGeForce 6600 LEGeForce 6200 NV43VGeForce 6200GeForce 6200 NV43AGeForce 6500GeForce 6200 TC 64(256)GeForce 6200 TC 32(128)GeForce 6200 TC 16(128)GeForce PCX5950GeForce PCX 5900GeForce PCX 5750GeForce PCX 5550GeForce PCX 5300GeForce PCX 4300GeForce FX 5950 UltraGeForce FX 5900 UltraGeForce FX 5900GeForce FX 5900 ZTGeForce FX 5900 XTGeForce FX 5800 UltraGeForce FX 5800GeForce FX 5700 Ultra /DDR-3GeForce FX 5700 Ultra /DDR-2GeForce FX 5700GeForce FX 5700 LEGeForce FX 5600 Ultra (rev. 
2)GeForce FX 5600 Ultra (rev.1)GeForce FX 5600 XTGeForce FX 5600GeForce FX 5500GeForce FX 5200 UltraGeForce FX 5200GeForce FX 5200 SEGeForce 4 Ti 4800GeForce 4 Ti 4800-SEGeForce 4 Ti 4200-8xGeForce 4 Ti 4600GeForce 4 Ti 4400GeForce 4 Ti 4200GeForce 4 MX 4000GeForce 4 MX 440-8x / 480GeForce 4 MX 460GeForce 4 MX 440GeForce 4 MX 440-SEGeForce 4 MX 420GeForce 3 Ti500GeForce 3 Ti200GeForce 3GeForce 2 Ti VXGeForce 2 TitaniumGeForce 2 UltraGeForce 2 PROGeForce 2 GTSGeForce 2 MX 400GeForce 2 MX 200GeForce 2 MXGeForce 256 DDRGeForce 256Riva TNT 2 UltraRiva TNT 2 PRORiva TNT 2Riva TNT 2 M64Riva TNT 2 Vanta LTRiva TNT 2 VantaRiva TNTRiva 128 ZXRiva 128

    You can simultaneously select
    up to 10 video cards by holding Ctrl

    Reviews of video cards AMD Radeon R9 390:

    • Review and testing of ASUS STRIX-R9390-DC3OC-8GD5-GAMING video card based on Radeon R9 390

      ASUS STRIX-R9390-DC3OC-8GD5-GAMING

    Two video cards in one computer: how to connect and configure Nvidia or AMD cards, and why you would combine them

    Installing two video cards in one computer can increase system performance while saving money. However, the tradeoffs are not always clear-cut.

    Reasons for using two video cards

    Such a setup can serve different goals: increasing the raw power of the graphics subsystem, or distributing tasks between the GPUs.

    Game Graphics Enhancement

    Using two video cards together improves system performance and frame rate (fps), which makes the «picture» in games more detailed and fluid. The 3D rendering load is shared between the GPUs, and you can raise image settings up to 4K or «split» the output across several screens. The gain typically reaches 50-70%; in some games fps nearly doubles.

    Separate use

    Two GPUs can also be used separately for different tasks at the same time: play a game on one video card while the other encodes and streams the recorded video. The benefit is that the two GPUs work independently of each other, so neither the in-game graphics nor the stream suffers. With a single card, dropped fps in the game and stuttering in the stream would be inevitable.

    Video editing consumes a large share of computer performance, and rendering on a graphics card is much more efficient than rendering frames on the central processing unit (CPU). Dedicating a second GPU to video editing therefore shortens render times without loading the card that drives your display.

    With two monitors, each GPU can drive its own display: watch a movie on one while browsing the Internet on the other. The goals may differ, but the convenience of two monitors is obvious, although this particular setup can also be achieved with a single video adapter.

    Combination technologies and algorithms for their operation

    The two GPU vendors, Nvidia and AMD, follow similar principles when linking two video adapters, but there are differences.

    SLI by Nvidia

    The abbreviation «SLI» was first used by 3dfx in 1998 with the Voodoo2. It stood for Scan-Line Interleave: 3dfx combined two graphics chips to reach what was then a huge 1024 x 768 resolution, with each chip handling either the even or the odd scan lines, which were then merged into one image. But hardware flaws and the limitations of the AGP graphics port kept the technology from taking off, and in 2001 3dfx was bought by Nvidia.

    After PCI-Express appeared, it became possible to connect two or more video cards in one computer. Nvidia revived the project in 2004 under the same acronym but with a different principle of operation: SLI now stands for Scalable Link Interface. Under this technology, the cards cooperate to build each frame according to the following algorithms:

    1. Split Frame Rendering (SFR). A simple method: each frame is divided between the GPUs, each part is processed separately, and the parts are then combined, raising the total fps.
      The downside is that on a real «picture» its efficiency falls. One GPU may get a simple region with few details and effects and finish it quickly, while the other GPU's region is full of fine detail and takes longer to render. The first card then «idles» for a time, and the total fps is lower.
    2. Alternate Frame Rendering (AFR). To avoid that load-balancing problem, AFR renders even frames on one video processor and odd frames on the other.
      Frames still vary in complexity, so fps «sag» remains possible. The method also needs more video memory to hold the previous and next frames, and the delays between frames can be large, so this algorithm does not find much support among users.
    3. SLI Anti-Aliasing (SLI AA). This mode improves image clarity without changing the frame rate: the GPUs anti-alias frames in turn, producing a cleaner image than a single GPU, with smoothing factors up to 32x.
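    The division of labour in SFR and AFR can be sketched as toy frame-assignment functions. This is purely illustrative; `afr_gpu_for_frame` and `sfr_split` are made-up names for explanation, not any real driver API, and the equal 50/50 split is the naive version described above:

    ```python
    # Illustrative sketch of SLI work distribution; not a real driver API.

    def afr_gpu_for_frame(frame_index, num_gpus=2):
        """Alternate Frame Rendering: whole frames rotate across GPUs."""
        return frame_index % num_gpus

    def sfr_split(frame_height, num_gpus=2):
        """Split Frame Rendering: each frame is cut into horizontal bands,
        one per GPU (a naive equal split; real drivers rebalance the line)."""
        band = frame_height // num_gpus
        bounds = []
        for i in range(num_gpus):
            start = i * band
            stop = frame_height if i == num_gpus - 1 else (i + 1) * band
            bounds.append(range(start, stop))
        return bounds

    # AFR: even frames -> GPU 0, odd frames -> GPU 1
    assert [afr_gpu_for_frame(f) for f in range(4)] == [0, 1, 0, 1]

    # SFR: a 1080-line frame splits into two 540-line bands
    top, bottom = sfr_split(1080)
    assert (top.start, top.stop, bottom.start, bottom.stop) == (0, 540, 540, 1080)
    ```

    The SFR weakness described above is visible here: the split is by screen area, not by scene complexity, so one band can take far longer to render than the other.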

    AMD CrossFire

    CrossFire was developed by ATI. Its predecessor, a mode for combining two video processors introduced in 1999, was called MAXX; the technology was later relaunched as CrossFire, and after AMD acquired ATI it became known as AMD CrossFireX. AMD likewise has several algorithms, some similar to Nvidia's modes:

    1. Scissor (slicing). The algorithm is similar to Nvidia's SFR and has the same downsides.
    2. SuperTiling. Present only in CrossFire: it splits the frame into 32 x 32-pixel blocks in a checkerboard pattern. Each card then carries the same load, so there are no freezes or «waits». The drawback is that it demands closely matched GPUs, which the other algorithms (unlike the competitor's) do not require.
    3. AFR. Similar to Nvidia's, right down to the name.
    4. SuperAA (Super Anti-Aliasing). Similar to SLI Anti-Aliasing, but it uses SSAA anti-aliasing, while SLI uses MSAA. AMD's variant gives a slightly better image but loads the cards more. CrossFire also tops out at 14x, while SLI reaches 32x.
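    SuperTiling's checkerboard assignment is easy to sketch. `supertiling_gpu_for_pixel` is a hypothetical helper for illustration, not part of any AMD API:

    ```python
    # Illustrative sketch of CrossFire SuperTiling: the frame is divided into
    # 32x32-pixel tiles assigned to the two GPUs in a checkerboard pattern,
    # so both cards always carry a near-identical load. Not a real driver API.

    TILE = 32

    def supertiling_gpu_for_pixel(x, y):
        """Return which of the two GPUs renders the tile containing (x, y)."""
        tile_x, tile_y = x // TILE, y // TILE
        return (tile_x + tile_y) % 2  # checkerboard: neighbouring tiles alternate

    # Horizontally adjacent tiles go to different GPUs...
    assert supertiling_gpu_for_pixel(0, 0) != supertiling_gpu_for_pixel(32, 0)
    # ...and so do vertically adjacent ones.
    assert supertiling_gpu_for_pixel(0, 0) != supertiling_gpu_for_pixel(0, 32)
    ```

    Because detailed and empty regions of a frame tend to be larger than one tile, alternating small tiles evens out the load far better than SFR's big horizontal split.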

    Connection accessories

    For SLI or CrossFire to function fully and combine two graphics cards in one computer, the following requirements must be met:

    1. Suitable GPUs. The GPU frequencies should match; if they differ, the tandem runs at the lower speed and performance is lost accordingly. GPU memory is not cumulative, and the memory sizes must also match.
      For SLI the cards must be the same model, though they may come from different manufacturers: for example, a GTX 1080 from Asus paired with one from Palit, but not a 1080 with a 1050. CrossFire allows pairing within the same series, such as a 7870 with a 7850.
    2. Motherboard support. The board must support working with two GPUs; this is usually indicated on the board itself or in its documentation.
      It needs two PCI Express slots in an x16/x16 configuration, or at least one x16 and one x8; otherwise there is no benefit from pairing the cards. After the video processors are installed, there must be free space between them for air circulation.
    3. Power supply. If one card consumes 300 W, two cards need at least 600 W, preferably with some excess total capacity.
    4. A bridge to connect the two cards. Operation without a bridge is possible but reduces performance.
    5. A spacious case with strong ventilation. Two cards generate a lot of heat, so good cooling is essential.
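    The power-supply rule of thumb from point 3 can be written out as a small calculation. The 150 W allowance for the rest of the system and the 20% headroom are assumptions chosen for illustration, not a vendor recommendation:

    ```python
    # Rough PSU sizing for a dual-GPU build: sum the card wattages, add an
    # allowance for the rest of the system, then add headroom on top.
    # The 150 W rest-of-system figure and 20% margin are illustrative guesses.

    def recommended_psu_watts(card_watts, num_cards=2,
                              rest_of_system_watts=150, headroom=0.20):
        total = card_watts * num_cards + rest_of_system_watts
        return round(total * (1 + headroom))

    # Two 300 W cards plus ~150 W for CPU and drives suggests a ~900 W unit.
    print(recommended_psu_watts(300))  # -> 900
    ```

    Running two 275 W cards through the same formula suggests roughly an 840 W unit, which is why dual-GPU builds usually start from 850 W supplies.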

    How to connect two graphics cards to one computer

    Connecting two monitors

    This is the standard case; instead of one video controller you install two video cards at once. To do this:

    1. Turn off the computer, unplug it from the mains, and wait 20-30 seconds. Open the cover of the system unit.
    2. If an old video adapter must be removed, unscrew its retaining screws, release the latch on the side of the slot, and pull the GPU out.
    3. Insert the new video cards carefully, at a right angle to the slots, to avoid damaging the contacts. Install the GPU in the top slot first, then the bottom one; after installation, fix the cards with screws.
    4. Close the unit and turn on the computer. Once the system starts, install the video card drivers.
    5. Reboot the computer after the drivers are installed and connect the monitors to the cards. Done.

    Using two graphics cards simply to drive two monitors adds no performance to the system, so there will be no graphics improvement.

    Performance increase

    Here the two video cards are linked through SLI or CrossFire. To connect them, follow these steps:

    1. Power off the computer. Open the system unit and insert the GPUs into their slots, fastening them with screws for reliability.
    2. Connect the supplementary power cables to the cards.
    3. Link the cards with the bridge appropriate to the technology.

      SLI technology

      CrossFire technology

    4. Close the case cover and turn on the power. After Windows loads, update the drivers.
    5. For SLI, open the «NVIDIA Control Panel» and find the «Configure SLI» menu. Select «Maximize 3D performance» and, under «PhysX settings», «Auto-Select». Confirm the changes.

    For CrossFire, go to AMD Catalyst Control Center, and in the «Performance» menu check «Enable AMD CrossFireX» along with the sub-item below it.

    Restart the computer and the system should work. Remember that not all games support paired cards, so there may be no performance gain.

    Different GPUs on the same computer

    If video cards from the same manufacturer differ greatly from each other, or they come from different manufacturers (Nvidia and AMD), they cannot be merged into one rendering pool. But they can still split tasks: for example, play on one card while streaming or editing video on the other.

    With cards from one manufacturer, it is enough to install the latest drivers and the system will work. Each board then operates discretely and performs its assigned task.

    If the cards are from different companies, the motherboard must be able to run both simultaneously. Insert the AMD card first and install its drivers; then power off the computer and insert the Nvidia GPU. After turning the system back on, install the Nvidia drivers. Reboot, and the cards will work separately.

    When building a two-card system, weigh the purpose of the combination against the cost. It is often better value to buy one top-end GPU than to cascade two low-powered ones.
  • Why is AMD Radeon R9 390 better than Nvidia GeForce GTX 1060?

    • 2GB more VRAM?
      8GB vs 6GB
    • 39.5 GTexels/s higher texture rate?
      160 GTexels/s vs 120.5 GTexels/s
    • 191.8GB/s more memory bandwidth?
      384GB/s vs 192.2GB/s
    • 320bit wider memory bus?
      512bit vs 192bit
    • 1280 more stream processors?
      2560 vs 1280
    • 1800 million more transistors?
      6200 million vs 4400 million
    • 8°C lower GPU temperature at boot?
      66°C vs 74°C
  • Why is Nvidia GeForce GTX 1060 better than AMD Radeon R9 390?

    • 506MHz higher GPU frequency?
      1506MHz vs 1000MHz
    • 8.3 GPixel/s higher pixel rate?
      72.3 GPixel/s vs 64 GPixel/s
    • 155W lower TDP?
      120W vs 275W
    • 502MHz faster memory speed?
      2002MHz vs 1500MHz
    • 2008MHz higher effective clock speed?
      8008MHz vs 6000MHz
    • Supports ray tracing?
    • 12nm smaller semiconductors?
      16nm vs 28nm
    • 3139 higher PassMark (G3D) result?
      9795 vs 6656

    User ratings

    Overall: Nvidia GeForce GTX 1060 scores 7.3 /10 across 3 user reviews; AMD Radeon R9 390 has a single vote.

    Price-to-quality ratio: 10.0 /10 (1 vote) vs 7.3 /10 (3 votes)
    Games: n/a (1 vote) vs 7.0 /10 (3 votes)
    Performance: 10.0 /10 (1 vote) vs 7.0 /10 (3 votes)
    Fan noise: 10.0 /10 (1 vote) vs 7.0 /10 (3 votes)
    Reliability: 10.0 /10 (1 vote) vs n/a

    (Ratings listed as AMD Radeon R9 390 vs Nvidia GeForce GTX 1060.)

    Performance (AMD Radeon R9 390 vs Nvidia GeForce GTX 1060)

    1. GPU clock speed: 1000MHz vs 1506MHz
       The clock speed at which the graphics processing unit (GPU) runs; higher is faster.
    2. Turbo GPU clock: unknown vs 1708MHz
       When the GPU is running below its limits, it can jump to a higher clock speed to increase performance.
    3. Pixel rate: 64 GPixel/s vs 72.3 GPixel/s
       The number of pixels that can be rendered to the screen every second.
    4. FLOPS: 5.12 TFLOPS vs 3.85 TFLOPS
       FLOPS is a measure of GPU processing power.
    5. Texture rate: 160 GTexels/s vs 120.5 GTexels/s
       The number of textured pixels that can be rendered to the screen every second.
    6. GPU memory speed: 1500MHz vs 2002MHz
       Memory speed is one aspect that determines memory bandwidth.
    7. Shading units: 2560 vs 1280
       Shading units (or stream processors) are small processors within the video card that are responsible for processing different aspects of the image.
    8. Texture mapping units (TMUs)
       TMUs take texture data and map it onto the geometry of the 3D scene. More TMUs generally means texture information is processed faster.
    9. ROPs
       ROPs handle some of the final steps of the rendering process, such as writing the final pixel data to memory and performing tasks like anti-aliasing to improve the appearance of graphics.
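    As a sanity check, the FLOPS figures above follow directly from shader count and clock speed: each shader can issue one fused multiply-add (two floating-point operations) per cycle. `peak_tflops` is an illustrative helper, and the page's 3.85 figure for the GTX 1060 is the same value truncated rather than rounded:

    ```python
    # Peak FP32 throughput = 2 ops/cycle (fused multiply-add) x shaders x clock.

    def peak_tflops(shaders, clock_mhz):
        return 2 * shaders * clock_mhz * 1e6 / 1e12

    print(round(peak_tflops(2560, 1000), 2))  # R9 390 (base clock):   5.12
    print(round(peak_tflops(1280, 1506), 2))  # GTX 1060 (base clock): 3.86
    ```

    Note that FLOPS is a theoretical peak at the listed base clock; the GTX 1060's boost clock of 1708MHz raises its peak accordingly, and real-game throughput depends on much more than raw FLOPS.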

    Memory (AMD Radeon R9 390 vs Nvidia GeForce GTX 1060)

    1. Effective memory speed: 6000MHz vs 8008MHz
       The effective memory clock is calculated from the memory clock and the number of data transfers per cycle. A higher effective clock can give better performance in games and other applications.
    2. Max memory bandwidth: 384GB/s vs 192.2GB/s
       The maximum rate at which data can be read from or stored into memory.
    3. VRAM: 8GB vs 6GB
       VRAM (video RAM) is the dedicated memory of the graphics card. More VRAM usually allows running games at higher settings, especially for things like texture resolution.
    4. Memory bus width: 512bit vs 192bit
       A wider memory bus carries more data per cycle. This is an important factor in memory performance, and therefore in the overall performance of the graphics card.
    5. GDDR memory version
       Later versions of GDDR memory offer improvements such as higher data transfer rates, which improve performance.
    6. ECC memory support: ✖ AMD Radeon R9 390 · ✖ Nvidia GeForce GTX 1060
       Error-correcting code (ECC) memory can detect and fix data corruption. It is used where corruption cannot be tolerated, for example in scientific computing or on servers.
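    The effective-speed and bandwidth figures above are related by simple arithmetic: GDDR5 performs four data transfers per memory-clock cycle, and bandwidth is the effective rate times the bus width in bytes. A small sketch (helper names are illustrative):

    ```python
    # GDDR5 moves four data transfers per memory-clock cycle, so the
    # "effective" speed is 4x the memory clock; bandwidth then follows
    # from the bus width (bits -> bytes: divide by 8).

    def effective_mhz(memory_clock_mhz):
        return memory_clock_mhz * 4

    def bandwidth_gb_s(effective_mhz, bus_bits):
        return effective_mhz * 1e6 * bus_bits / 8 / 1e9

    assert effective_mhz(1500) == 6000   # R9 390
    assert effective_mhz(2002) == 8008   # GTX 1060

    print(bandwidth_gb_s(6000, 512))     # R9 390:   384.0 GB/s
    print(bandwidth_gb_s(8008, 192))     # GTX 1060: ~192.2 GB/s
    ```

    This is why the R9 390 doubles the GTX 1060's bandwidth despite its slower memory clock: its 512-bit bus is wide enough to outweigh the lower effective rate.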

    Functions (AMD Radeon R9 390 vs Nvidia GeForce GTX 1060)

    1. DirectX version
       DirectX is used in games; a newer version supports better graphics.
    2. OpenGL version
       A newer OpenGL version means better graphics quality in games.
    3. OpenCL version
       Some applications use OpenCL to harness the graphics processing unit (GPU) for non-graphical computing. Newer versions add functionality and performance.
    4. Multi-monitor support: ✔ AMD Radeon R9 390 · ✔ Nvidia GeForce GTX 1060
       The card can drive multiple screens at once. This allows a multi-monitor setup for a more immersive gaming experience, such as a wider field of view.
    5. GPU temperature at boot: 66°C vs 74°C
       A lower boot temperature means the card generates less heat and its cooling system performs better.
    6. Ray tracing support: ✖ AMD Radeon R9 390 · ✔ Nvidia GeForce GTX 1060
       Ray tracing is an advanced light-rendering technique that provides more realistic lighting, shadows and reflections in games.
    7. 3D support: ✔ AMD Radeon R9 390 · ✔ Nvidia GeForce GTX 1060
       Allows viewing in 3D (with a 3D screen and glasses).
    8. DLSS support: ✖ AMD Radeon R9 390 · ✖ Nvidia GeForce GTX 1060
       DLSS (Deep Learning Super Sampling) is an AI-based upscaling technology: the card renders games at a lower resolution and upscales them to a higher one with near-native visual quality and improved performance. DLSS is available only in some games.
    9. PassMark (G3D) result: 6656 vs 9795
       This benchmark measures the graphics performance of a video card. Source: PassMark.

    Ports

    1.