R9 290 vs RX 480: Game Performance Benchmarks (i7-3770K vs i7-6700K)

AMD Radeon R9 290 vs AMD Radeon RX 480: What is the difference?

AMD Radeon R9 290: 38 points
AMD Radeon RX 480: 45 points

R9 290 variants in this comparison: PowerColor OC, Sapphire Tri-X, VTX3D X-Edition, PowerColor PCS Plus, Sapphire Battlefield 4

54 facts in comparison

AMD Radeon R9 290

AMD Radeon RX 480

Why is AMD Radeon R9 290 better than AMD Radeon RX 480?

  • 5.9 GPixel/s higher pixel rate: 41.7 GPixel/s vs 35.8 GPixel/s
  • 32 GB/s more memory bandwidth: 288 GB/s vs 256 GB/s
  • 256 bit wider memory bus: 512 bit vs 256 bit
  • 256 more shading units: 2560 vs 2304
  • 500 million more transistors: 6200 million vs 5700 million
  • 16 more texture mapping units (TMUs): 160 vs 144
  • 32 more render output units (ROPs): 64 vs 32
  • 2 more DVI outputs: 2 vs 0

Why is AMD Radeon RX 480 better than AMD Radeon R9 290?

  • 458 MHz faster GPU clock speed: 1120 MHz vs 662 MHz
  • 0.31 TFLOPS higher floating-point performance: 5.16 TFLOPS vs 4.85 TFLOPS
  • 130 W lower TDP: 120 W vs 250 W
  • 875 MHz faster memory clock speed: 2000 MHz vs 1125 MHz
  • 3500 MHz higher effective memory clock speed: 8000 MHz vs 4500 MHz
  • 2x more VRAM: 8 GB vs 4 GB
  • 9.3 GTexels/s higher texture rate: 161.3 GTexels/s vs 152 GTexels/s
  • Newer version of DirectX: 12 vs 11.2

Which are the most popular comparisons?

AMD Radeon R9 290 vs AMD Radeon Vega 8
AMD Radeon RX 480 vs AMD Radeon RX 6500 XT
AMD Radeon R9 290 vs AMD Radeon R9 380
AMD Radeon RX 480 vs Nvidia GeForce GTX 1060
AMD Radeon R9 290 vs AMD Radeon RX 550
AMD Radeon RX 480 vs AMD Radeon RX 580
AMD Radeon R9 290 vs AMD Radeon RX 6500 XT
AMD Radeon RX 480 vs Nvidia GeForce GTX 1050
AMD Radeon R9 290 vs AMD Radeon R9 290X
AMD Radeon RX 480 vs MSI GeForce GTX 1050 Ti GAMING X
AMD Radeon R9 290 vs Nvidia GeForce GTX 1060
AMD Radeon RX 480 vs MSI GeForce GTX 1650 Gaming X
AMD Radeon R9 290 vs Nvidia GeForce GTX 1050
AMD Radeon RX 480 vs Nvidia GeForce RTX 2060
AMD Radeon R9 290 vs AMD Radeon RX 570
AMD Radeon RX 480 vs AMD Radeon RX Vega 8
AMD Radeon R9 290 vs AMD Radeon RX 580
AMD Radeon RX 480 vs Nvidia GeForce GTX 1660 Super
AMD Radeon R9 290 vs AMD Radeon R7 370
AMD Radeon RX 480 vs Gigabyte Radeon RX 550

User reviews

Overall Rating:

AMD Radeon R9 290: 10.0/10 (1 user review)
AMD Radeon RX 480: 0.0/10 (0 user reviews)

Features (ratings for the AMD Radeon R9 290, 1 vote each; no reviews yet for the AMD Radeon RX 480):

Value for money: 10.0/10
Gaming: 8.0/10
Performance: 10.0/10
Fan noise: 10.0/10
Reliability: 10.0/10

Performance

1.GPU clock speed

662MHz

1120MHz

The graphics processing unit (GPU) has a higher clock speed.

2.GPU turbo

947MHz

1266MHz

When the GPU is running below its limitations, it can boost to a higher clock speed in order to give increased performance.

3.pixel rate

41.7 GPixel/s

35.8 GPixel/s

The number of pixels that can be rendered to the screen every second.

4.floating-point performance

4.85 TFLOPS

5.16 TFLOPS

Floating-point performance is a measurement of the raw processing power of the GPU.

5.texture rate

152 GTexels/s

161.3 GTexels/s

The number of textured pixels that can be rendered to the screen every second.
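These three throughput figures follow from the core counts and clocks listed on this page via the standard back-of-the-envelope formulas: pixel rate = ROPs × clock, texture rate = TMUs × clock, and FP32 throughput = shading units × 2 (one fused multiply-add per clock) × clock. Here is a minimal Python sketch; note that the comparison mixes base and boost clocks, so some listed values (such as the 41.7 GPixel/s and 152 GTexels/s figures) don't reproduce exactly:

```python
# Rough reconstruction of the throughput specs from counts and clocks.
def pixel_rate_gpixels(rops, clock_ghz):
    return rops * clock_ghz                  # GPixel/s

def texture_rate_gtexels(tmus, clock_ghz):
    return tmus * clock_ghz                  # GTexel/s

def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000    # 2 ops per shader per clock

# AMD Radeon R9 290: 64 ROPs, 160 TMUs, 2560 shaders, 947 MHz boost
print(pixel_rate_gpixels(64, 0.947))     # ~60.6 GPixel/s
print(texture_rate_gtexels(160, 0.947))  # ~151.5 GTexel/s
print(fp32_tflops(2560, 0.947))          # ~4.85 TFLOPS

# AMD Radeon RX 480: 32 ROPs, 144 TMUs, 2304 shaders, 1120 MHz base
print(pixel_rate_gpixels(32, 1.12))      # ~35.8 GPixel/s
print(texture_rate_gtexels(144, 1.12))   # ~161.3 GTexel/s
print(fp32_tflops(2304, 1.12))           # ~5.16 TFLOPS
```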

6.GPU memory speed

1125MHz

2000MHz

The memory clock speed is one aspect that determines the memory bandwidth.

7.shading units

Shading units (or stream processors) are small processors within the graphics card that are responsible for processing different aspects of the image.

8.texture mapping units (TMUs)

TMUs take textures and map them to the geometry of a 3D scene. More TMUs will typically mean that texture information is processed faster.

9.render output units (ROPs)

The ROPs are responsible for some of the final steps of the rendering process, writing the final pixel data to memory and carrying out other tasks such as anti-aliasing to improve the look of graphics.

Memory

1.effective memory speed

4500MHz

8000MHz

The effective memory clock speed is calculated from the size and data rate of the memory. Higher clock speeds can give increased performance in games and other apps.

2.maximum memory bandwidth

288GB/s

256GB/s

This is the maximum rate that data can be read from or stored into memory.
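Both memory figures above can be reconstructed from the listed clocks and bus widths: GDDR5 transfers data four times per memory-clock cycle, so the effective speed is 4× the real clock, and bandwidth is the effective speed times the bus width in bytes. A minimal sketch using this page's values (the HWBench table further down lists 1250 MHz and 320 GB/s for the R9 290 instead):

```python
# GDDR5 is quad-pumped: effective clock = 4x the real memory clock.
def effective_clock_mhz(real_clock_mhz):
    return real_clock_mhz * 4

# Bandwidth = effective clock (transfers/s) x bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(effective_mhz, bus_width_bits):
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(effective_clock_mhz(1125))    # 4500 MHz effective (R9 290, as listed here)
print(bandwidth_gb_s(4500, 512))    # 288.0 GB/s
print(effective_clock_mhz(2000))    # 8000 MHz effective (RX 480)
print(bandwidth_gb_s(8000, 256))    # 256.0 GB/s
```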

3.VRAM

VRAM (video RAM) is the dedicated memory of a graphics card. More VRAM generally allows you to run games at higher settings, especially for things like texture resolution.

4.memory bus width

512bit

256bit

A wider bus width means that it can carry more data per cycle. It is an important factor of memory performance, and therefore the general performance of the graphics card.

5.version of GDDR memory

Newer versions of GDDR memory offer improvements such as higher transfer rates that give increased performance.

6.Supports ECC memory

✖AMD Radeon R9 290

✖AMD Radeon RX 480

Error-correcting code memory can detect and correct data corruption. It is used when it is essential to avoid corruption, such as in scientific computing or when running a server.

Features

1.DirectX version

DirectX is used in games, with newer versions supporting better graphics.

2.OpenGL version

OpenGL is used in games, with newer versions supporting better graphics.

3.OpenCL version

Some apps use OpenCL to apply the power of the graphics processing unit (GPU) for non-graphical computing. Newer versions introduce more functionality and better performance.

4.Supports multi-display technology

✔AMD Radeon R9 290

✔AMD Radeon RX 480

The graphics card supports multi-display technology. This allows you to configure multiple monitors in order to create a more immersive gaming experience, such as having a wider field of view.

5.load GPU temperature

Unknown (AMD Radeon RX 480)

A lower load temperature means that the card produces less heat and its cooling system performs better.

6.supports ray tracing

✖AMD Radeon R9 290

✖AMD Radeon RX 480

Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows, and reflections in games.

7.Supports 3D

✔AMD Radeon R9 290

✔AMD Radeon RX 480

Allows you to view in 3D (if you have a 3D display and glasses).

8.supports DLSS

✖AMD Radeon R9 290

✖AMD Radeon RX 480

DLSS (Deep Learning Super Sampling) is an upscaling technology powered by AI. It allows the graphics card to render games at a lower resolution and upscale them to a higher resolution with near-native visual quality and increased performance. DLSS is only available on select games.

9.PassMark (G3D) result

Unknown (AMD Radeon RX 480)

This benchmark measures the graphics performance of a video card. Source: PassMark.

Ports

1.has an HDMI output

✔AMD Radeon R9 290

✔AMD Radeon RX 480

Devices with an HDMI or mini HDMI port can transfer high-definition video and audio to a display.

2.HDMI ports

More HDMI ports mean that you can simultaneously connect numerous devices, such as video game consoles and set-top boxes.

3.HDMI version

Unknown (AMD Radeon R9 290)

HDMI 2.0

Newer versions of HDMI support higher bandwidth, which allows for higher resolutions and frame rates.

4.DisplayPort outputs

Allows you to connect to a display using DisplayPort.

5.DVI outputs

Allows you to connect to a display using DVI.

6.mini DisplayPort outputs

Allows you to connect to a display using mini-DisplayPort.

Radeon RX 480 vs Radeon R9 290
Relative performance:
  • Radeon RX 480: 119%
  • Radeon R9 290: 100%

Relative performance:
  • Radeon RX 480: 109%
  • Radeon R9 290: 100%

Reasons to consider Radeon RX 480:
  • 9% higher gaming performance.
  • 125 watts lower power draw. This might be a strong point if your current power supply is not enough to handle the Radeon R9 290.
  • This is a much newer product, so it might have better long-term support.
  • Supports Direct3D 12 Async Compute
  • Supports FreeSync
  • Supports ReLive (allows game streaming/recording with a minimal performance penalty)
  • Supports TrueAudio
  • Based on an outdated architecture (AMD GCN), so there may be no performance optimizations for current games and applications

Reasons to consider Radeon R9 290:
  • Supports Direct3D 12 Async Compute
  • Supports FreeSync
  • Supports ReLive (allows game streaming/recording with a minimal performance penalty)
  • Supports TrueAudio
  • Based on an outdated architecture (AMD GCN), so there may be no performance optimizations for current games and applications

HWBench recommends Radeon RX 480

The Radeon RX 480 is the better performing card based on the game benchmark suite used (29 combinations of games and resolutions).

Core Configuration (Radeon RX 480 vs Radeon R9 290)
GPU Name: Ellesmere (Ellesmere XT) vs Hawaii (Hawaii PRO)
Fab Process: 14 nm vs 28 nm
Die Size: 232 mm² vs 438 mm²
Transistors: 5,700 million vs 6,200 million
Shaders: 2304 vs 2560
Compute Units: 36 vs 40
Core clock: 1120 MHz vs 947 MHz
ROPs: 32 vs 64
TMUs: 144 vs 160

Memory Configuration (Radeon RX 480 vs Radeon R9 290)
Memory Type: GDDR5 vs GDDR5
Bus Width: 256 bit vs 512 bit
Memory Speed: 2000 MHz (8000 MHz effective) vs 1250 MHz (5000 MHz effective)
Memory Size: 8192 MB vs 4096 MB

Additional details (Radeon RX 480 vs Radeon R9 290)
TDP: 150 watts vs 275 watts
Release Date: 29 Jun 2016 vs 5 Nov 2013
Pixel fillrate (GigaPixels — higher is better):
  • Radeon RX 480: 40.50 GP/s
  • Radeon R9 290: 60.60 GP/s

Texture fillrate (GigaTexels — higher is better):
  • Radeon RX 480: 182.30 GT/s
  • Radeon R9 290: 152.00 GT/s

Memory bandwidth (GB/s — higher is better):
  • Radeon RX 480: 256.00 GB/s
  • Radeon R9 290: 320.00 GB/s

Floating-point performance (GFLOPs — higher is better):
  • Radeon RX 480: 5834.00 GFLOPs
  • Radeon R9 290: 4849.00 GFLOPs

Points (higher is better):
  • Radeon RX 480: 18430
  • Radeon R9 290: 15440

Game benchmark results — FPS, Radeon RX 480 vs Radeon R9 290; higher is better. The same suite of settings repeats across three increasingly demanding tiers (game titles were not captured in this listing):

Tier 1:
  • DX11, Ultra Quality, 4xAA: 60 vs 53
  • DX11, Ultra Quality, 4xMSAA, EP3 Gator Bait: 60 vs 59
  • OpenGL, Ultra Quality, SMAA 1tx: 94 vs 89
  • DX11, Ultra Details, Godrays, High shadows: 86 vs 66
  • DX11, Very High Settings: 66 vs 67
  • DX11, Max Details, 16:1 AF, 2xMSAA: 84 vs 83
  • DX12, Ultra Quality, MSAA, 16x AF: 75 vs 69
  • DX12, Very High Details, Pure Hair On, HBAO+: 70 vs 57
  • DX11, Max Details, 16:1 HQ-AF, +AA: 53 vs 45
  • DX11, Max Details, 16:1 AF: 59 vs 60

Tier 2:
  • DX11, Ultra Quality, 4xAA: 39 vs 38
  • DX11, Ultra Quality, 4xMSAA, EP3 Gator Bait: 45 vs 43
  • OpenGL, Ultra Quality, SMAA 1tx: 61 vs 57
  • DX11, Ultra Details, Godrays, High shadows: 56 vs 43
  • DX11, Very High Settings: 46 vs 48
  • DX11, Max Details, 16:1 AF, 2xMSAA: 59 vs 53
  • DX12, Ultra Quality, MSAA, 16x AF: 57 vs 55
  • DX12, Very High Details, Pure Hair On, HBAO+: 48 vs 40
  • DX11, Max Details, 16:1 HQ-AF, +AA: 41 vs 35
  • DX11, Max Details, 16:1 AF: 42 vs 41

Tier 3:
  • DX11, Ultra Quality, 4xAA: 23 vs 23
  • OpenGL, Ultra Quality, SMAA 1tx: 31 vs 28
  • DX11, Ultra Details, Godrays, High shadows: 26 vs 19
  • DX11, Very High Settings: 25 vs 26
  • DX11, Max Details, 16:1 AF, 2xMSAA: 30 vs 27
  • DX12, Ultra Quality, MSAA, 16x AF: 31 vs 28
  • DX12, Very High Details, Pure Hair On, HBAO+: 24 vs 22
  • DX11, Max Details, 16:1 HQ-AF, +AA: 25 vs 21
  • DX11, Max Details, 16:1 AF: 24 vs 24
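HWBench doesn't state exactly how the relative-performance figures near the top of this section are aggregated, but simply averaging the per-benchmark FPS ratios across the 29 results above lands close to the quoted number. A sketch, assuming a plain arithmetic mean of ratios:

```python
# FPS results from the 29 game/resolution combinations above, in order.
rx_480 = [60, 60, 94, 86, 66, 84, 75, 70, 53, 59,
          39, 45, 61, 56, 46, 59, 57, 48, 41, 42,
          23, 31, 26, 25, 30, 31, 24, 25, 24]
r9_290 = [53, 59, 89, 66, 67, 83, 69, 57, 45, 60,
          38, 43, 57, 43, 48, 53, 55, 40, 35, 41,
          23, 28, 19, 26, 27, 28, 22, 21, 24]

ratios = [a / b for a, b in zip(rx_480, r9_290)]
mean_ratio = sum(ratios) / len(ratios)
print(f"RX 480 relative to R9 290: {mean_ratio:.0%}")  # ~110%, near the quoted 109%
```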

More comparisons:
  • Radeon RX 480 vs GeForce GTX 1650 SUPER
  • Radeon RX 480 vs Radeon RX 580
  • Radeon R9 290 vs GeForce GTX 1650
  • Radeon R9 290 vs Radeon RX 570
  • Radeon RX 5500 vs GeForce GTX 1660
  • GeForce GTX 1660 vs Radeon R9 Nano


AMD Radeon R9 290 vs AMD Radeon RX 480

Comparison of the technical characteristics of the two graphics cards, with the AMD Radeon R9 290 on one side and the AMD Radeon RX 480 on the other. The first is aimed at the desktop sector; it has 2560 shading units, a maximum frequency of 0.9 GHz, and is built on a 28 nm lithography. The second is also a desktop part; it includes 2304 shading units, a maximum frequency of 1.3 GHz, and is built on a 14 nm lithography. The following table also compares the boost clock, the number of shading units (where indicated) and execution units, the amount of cache memory, the maximum memory capacity, the memory bus width, the release date, the number of PCIe lanes, and the scores obtained in various benchmarks.


Specifications:

Graphics card: AMD Radeon R9 290 vs AMD Radeon RX 480
Market (main): Desktop vs Desktop
Release date: Q4 2013 vs Q2 2016
Model number: 215-0852020 (Hawaii PRO) vs 215-0876184 (Polaris 10 XT)
GPU name: Hawaii vs Ellesmere
Architecture: GCN 2.0 vs GCN 4.0
Generation: Volcanic Islands (R9 200) vs Arctic Islands (RX 400)
Lithography: 28 nm vs 14 nm
Transistors: 6,200,000,000 vs 5,700,000,000
Bus interface: PCIe 3.0 x16 vs PCIe 3.0 x16
GPU base clock: 947 MHz vs 1.12 GHz
GPU boost clock: 947 MHz vs 1.27 GHz
Memory frequency: 1,250 MHz vs 2,000 MHz
Effective memory speed: 5 Gbps vs 8 Gbps
Memory size: 4 GB vs 8 GB
Memory type: GDDR5 vs GDDR5
Memory bus: 512 bit vs 256 bit
Memory bandwidth: 320.0 GB/s vs 256.0 GB/s
TDP: 275 W vs 150 W
Suggested PSU: 600 W ATX vs 450 W ATX
Multicard technology: (not specified)
Outputs: 2x DVI, 1x HDMI, 1x DisplayPort vs 1x HDMI, 3x DisplayPort
Cores (compute units, SM, SMX): 40 vs 36
Shading units: 2,560 vs 2,304
TMUs: 160 vs 144
ROPs: 64 vs 32
Cache memory: 1 MB vs 2 MB
Pixel fillrate: 60.6 GP/s vs 40.5 GP/s
Texture fillrate: 151.5 GT/s vs 182.3 GT/s
Performance FP32 (float): 4.8 TFLOPS vs 5.8 TFLOPS
Performance FP64 (double): 606.1 GFLOPS vs 364.6 GFLOPS

The table above highlights the technical differences between the two graphics cards.

Performances:

Performance comparison between the two graphics cards, based on the results generated by benchmark software such as Geekbench 4.

FP32 Performance in GFLOPS:

AMD Radeon RX 480: 5,834
AMD Radeon R9 290: 4,849

The difference is 20%.
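That 20% is simply the ratio of the two FP32 figures:

```python
rx_480_gflops = 5834
r9_290_gflops = 4849
print(f"{rx_480_gflops / r9_290_gflops - 1:.0%}")  # 20%
```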

These scores are only an average of the performance achieved with these graphics cards; you may get different results.

Single-precision floating-point format, also known as FP32, is a computer number format that typically occupies 32 bits in memory. It represents a wide dynamic range of numeric values using a floating point.

See also:

AMD Radeon R9 290X, AMD Radeon R9 290X2, AMD Radeon R9 295X2

Equivalence:

AMD Radeon R9 290 Nvidia equivalent, AMD Radeon RX 480 Nvidia equivalent


Radeon R9 290X vs Radeon RX Vega 6 Ryzen 4000: Graphics Card Comparison

In this comparison between Radeon R9 290X and Radeon RX Vega 6 Ryzen 4000 you will find out which graphics card performs better in today’s games. Bear in mind that third-party versions may have more efficient cooling and higher clock speeds. This will increase cards’ performance, though not by much. In addition to raw power you should also take into account the dimensions. Thicker models simply will not fit into a small mini-ITX case. The resolution of your monitor also affects the choice, since 4K gameplay requires a more powerful GPU. And don’t overspend on the graphics card. Other parts of your build may also need to be upgraded, save some money for the CPU or power supply. For some people Radeon R9 290X will be the best choice, for others Radeon RX Vega 6 Ryzen 4000 will be their preference. Study the comparison tables below and make your choice.


Radeon RX Vega 6 Ryzen 4000 is a Laptop Graphics Card

Note: Radeon RX Vega 6 Ryzen 4000 is only used in laptop graphics. It has lower GPU clock speed compared to the desktop variant, which results in lower power consumption, but also 10-30% lower gaming performance. Check available laptop models with Radeon RX Vega 6 Ryzen 4000 here:

Radeon RX Vega 6 Ryzen 4000 Laptops

Main Specs

Values below are listed for the Radeon R9 290X; the source gives none for the Radeon RX Vega 6 Ryzen 4000:

Power consumption (TDP): 250 Watt
Interface: PCIe 3.0 x16
Supplementary power connectors: 1 x 6-pin + 1 x 8-pin
Memory type: GDDR5
Maximum RAM amount: 4 GB
Display Connectors: 2x DVI, 1x HDMI, 1x DisplayPort

  • Radeon R9 290X is used in desktops, while Radeon RX Vega 6 Ryzen 4000 is used in laptops.
  • Radeon R9 290X is built on the GCN architecture, while Radeon RX Vega 6 Ryzen 4000 uses Vega.
  • Radeon R9 290X is manufactured on a 28 nm process, while Radeon RX Vega 6 Ryzen 4000 uses a 7 nm process.

Game benchmarks

Games tested: Assassin’s Creed Odyssey, Battlefield 5, Call of Duty: Warzone, Counter-Strike: Global Offensive, Cyberpunk 2077, Dota 2, Far Cry 5, Fortnite, Forza Horizon 4, Grand Theft Auto V, Metro Exodus, Minecraft, PLAYERUNKNOWN’S BATTLEGROUNDS, Red Dead Redemption 2, The Witcher 3: Wild Hunt, World of Tanks. FPS is shown as Radeon R9 290X vs Radeon RX Vega 6 Ryzen 4000; rows with a single figure have no second value listed in the source.

Assassin’s Creed Odyssey
high / 1080p: 35−40 vs 16−18
ultra / 1080p: 24−27 vs 10−11
QHD / 1440p: 18−20 vs 4−5
4K / 2160p: 10−11
low / 720p: 60−65 vs 35−40
medium / 1080p: 45−50 vs 21−24
The average gaming FPS of the Radeon R9 290X in Assassin’s Creed Odyssey is 111% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Battlefield 5
high / 1080p: 60−65 vs 27−30
ultra / 1080p: 50−55 vs 24−27
QHD / 1440p: 35−40 vs 7−8
4K / 2160p: 18−20 vs 6−7
low / 720p: 110−120 vs 60−65
medium / 1080p: 65−70 vs 30−35
The average gaming FPS of the Radeon R9 290X in Battlefield 5 is 118% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Call of Duty: Warzone
low / 768p: 50−55 vs 50−55
high / 1080p: 50−55
QHD / 1440p: 0−1 vs 0−1
Radeon R9 290X and Radeon RX Vega 6 Ryzen 4000 have the same average FPS in Call of Duty: Warzone.

Counter-Strike: Global Offensive
low / 768p: 250−260 vs 220−230
medium / 768p: 220−230 vs 190−200
ultra / 1080p: 190−200 vs 100−110
QHD / 1440p: 110−120 vs 65−70
4K / 2160p: 70−75 vs 35−40
high / 768p: 210−220 vs 150−160
The average gaming FPS of the Radeon R9 290X in Counter-Strike: Global Offensive is 37% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Cyberpunk 2077
low / 768p: 60−65 vs 60−65
ultra / 1080p: 45−50
medium / 1080p: 55−60 vs 55−60
Radeon R9 290X and Radeon RX Vega 6 Ryzen 4000 have the same average FPS in Cyberpunk 2077.

Dota 2
low / 768p: 120−130 vs 110−120
medium / 768p: 110−120 vs 100−110
ultra / 1080p: 100−110 vs 70−75
The average gaming FPS of the Radeon R9 290X in Dota 2 is 18% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Far Cry 5
high / 1080p: 45−50 vs 21−24
ultra / 1080p: 45−50 vs 18−20
QHD / 1440p: 30−35 vs 18−20
4K / 2160p: 16−18 vs 6−7
low / 720p: 85−90 vs 45−50
medium / 1080p: 50−55 vs 21−24
The average gaming FPS of the Radeon R9 290X in Far Cry 5 is 113% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Fortnite
high / 1080p: 65−70 vs 27−30
ultra / 1080p: 50−55 vs 21−24
QHD / 1440p: 30−35 vs 14−16
4K / 2160p: 27−30
low / 720p: 180−190 vs 110−120
medium / 1080p: 120−130 vs 60−65
The average gaming FPS of the Radeon R9 290X in Fortnite is 91% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Forza Horizon 4
high / 1080p: 65−70 vs 30−33
ultra / 1080p: 45−50 vs 21−24
QHD / 1440p: 35−40 vs 10−12
4K / 2160p: 24−27
low / 720p: 110−120 vs 60−65
medium / 1080p: 70−75 vs 30−35
The average gaming FPS of the Radeon R9 290X in Forza Horizon 4 is 112% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Grand Theft Auto V
low / 768p: 140−150 vs 95−100
medium / 768p: 130−140 vs 85−90
high / 1080p: 75−80 vs 35−40
ultra / 1080p: 30−35 vs 14−16
QHD / 1440p: 24−27 vs 4−5
The average gaming FPS of the Radeon R9 290X in Grand Theft Auto V is 72% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Metro Exodus
high / 1080p: 27−30 vs 10−12
ultra / 1080p: 21−24 vs 8−9
QHD / 1440p: 18−20
4K / 2160p: 9−10 vs 2−3
low / 720p: 70−75 vs 35−40
medium / 1080p: 35−40 vs 14−16
The average gaming FPS of the Radeon R9 290X in Metro Exodus is 142% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Minecraft
low / 768p: 130−140 vs 110−120
medium / 1080p: 120−130 vs 110−120
The average gaming FPS of the Radeon R9 290X in Minecraft is 13% higher than that of the Radeon RX Vega 6 Ryzen 4000.

PLAYERUNKNOWN’S BATTLEGROUNDS
ultra / 1080p: 14−16 vs 14−16
low / 720p: 100−110 vs 65−70
medium / 1080p: 18−20 vs 18−20
The average gaming FPS of the Radeon R9 290X in PLAYERUNKNOWN’S BATTLEGROUNDS is 39% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Red Dead Redemption 2
high / 1080p: 27−30 vs 14−16
ultra / 1080p: 18−20 vs 9−10
QHD / 1440p: 12−14 vs 0−1
4K / 2160p: 8−9
low / 720p: 70−75 vs 30−35
medium / 1080p: 35−40 vs 18−20
The average gaming FPS of the Radeon R9 290X in Red Dead Redemption 2 is 105% higher than that of the Radeon RX Vega 6 Ryzen 4000.

The Witcher 3: Wild Hunt
low / 768p: 140−150 vs 60−65
medium / 768p: 90−95 vs 40−45
high / 1080p: 50−55 vs 21−24
ultra / 1080p: 27−30 vs 12−14
4K / 2160p: 16−18 vs 7−8
The average gaming FPS of the Radeon R9 290X in The Witcher 3: Wild Hunt is 131% higher than that of the Radeon RX Vega 6 Ryzen 4000.

World of Tanks
low / 768p: 90−95 vs 90−95
medium / 768p: 60−65 vs 60−65
ultra / 1080p: 55−60 vs 35−40
high / 768p: 55−60
The average gaming FPS of the Radeon R9 290X in World of Tanks is 9% higher than that of the Radeon RX Vega 6 Ryzen 4000.

Full Specs

Radeon R9 290X vs Radeon RX Vega 6 Ryzen 4000 (rows with a single value list it for only one card in the source):

Architecture: GCN vs Vega
Code name: Hawaii XT vs Vega Renoir
Type: Desktop vs Laptop
Release date: 24 October 2013 vs 7 January 2020
Pipelines: 2816 vs 384
Boost Clock: 947 MHz vs 1500 MHz
Transistor count: 6,200 million
Manufacturing process technology: 28 nm vs 7 nm
Texture fill rate: 176.0
Floating-point performance: 5,632 GFLOPS
Length: 275 mm
Memory bus width: 512 Bit
Memory clock speed: 1250 MHz
Memory bandwidth: 320 GB/s
Shared memory: (not specified)
DirectX: 12_1
Shader Model: 6.3
OpenGL: 4.6
OpenCL: 2.0
Vulkan: +
Monero / XMR (CryptoNight): 0.76 kh/s
FreeSync: +
Bus support: PCIe 3.0
HDMI: +
Bitcoin / BTC (SHA256): 623 Mh/s
Eyefinity: +
HD3D: +
TrueAudio: +
Design: reference
DisplayPort support: +
CrossFire: +
DDMA audio: +
Decred / DCR (Decred): 1.07 Gh/s
Ethereum / ETH (DaggerHashimoto): 25.75 Mh/s
Zcash / ZEC (Equihash): 350 Sol/s
AppAcceleration: +
LiquidVR: +
TressFX: +
UVD: +
 


Similar compares

  • Radeon R9 290X vs Radeon Pro 580X
  • Radeon R9 290X vs GeForce GTX 980M
  • Radeon RX Vega 6 Ryzen 4000 vs Radeon Pro 580X
  • Radeon RX Vega 6 Ryzen 4000 vs GeForce GTX 980M
  • Radeon R9 290X vs Radeon RX 480 mobile
  • Radeon R9 290X vs GeForce GTX 590
  • Radeon RX Vega 6 Ryzen 4000 vs Radeon RX 480 mobile
  • Radeon RX Vega 6 Ryzen 4000 vs GeForce GTX 590

8 best AMD GPUs of all time

AMD’s Radeon graphics business has been sitting firmly in second place for a number of years and despite a recent resurgence in fortunes and performance, Nvidia retains a firm hold on graphics card mindshare and market dominance. But AMD hasn’t always been the underdog. In fact, there have been several instances over the years where it stole pole position and captured the hearts and minds of gamers with a truly unique GPU release.

Contents

  • Radeon 9700 Pro
  • Radeon HD 4870
  • Radeon HD 5970
  • Radeon HD 7970
  • Radeon R9 290X
  • Radeon RX 480
  • Radeon RX 5700 XT
  • Radeon RX 6800 XT
  • So what’s next?

Here are some of the best AMD (and ATI) graphics cards of all time, and what made them so special.

Radeon 9700 Pro

Radeon makes a name for itself

VGA Museum

The Radeon 9700 Pro is technically not an AMD card, but rather an ATI card. AMD only obtained GPU technology by acquiring ATI in 2006, and in doing so inherited ATI’s rivalry with Nvidia. But not talking about ATI would be a mistake, because although Nvidia largely dominated the GPU scene in the pre-AMD era, there was one moment where everything aligned in just the right way for ATI to make its mark.

Nvidia had established itself as the leading GPU manufacturer after it launched the famous GeForce 256 in 1999, and in the following years, ATI struggled to compete with its first generation Radeon GPUs: the 7000 series. The 8000 series shrunk the gap between Radeon and GeForce but didn’t close it. More than that, reviewers at the time felt the flagship Radeon 8500 came out way too soon and had very poor driver support.

However, in the background ATI was working on the R300 architecture that would power the 9000 series, and ended up creating a GPU larger than anything that came before it. At a die size of over 200 mm2 and with over 110 million transistors, at the time it was absolutely gargantuan. For comparison, Nvidia’s GeForce 4 GPU capped out at 63 million transistors.

When the Radeon 9700 Pro launched in 2002, it blew Nvidia’s flagship GeForce 4 Ti 4600 out of the water, with the 9700 Pro sometimes being over twice as fast. Even though the Ti 4600 was actually about $100 cheaper than the 9700 Pro, Anandtech recommended the Radeon GPU in its review. It was just that fast.

In fact, all across the stack the 9000 series did quite well, partly due to the fact that Nvidia’s newer GeForce FX series struggled to compete. The top-end 9800 XT’s high performance was well received (though it was criticized for being expensive) and the mid-range 9600 XT also received accolades from websites such as The Tech Report, which declared “it would almost be irresponsible for me to not give the Radeon 9600 XT our coveted Editor’s Choice award.” However, this success wasn’t to last very long.

Radeon HD 4870

Small but powerful

Amazon

After the 9000 series, ATI gradually lost more and more ground to Nvidia, especially after the legendary GeForce 8800 GTX launched in late 2006 to critical acclaim. In fact, the launch of the GeForce 8 series was described as “9700 Pro-like” by Anandtech, which not only cemented the 9000 series’s reputation but also demonstrated how far ahead Nvidia was with the 8 series.

By this time, AMD had acquired ATI, and using their combined forces, the two companies tried to work out a successful strategy for competing with Nvidia. Ever since the 9700 Pro, winning was always about launching the biggest GPU. A 200 mm2 die was big in 2002, but the top-end GeForce 8 series GPU in 2006 was almost 500 mm2, and even today that’s a pretty big GPU. The problem for AMD and ATI was that producing big GPUs was expensive, and there simply wasn’t enough money to fund another big GPU.

What AMD and ATI decided to do next was called the small die strategy. Instead of making really big GPUs and trying to win with raw power, AMD wanted to focus on high density, high performance GPUs with small die sizes (200-300 mm2) that were almost as fast as Nvidia’s flagship GPUs. This way, AMD could sell their GPUs at an extremely low price and, hopefully, roll right over Nvidia despite not having a halo product.

The first small die strategy GPU was the HD 4000 series, which launched in 2008 with the HD 4870 and 4850. Up against Nvidia’s GTX 200 series, AMD’s small die strategy was a big success. The HD 4870 was praised for beating the GTX 260 at $100 less, with Anandtech describing the GPU as “in a performance class beyond its price.” The HD 4870 was also nipping at the GTX 280’s heels, despite the 280 being over twice as large.

AMD hadn’t abandoned the high end, however, and wanted to leverage its multi-GPU technology, CrossFire, to make up for the lack of big Radeon GPUs. Though, not every reviewer believed this was a good strategy, with The Tech Report calling it “hogwash” at the time.

Ultimately, that quote was proved correct, as large, monolithic GPUs weren’t going anywhere any time soon.

Radeon HD 5970

The GPU that was too fast

Nevertheless, AMD was not deterred from its small-die strategy and continued on with the HD 5000 series. Nvidia was struggling to get its next-generation GPUs out the door, which meant the aging GTX 285 (a refreshed GTX 280) had to compete against the brand-new Radeon GPUs. Unsurprisingly, the HD 5870 trounced the 285, putting AMD back in the lead with a single GPU card for the first time since the 9800XT.

As multi-GPU setups were crucial for the small die strategy, AMD also launched the HD 5970, a graphics card with two GPUs running in CrossFire. The 5970 was so fast that multiple publications said it was too fast to actually matter, with Anandtech describing the phenomenon as “GPUs are outpacing games.” The Tech Report found the 5970 to be a niche product for this reason, but nevertheless called it the “obvious winner” and didn’t complain about CrossFire either.

For six whole months, AMD ruled the GPU market and had a massive lead against Nvidia’s 200 series in performance and efficiency. In early 2010, Nvidia finally launched its brand-new GTX 400 series based on the Fermi architecture, which at the top end especially, was power hungry, hot, and loud. It was barely any faster than the HD 5870, and well behind the HD 5970. Two 480s in SLI could beat the 5970, but at nearly double the power consumption such a GPU configuration was ludicrous. The 480 was so hot in testing that Anandtech was worried in regular usage a 480 could die prematurely.

The HD 5000 series was the high watermark for AMD when it came to discrete GPU market share, with AMD coming just short of overtaking Nvidia in 2010. However, in overall graphics (including stuff like integrated graphics and embedded graphics), AMD enjoyed higher market shares from 2011 to 2014. Although Nvidia had been beaten badly by HD 5000, it wouldn’t be too long before the tables were turned.

Radeon HD 7970

Breaking the GHz barrier

Amazon

Although Nvidia’s 400 series was quite bad, the company managed to improve upon it with the 500 series, which launched in late 2010. The newer and better GTX 580 was faster and more efficient than the GTX 480, and it crept up on the HD 5970. Around the same time, AMD also launched its next-generation HD 6000 GPUs, but the top end HD 6970 (which was a single GPU card, not dual GPU) didn’t blow away reviewers with either its performance or price.

To make matters worse, Nvidia would be moving to the newest 28nm process from TSMC with its next-generation cards, and this was a problem for AMD because the company had always been ahead when it came to nodes. In order to get the most out of the 28nm node, AMD retired the old Terascale architecture that had powered Radeon since HD 4000 and introduced the new Graphics Core Next (or GCN) architecture, which was designed for both gaming and regular computing. At the time AMD thought it could save money by using one design for both.

The HD 7000 series launched with the HD 7970 in early 2012, and it beat the GTX 580 pretty conclusively. However, it was more expensive than the HD 4000 and 5000 series. Anandtech noted that while AMD had made “great technological progress” in recent years, the one that actually made money was Nvidia, which was largely why AMD hadn’t priced the HD 7970 as aggressively as its older Radeon GPUs.

But the story doesn’t stop there. Just two months later, Nvidia launched its new 600 series, and it was very bad … for AMD. The GTX 680 beat the HD 7970 not just in performance, but also efficiency, which had been a key strength of Radeon GPUs ever since the HD 4000 series. To add insult to injury, the 680 was actually smaller than the 7970 at around 300 mm2 to the 7970’s 350 mm2.

All thanks to Nvidia using the same 28nm node the 7970 used.

That said, the 7970 wasn’t much slower than the 680, and since the 7970 was never going to be as efficient as the 680 anyways, AMD decided it would launch the 7970 again, but with much higher clock speeds, as the HD 7970 GHz Edition. It was the world’s first GPU that ran at 1 GHz out of the box, and it tied the score with the GTX 680. The 7970 wasn’t a 4870 or a 5970, but its back and forth battle with the GTX 680 was impressive at the time.

There was just one problem: it was hot and loud, “too loud” according to Anandtech. Nvidia had also launched a hot and loud GPU just a couple of years before, and fixed it by moving to the next node. AMD could just do the same thing, right?

Radeon R9 290X

Victory, but at what cost?

AMD

As it turns out, no, AMD could not just move to the next node, because almost every single foundry in the world ran into a brick wall around the 28nm mark, which has been recognized as the beginning of the end of Moore’s Law. While TSMC and other fabs continued making theoretically better nodes, AMD and Nvidia stuck with the 28nm node because these newer nodes weren’t really much better and weren’t suitable for GPUs. This was an existential problem for AMD, because the company had always been able to rely on moving to the newer node to remain ahead of Nvidia when it came to efficiency.

Still, AMD had some ways out. The HD 7970 was only around 350 mm2, and the company could always make a bigger GPU with more cores and a bigger memory bus. AMD could also improve GCN, but that was difficult because GCN was doing double duty as both a gaming and a compute architecture. Finally, AMD could always launch its next GPUs at lower prices.

Nvidia had already beaten AMD to the next generation in mid 2013 with its new GeForce 700 series, led by the GTX 780 and the GTX Titan, which were much faster (and more expensive) than the HD 7970 GHz Edition. But launching second wasn’t bad for AMD, since it gave it a chance to choose the right price for its upcoming 200 series, which launched in late 2013 with the R9 290X.

The 290X was almost the perfect GPU. It beat the 780 and the Titan in almost every game while being much cheaper at $549 to the 780’s $649 and the Titan’s $999. The 290X was a “price/performance monster.” It was the fastest GPU in the world. There was just one slight problem with the 290X, and it was the same problem the HD 7970 GHz Edition had: It was hot and loud.

A large part of the problem was that the R9 290X had reused the cooler on the reference HD 7970, and since the 290X used more power, the GPU ran at a higher temperature (up to a blazing 95 degrees C) and its single blower fan had to spin even faster. AMD had pushed the envelope just a bit too much, and it was basically a replication of Fermi and the GTX 480. Despite the greatness of the 290X, it was the first of many hot and loud AMD GPUs.

Radeon RX 480

A new hope

Bill Roberson/Digital Trends

When the RX 480 launched in mid 2016, it had been nearly three years since the 290X had claimed the performance crown. Those three years were some of the toughest for AMD as everything seemed to go wrong for the company. On the CPU side, AMD had delivered the infamously poor Bulldozer architecture, and on the GPU side AMD had launched the R9 390X in 2015, which was just a refreshed 290X. The Fury lineup wasn’t great either and couldn’t keep up with Nvidia’s GTX 900 series. It really looked like AMD might even go bankrupt.

Then there was hope. AMD restructured itself in 2015 and created the Radeon Technologies Group, led by veteran engineer Raja Koduri. RTG’s first product was the RX 480, a GPU based on the Polaris architecture which was aimed purely at the midrange, a throwback to the small die strategy. The 480 was no longer on the old TSMC 28nm process but GlobalFoundries’s 14nm process, which was a much-needed improvement.

At $200 for the 4GB model, the 480 was received very positively by reviewers. It not only beat the midrange GTX 960 (which to be fair was over a year old) but also previous generation AMD GPUs that had been way more expensive. It tied GPUs like the R9 290X, the R9 390X, and the GTX 970. It wasn’t power hungry either, thankfully. In our review, we simply said “AMD’s Radeon RX 480 is awesome.”

Unfortunately for the 480, the very same month Nvidia launched the brand-new GTX 1060, and for the first time in years Nvidia was on a superior node: TSMC’s 16nm. The GTX 1060 was quite a bit better than the 480 and started at $250, the same price as the 480 8GB. To make things worse, the RX 480 consumed quite a bit more power than the GTX 1060 and also launched with a bug that caused the 480 to draw too much power over the PCIe slot.

But surprisingly, that didn’t kill the 480 or its slower but much cheaper counterpart the RX 470. In fact, it went on to become one of AMD’s most popular GPUs of all time. There are many reasons why this happened but the primary ones are pricing and drivers. The RX 480 for pretty much all of its life sold at a very low price, first at the $200-250 range but into 2017 even the AIB models with 8GB of VRAM could be found for less than $200. The RX 470 was even cheaper, sometimes going for just over $100. The performance of these GPUs also gradually improved with better drivers and increasing adoption of DX12 and Vulkan; the so-called AMD “Fine Wine” effect.

AMD went on to refresh the 480 as the RX 580 and then the RX 590, which weren’t particularly well received. Nevertheless, the Polaris architecture that powered the RX 480 and other 400 and 500 GPUs certainly made its mark despite the odds, and re-established AMD as a relevant company for desktop GPUs.

Radeon RX 5700 XT

Good graphics, promising prospects

Although AMD had gained ground with the RX 400 series, those were only midrange GPUs; there was no RX 490 doing battle with the GTX 1080. AMD did challenge Nvidia with its RX Vega 56 and 64 cards in 2017, but those fell flat. RX Vega had mediocre value: the 64 model was only as fast as the 1080 and significantly slower than the 1080 Ti, and to top it all off, these GPUs were hot and loud. In early 2019, AMD tried again with the Radeon VII (which was based on data-center silicon), but it was a repeat of the original Vega GPUs: mediocre value, unimpressive performance, hot and loud.

However, Nvidia was also struggling because its new RTX 20 series wasn’t very impressive, particularly for the price. For example, the GTX 1080 was 33% faster than the GTX 980 and launched for only $50 more, whereas the RTX 2080 was just 11% faster than the GTX 1080 and launched for $200 more. Ray tracing and A.I. upscaling technology in just a handful of games simply weren’t worth the price at the time.

It was a good opportunity for AMD to counterattack with the RX 5000 series. Codenamed Navi, it was based on the new RDNA architecture and utilized TSMC’s 7nm node. Similar to the RX 480, the 5700 XT at $449 and 5700 at $379 weren’t supposed to be high end GPUs, but aimed just below at the upper midrange, specifically at Nvidia’s RTX 2060 and RTX 2070 GPUs. In our review, we would have found that the new 5000 series GPUs beat the 2060 and the 2070 just as AMD planned, had Nvidia not launched three brand-new GPUs on literally the same day the 5000 series came out. The new RTX 2060 Super and the RTX 2070 Super were faster and cheaper than the old models, and in our review the 5700 XT ended up in second place, albeit at a decent price.

But it wouldn’t be an AMD GPU without at least one scandal. Just days before the RX 5000 series launched, Nvidia announced the RTX Super GPUs, and the 2060 Super and the 2070 Super were priced very aggressively. In order to keep RX 5000 competitive, AMD cut the 5700 XT’s price to $399 and the 5700’s to $349, and pretty much everyone agreed this was the right move. And that should have been the end of it.

Except that wasn’t the end of it, because Radeon VP Scott Herkelman tried to claim this was some kind of mastermind chess move: the RX 5000 price cut had supposedly been planned from the beginning, so that Nvidia would be tempted into selling its Super GPUs at a low price and still end up with worse value anyway. Except, as ExtremeTech pointed out, AMD wouldn’t have cut prices if Nvidia hadn’t priced the Super GPUs the way it did. It’s more likely AMD cut prices because RX 5000 would have looked bad at the old prices.

Although it didn’t set the world on fire, the 5700 XT proved AMD had potential. It had good performance and was just about 250 mm2. By comparison, Nvidia’s flagship RTX 2080 Ti was three times as large and was only about 50% faster. If AMD could just make a bigger GPU, it could be the first Radeon card to beat Nvidia’s flagship since the R9 290X.

Radeon RX 6800 XT

Radeon returns to the high end

AMD

With the RX 5700 XT and the brand-new RDNA architecture, AMD found itself in a very good position. The company had made it to 7nm before Nvidia, and the new RDNA architecture despite its immaturity was much better than old GCN. The next thing to do was obvious: make a big, powerful gaming GPU. In early 2020, AMD announced RDNA 2, which would power Navi 2X GPUs, including the rumored “Big Navi” chip. RDNA 2 was 50% more efficient than the original RDNA architecture, which was not only impressive since RDNA 2 still used the 7nm node, but was also crucial for making a powerful, high-end GPU that wasn’t hot and loud.

2020 promised to be a great year for GPUs as both AMD and Nvidia would be launching their next-generation GPUs, and “Big Navi” was rumored to mark AMD’s return to the high end. As it turns out, 2020 was a terrible year in general, but at least there was still a GPU showdown to look forward to: the RTX 30 series versus the RX 6000 series.

Although the flagships for this generation were Nvidia’s RTX 3090 and AMD’s RX 6900 XT, at $1499 and $999 respectively, these GPUs weren’t super interesting to most gamers. The real fight was between the RTX 3080 and the RX 6800 XT, which had MSRPs of $699 and $649, respectively.

Two months after the RTX 3080 came out, the RX 6800 XT finally arrived in late 2020, and to everyone’s relief, it was a good GPU. The 6800 XT didn’t crush the 3080, but most reviewers such as Techspot found it was a little faster at 1080p and 1440p, and just a bit behind at 4K. At $50 less, the 6800 XT was the first good alternative to high-end Nvidia GPUs in years. Sure, it didn’t have DLSS and it wasn’t very good at ray tracing, but that wasn’t a dealbreaker for most gamers.

Unfortunately, as good as the RX 6800 XT was when launched only a year and a half ago, there was a new problem to contend with: You couldn’t buy one. The dreaded GPU shortage had totally upended the GPU market, and whether you wanted a 6800 XT or a 3080, it was basically impossible to find any GPU for a reasonable price. This not only put a serious damper on AMD’s return to the high end, but made buying any GPU very painful.

At the time of writing, the GPU shortage has mostly subsided, with AMD GPUs selling for about $50 more than MSRP rather than hundreds of dollars more, while Nvidia GPUs tend to sell for $100 more than MSRP. That makes not only the 6800 XT competitive, but pretty much the entire RX 6000 series.

That’s a solid win for AMD.

So what’s next?

As far as we can tell, AMD is not slowing down for even a moment. AMD promises its next-generation RDNA 3 architecture will deliver yet another 50% efficiency improvement, which is extremely impressive to see three generations in a row. RX 7000 GPUs based on RDNA 3 are slated to launch in late 2022, and AMD has confirmed the upcoming GPUs will make use of chiplets, the technology which allowed AMD to dominate desktop CPUs from 2019 to 2021.

It’s hard to say how much better RX 7000 will be over RX 6000, but if the claims are to be believed, it could be very impressive indeed. If AMD gives RX 7000 a good price, perhaps we’ll have to add it to this list in the months ahead.


AMD Radeon RX 480 review: the best $200 GPU you can buy today


Last week, AMD launched its largest midrange hardware update in years, codenamed Polaris. Polaris isn’t a brand-new architecture — that’s Vega, which arrives late this year — but it’s arguably a larger update than anything we’ve seen from AMD since the original GCN debuted in late 2011. GCN 1.1 (Bonaire, Hawaii) and 1.2 (Tonga, Fiji) both improved on the original microarchitecture and integrated additional heterogeneous compute capabilities, but both were fairly modest improvements in the grand scheme of things. Polaris aims to deliver larger improvements across the entire GPU stack, while keeping some of the features first introduced with AMD’s Fury family of products.

The RX 480 is AMD’s first 14nm FinFET GPU and it brings a number of improvements to the table. HDMI 2.0b and DisplayPort 1.3 and 1.4 are both supported, as are emerging features like High Dynamic Range (HDR) displays, FreeSync (via both DisplayPort and HDMI), and a new H.265 / HEVC decoder block with support for up to 1080p240, 1440p120, or 4K60 (that’s the resolution followed by the maximum frame rate). DVI users will need an adapter if they want to use that form factor — unlike many of AMD’s older parts in this price range, the RX 480 packs one HDMI 2.0b port and 3x DisplayPorts.

Polaris positioning

Normally, we dive into the architectural details of a new GPU design first and tackle the market positioning later. In the RX 480’s case, however, AMD has chosen to lead with a midrange product that targets mainstream enthusiasts rather than launching high-end hardware first, with midrange parts launching later. AMD’s entry and midrange products are definitely due for a refresh, but it’s important to put the RX 480 in perspective — at $199 and $239 for the 4GB and 8GB versions of the RX 480, AMD isn’t trying to overtake the likes of the R9 Fury X or GTX 980 Ti. Instead, the company’s goal was to create a GPU that would offer improved performance and significantly better power consumption for the majority of users.

Based on what we’ve seen so far, it’s succeeded, though AMD’s decision to launch into the mass-market first makes it a bit trickier to put the RX 480 in proper context. Nvidia’s two major competitors to the RX 480, at least for now, are the GTX 960 and 970, but neither is a clean match — the cheapest GTX 960s are well under the $200 mark, while the GTX 970 currently starts around $265. While there’s a GTX 1060 rumored to be launching in the very near future, Nvidia’s GTX 1080 and 1070 remain very thin on the ground — we’ll have to wait and see if Nvidia can ship a 1060 GPU in significant volume.

While we’ve included both the GTX 960 and 970 in this review, we’ve decided to evaluate the RX 480 alongside AMD’s previous GPUs in the same price bracket. Over the past six years, the company has launched a number of cards between $200 – $240 — if you own an R9 380, R7 270X, or even an HD 6870, is the RX 480 a worthy upgrade? How does it compare against the R9 390, AMD’s current 8GB Hawaii-derived GPU?

The Polaris architecture

The RX 480 packs 36 compute units (CUs) with 64 cores per CU and 2304 cores in total. There are 144 texture units and 32 ROPS in the full configuration, backed up by a 2MB L2 cache and a 256-bit GDDR5 memory bus. At a high level, the chip doesn’t look too terribly different from previous-generation GCN architectures, but there are significant improvements under the hood.
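Those totals follow directly from the compute-unit organization: GCN groups 64 stream processors per CU, and this chip’s 144 texture units work out to 4 per CU. A quick sanity check:

```python
compute_units = 36
print(compute_units * 64)  # 2304 stream processors
print(compute_units * 4)   # 144 texture units
```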

The RX 480 at a glance

One of the differences between AMD and Nvidia GPUs has been their ability to handle extremely high levels of scene geometry. Nvidia cards have typically outperformed their AMD counterparts in this regard, though the real-world usefulness of their capabilities has been questionable. The RX 480 introduces a new feature, primitive discard acceleration, which is designed to close the gap in small triangle performance.

By throwing out triangles earlier in the rendering process, the RX 480 can save bandwidth and reduce the performance penalty hit it takes with MSAA enabled. Overall primitive throughput should be higher with the RX 480 than what we saw with earlier cards, despite the fact that the RX 480 has the same maximum number of primitives per clock (4) as last year’s Fiji products.

Polaris’ second major improvement is its improved shader efficiency and instruction caching. Polaris is now capable of speculatively prefetching instructions, which should reduce pipeline stalls and boost total performance. Speculative prefetching has been used in CPUs for decades to generally good effect, though it’s important to balance the feature’s power consumption against its improved performance.

Finally, there’s Polaris’ improved support for delta color compression. AMD didn’t go into as much detail as Nvidia did during its Pascal discussion, but the company’s high-level data suggests significant improvements in overall bandwidth efficiency. By compressing color data AMD can squeeze more effective performance out of the same raw bandwidth (224GB/s on the 4GB RX 480, 256GB/s of bandwidth on the 8GB card).
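For reference, the raw bandwidth figures follow from the per-pin data rate and the 256-bit bus, and a compression multiplier can model the “effective” gain; the 1.2x ratio below is purely illustrative, since AMD didn’t publish an exact average for Polaris:

```python
# Raw bandwidth = per-pin rate (Gbps) x bus width (bits) / 8 bits per byte.
def raw_bandwidth_gb_s(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(raw_bandwidth_gb_s(7.0, 256))  # 224.0 GB/s (4GB RX 480, 7 Gbps GDDR5)
print(raw_bandwidth_gb_s(8.0, 256))  # 256.0 GB/s (8GB RX 480, 8 Gbps GDDR5)

compression_ratio = 1.2              # hypothetical average for color traffic
print(raw_bandwidth_gb_s(8.0, 256) * compression_ratio)  # ~307 GB/s effective
```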

Improved power consumption

Ever since AMD first unveiled Polaris at its Sonoma event last winter, the company has claimed that 14nm FinFET and new design elements would help it deliver as much as a 2.5x improvement in performance-per-watt. Some of that gain comes courtesy of FinFET technology and the smaller process node, but much of the rest comes courtesy of Carrizo, AMD’s first APU to implement Adaptive Voltage and Frequency Scaling (AVFS). We extensively covered AVFS when AMD adopted it last year.

Traditionally, GPU and CPU manufacturers have used a different method of controlling voltage and frequency, called Dynamic Voltage and Frequency Scaling (DVFS). DVFS works by adjusting a CPU’s voltage to match its frequency in stairstep fashion. The response curve is set by the OEM as part of the CPU family’s specification and is designed to ensure a significant margin of error is available at all times. As the slide above shows, when VDD drops, clock speed drops even farther to ensure stable operation.

AVFS is implemented by monitoring each individual die at specific points and calibrating its voltage and frequency targets on a per-chip level. While this requires an extensive sensor network, there are two significant payoffs. First, it allows AMD to reduce per-part performance variation — each chip should be capable of hitting closer to maximum theoretical performance. Second, it gives AMD the ability to reduce its margin of error and operate closer to an ideal frequency / voltage curve.
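The difference between the two schemes can be sketched in a few lines; the voltage/frequency staircase and the per-die calibration constant below are illustrative stand-ins, not AMD’s actual tables:

```python
# DVFS: one conservative, product-wide voltage/frequency staircase.
DVFS_STEPS = [(0.80, 600), (0.90, 800), (1.00, 1000), (1.10, 1120)]  # (V, MHz)

def dvfs_clock(vdd):
    """Pick the highest stairstep frequency whose voltage fits under vdd."""
    valid = [mhz for volts, mhz in DVFS_STEPS if volts <= vdd]
    return max(valid) if valid else 0

# AVFS: each die runs on its own measured voltage/frequency curve.
def avfs_clock(vdd, die_mhz_per_volt):
    return vdd * die_mhz_per_volt

print(dvfs_clock(1.08))        # 1000 MHz: a small dip in VDD drops a whole step
print(avfs_clock(1.08, 1050))  # ~1134 MHz on a die calibrated at 1050 MHz/V
```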

Polaris’ more efficient delta color compression, larger L2 caches, and a new, optimized multi-bit flip-flop (MBFF) approach also helped AMD cut total ASIC power consumption by 4-5%.

Asynchronous compute

The asynchronous compute capability in Polaris is fundamentally similar to what AMD introduced with Fiji. Fiji included a hardware scheduling block (HWS) that could be used to improve asynchronous compute workload efficiency. It includes a quick response queue for implementing asynchronous time warp in VR applications and the ability to reserve compute unit blocks for executing TrueAudio workloads.

Many of these capabilities were also included in Fiji, but weren’t fully enabled or exposed when the hardware shipped. AMD has updated its software drivers for Fury and Nano cards to expose these capabilities and included them in Polaris as well. In this respect, the RX 480 has all the asynchronous compute capabilities that AMD loaded into Fury X, but in a $200 GPU instead of a $600 card.

Test hardware and configuration

We tested the RX 480 using an Asus X99-Deluxe motherboard and an Intel Core i7-5960X CPU, 16GB of DDR4-2667, and an Antec 750W 80 Plus Gold power supply. The GeForce GTX 970 and 980 were tested using Nvidia driver 368.39, the RX 480 was tested with an unreleased Catalyst driver; the other AMD cards were tested with Catalyst 16.6.2 Hotfix.

There are a few additional things to be aware of. First, we tested the HD 6870 with AMD’s Catalyst 15.7 beta drivers. AMD dropped support for its pre-GCN hardware when it launched Radeon Software; the Catalyst 15.7 betas were what the company’s auto-detection application recommended for this GPU.

Second, we shifted from a 1200W Thermaltake PSU down to a 750W Antec 80 Plus Gold PSU for this review. Our power consumption benchmarks for all GPUs were rerun on this hardware, which is why the data in this review won’t match other coverage.

Third, while we’ve included Ashes of the Singularity benchmark data in this review, we need to note that performance in Ashes has changed markedly on both AMD and Nvidia cards since we last covered the title. Toggling the asynchronous compute feature off vs. on no longer has any impact on Maxwell GPUs from Nvidia. We’ve confirmed this with extensive testing, including falling back to older drivers we used in our February article.

Fourth, while we normally include both 1080p and 4K results, the RX 480 is not intended for 4K testing and does not perform well in that resolution (we checked). Given this, we’ve skipped 4K benchmarks and stuck to 1080p. We’ve also specified how much RAM was on each GPU in our lists of results.

Test results

BioShock Infinite is a DirectX 11 title from 2013. We tested the game in 1080p with maximum detail and the alternate depth-of-field method. While BioShock Infinite isn’t a particularly difficult lift for any mainstream graphics card, it’s a solid last-gen title based on the popular Unreal Engine 3.

Our first title shows the 8GB RX 480 competing well against the GeForce GTX 970 and the R9 390. While slightly edged out by both last-gen cards, the RX 480’s lower price tag more than compensates.

As far as whether the RX 480 provides a qualitatively different experience than previous $200 cards, it definitely outclasses both the HD 6870 and the R9 270X, both of which would see dips below 60 FPS in regular gameplay. The R9 380 and RX 480 perform identically as far as the naked eye, as do the GTX 960 and GTX 970.

Company of Heroes 2

Company of Heroes 2 is an RTS game that’s known for putting a hefty load on GPUs, particularly at the highest detail settings. Unlike most of the other games we tested, COH 2 doesn’t support multiple GPUs. We tested the game with all settings set to “High,” with V-Sync disabled.

COH 2 isn’t playable on the HD 6870 at these settings, and the R9 270X, GTX 960, and R9 380 aren’t great, either. The RX 480 is a hair faster than the GTX 970 and essentially ties with the R9 390 in terms of overall performance.

Metro Last Light Redux

Metro Last Light Redux is the remastered version of Metro Last Light with an updated texture model and additional lighting details. Metro Last Light Redux’s benchmark puts a fairly heavy load on the system and should be seen as a worst-case run for overall game performance. We test the game at Very High detail with SSAA enabled.

The RX 480 is 1.42x faster than AMD’s previous R9 380, and only a hair (4%) slower than the GTX 970. The R9 390 wins this test decisively, by 15%, easily the largest gap we’ve seen open up between the two cards to date. The RX 480’s improvement over its predecessors makes this the first $200 GPU from AMD that we’d say is realistically capable of running these detail levels. At the same time, our use of SSAA likely explains why the R9 390 pulls ahead to such a degree: supersampled antialiasing is a brute-force method of improving visual quality, and the R9 390 has more raw power at its disposal.

Total War: Rome 2

Total War: Rome II is the sequel to the earlier Total War: Rome title. It’s fairly demanding on modern cards, particularly at the highest detail levels. We tested at maximum detail levels, with SSAO and Vignette enabled.

Total War: Rome 2’s performance on the HD 6870 is an example of how frame rates don’t always capture everything about how a game performs on two different GPUs. While the HD 6870 appears to match the R9 270X (and believe me, we re-ran that test several times), the HD 6870’s frame delivery is much more erratic than on the later GPUs. That’s to be expected, given the HD 6870’s small frame buffer, but the overall frame rate was still surprising.

Rome 2 runs better, on the whole, on Team Green hardware. The gap between the RX 480 and the GTX 970 is fairly significant, though the R9 390 is itself within shooting distance of the GTX 970.

Shadow of Mordor

Shadow of Mordor is a third-person open-world game set between the events of The Hobbit and The Lord of the Rings. Think of it as Grand Theft Ringwraith, and you’re on the right track. We tested at maximum detail in 1080p with FXAA enabled (the only AA option available).

The RX 480 picks up a clear win here over the GTX 970, losing only to the older, higher-end Hawaii-based R9 390. Overall performance is excellent for Team Red.

Dragon Age: Inquisition

Dragon Age: Inquisition is one of the greatest role-playing games of all time, with a gorgeous Frostbite 3-based engine. While it supports Mantle, we’ve stuck with Direct3D in this title, as the D3D implementation has proven superior in previous testing.

While DAI does include an in-game benchmark, we’ve used a manual test run instead. The in-game test often runs more quickly than the actual title, and is a relatively simple test compared with how the game handles combat. Our test session focuses on the final evacuation of the town of Haven, and the multiple encounters that the Inquisitor faces as the party struggles to reach the chantry doors. We tested the game at maximum detail with 4x MSAA.

We were forced to omit the HD 6870 from this test, since that GPU isn’t really capable of actually benchmarking the title at our chosen detail levels. Here, Teams Red and Green are evenly matched — the R9 380 and GTX 960 tie, as do the GTX 970 and the RX 480. Even the R9 390 is only barely faster.

Ashes of the Singularity

Ashes of the Singularity is one of the first mainstream DirectX 12 titles. It’s an RTS game that’s designed to take full advantage of DX12 features like asynchronous compute and we’ve covered it since it launched in Early Access almost a year ago. We benchmarked the game in 1920×1080 with the Extreme detail preset.

The HD 6870 can’t run DX12, but the other cards perform fairly well. The R9 270X and GTX 960 aren’t fast enough to play at these resolutions and detail levels, but the R9 380 can still break 30 FPS at Extreme detail. The GTX 970 is significantly faster than the R9 380, but it’s not quicker than the RX 480, which outpaces it by 12%.

The R9 390, on the other hand, is faster still. This gap is due to differences in how the two GPUs handle asynchronous compute: while the RX 480 only picked up about 3% from enabling the feature, our tests showed that the R9 390 still gains 12% from using the capability. That’s enough to give the R9 390 the overall win in this particular test.

Power consumption and efficiency

The RX 480 has demonstrated that it can hang with the top dogs in its price band as far as overall performance — but what about power efficiency? This has long been the Achilles heel of the GCN family and AMD promised that we’d see dramatic improvements when RX 480 finally launched. Did the company deliver on its promise?

To find out, we measured power consumption at the wall while benchmarking Metro Last Light Redux, then averaged the values across the benchmark run to produce an average power consumption figure. Since raw power consumption alone isn’t all that useful, we also give data in terms of watts per frame — how many watts of power does it take to generate each frame of animation?
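To make the metric concrete, here’s a minimal Python sketch (with illustrative numbers, not our measured data) of how watts per frame falls out of the two averages:

    # Watts per frame = average wall power draw / average frame rate.
    # The figures below are illustrative, not measurements from this review.
    def watts_per_frame(avg_watts: float, avg_fps: float) -> float:
        return avg_watts / avg_fps

    # A hypothetical card drawing 250W at 60 FPS spends ~4.2W per frame,
    # while one drawing 180W at 55 FPS spends ~3.3W and is the more efficient part.
    print(round(watts_per_frame(250, 60), 2))  # 4.17
    print(round(watts_per_frame(180, 55), 2))  # 3.27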

For this specific test, we’ve also included data on the AMD R9 Nano. While the Nano wasn’t benchmarked for the rest of this review (its $500 price point means you could buy two RX 480s for the price of one Nano), it’s the most power-efficient GPU AMD has ever built, and it won accolades for giving AMD a 28nm GPU that finally closed to within spitting distance of Nvidia’s power efficiency last generation.

The RX 480 is an obvious improvement for AMD in absolute terms, given that it draws 150W less power than the R9 390, despite both of those GPUs having 8GB of RAM. The 2GB GTX 960 and R9 270X draw less absolute power, but the GTX 970 draws more, despite having just 4GB of RAM (3.5GB + 512MB). The R9 380 and even the R9 Nano also draw more power than AMD’s new 14nm chip.

Let’s see what happens when we factor in performance.

Power consumption and efficiency graphs don’t normally make for exciting reading, but I think this one is rather fascinating. First, it shows that the RX 480 has made huge improvements to GCN’s power efficiency — the RX 480 uses just 57% as much power as the R9 270X per frame of output. The R9 390 is the best comparison point as far as total RAM loadout, since both cards have 8GB, and the RX 480 still compares extremely well with that GPU.

Second, this data illustrates how lower-end GPUs aren’t always the most power efficient parts. The GTX 960 has the lowest absolute power draw, but it consumes significantly more power per frame than the GTX 970. Each chip has a sweet spot when it comes to balancing power consumption against overall performance, and the GTX 960 clearly isn’t as efficient when it comes to turning power into frame rate, even if its total power consumption is the lowest in our data.

Third, the RX 480 may be more efficient than any GDDR5-equipped GCN part, but it’s not the most efficient GPU AMD has ever shipped. That award still goes to the AMD Radeon R9 Nano, the last-generation $500 card with 4GB of HBM. If you think about it, however, this makes sense. Nano was never a performance play; it was a card designed to pack as much GPU as possible into an extremely small form factor. The 28nm chips AMD used for the Nano were the best of the Fury X parts, and the chip itself runs at lower frequencies and voltages. (Nano’s raw frame rate score of 61 FPS is part of the reason why its watts-per-frame rating is so low).

I kept the Nano comparison in this graph because it gives us the opportunity to evaluate some of the claims AMD made about power consumption and GDDR5 last year when it launched the HBM-equipped Fury family. The RX 480’s GPU is clearly more efficient than anything AMD has previously shipped, but using 8GB of 8Gbps GDDR5 clearly cost the company some overall power efficiency.

One thing this graph highlights extremely well is that GPU power efficiency really has to be compared between specific cards, not overall architectures. The GTX 960-equipped system uses 1.21x more power than the GTX 970 per frame of animation. The R9 270X uses 1.23x more power than the R9 380 and 1.91x more power than the R9 Nano, despite the fact that all of these cards are built on 28nm and based on the same GCN architecture with only modest differences between each generation.

The immediate takeaway from this data is that AMD’s RX 480 only manages to match the GTX 970, rather than surpass it, but I’m not sure that’s the right way to characterize the situation. The GTX 970 has half the RAM and clocks it significantly lower — an apples-to-apples comparison between the two ASICs would almost certainly open at least a modest lead for AMD on this front.

Conclusion: A great upgrade, for the right buyer

Ever since it launched GCN, AMD has struggled with the architecture’s power consumption. This wasn’t so much an issue against Nvidia’s Kepler, but Maxwell’s 28nm architecture demonstrated how superior power consumption could lead to superior GPUs. Cooler operating temperatures and lower power envelopes gave Nvidia more headroom to push its chip farther, while AMD struggled to match.

Fiji and Fury X were proof of this. The switch to High Bandwidth Memory (HBM) saved AMD enough power that it could build an enormous GPU without an unsustainably high TDP, while Nano demonstrated the flexibility of an HBM-equipped form factor — but both chips were still based on an architecture that drew a great deal of power.

The RX 480, in contrast, is much more efficient than any design AMD has previously shipped. Equally important, it dramatically improves overall efficiency without compromising on performance. The RX 480 beats the R9 380 in every single test, it wins or ties a majority of its GTX 970 match-ups, and it’s within spitting distance of the R9 390 in three of our seven benchmarks.

Whether or not the RX 480 is a worthy upgrade will, as always, depend on when you last upgraded and what features and games you care about. If you’re looking for a GPU that’ll handle 4K smoothly, both now and in the future, a $200 – $250 GPU isn’t going to cut it yet. It’s not yet clear if the 8GB version of the RX 480 will prove a better buy than the 4GB variant: while it’s true that games tend to use more VRAM than they used to, no game we’re aware of saturates a 4GB frame buffer at 1080p.

Assuming that the 4GB version of this card is only slightly slower than the 8GB version, the $200 price tag makes it a very nice upgrade from any previous AMD card, particularly the R9 270X or HD 6870. If you’ve held off the last few cycles because you wanted to lock in something significant, you don’t need to worry about waiting any longer. The RX 480 also competes extremely well against the GTX 970, since that card only offers a 3.5GB effective frame buffer. Between a last-gen Nvidia GTX 970 or a 4GB variant of the GTX 960 and the RX 480, we’ll take the RX 480 every time.

The really hard call here is how to rate the R9 390. This GPU starts at $279 ($259 after rebate) on Newegg, packs the same 8GB of RAM as the RX 480, and offers equal or higher performance. Power efficiency, however, isn’t all that good: the 150W difference in power draw, across eight hours of gaming per day at 12 cents per kWh, works out to roughly $51 a year in electricity. If you upgrade every two years, that’s about $4.32 a month for using the R9 390 instead of the RX 480.
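The arithmetic behind that estimate, as a quick sketch (the monthly figure appears to assume a 360-day year; a full 365 days lands slightly higher):

    # Extra cost of a card that draws 150W more, gaming 8 hours/day at $0.12/kWh.
    EXTRA_KW = 0.150
    HOURS_PER_DAY = 8
    RATE_PER_KWH = 0.12

    annual = EXTRA_KW * HOURS_PER_DAY * 365 * RATE_PER_KWH
    print(round(annual, 2))       # 52.56 -> "roughly $51 a year"
    print(round(annual / 12, 2))  # 4.38  -> about $4.32-$4.38 per month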

If you already own an R9 290 / 290X or anything from AMD’s Fury family, the RX 480 isn’t a GPU you’ll want to upgrade to. Those higher-end cards won’t be replaced until later this year, when AMD launches its top-end Vega architecture. What AMD has done, however, is demonstrate that it can build an extremely capable, power-efficient chip that meaningfully improves its position in the market.

On the whole, AMD’s RX 480 delivers what the company promised: a significant leap forward for its GCN products and a top-notch card in the $200 to $250 range.


7. Shading units

Shading units (or stream processors) are small processors within the graphics card that are responsible for processing different aspects of the image.

8. Texture mapping units (TMUs)

TMUs take texture elements and map them onto the geometric layout of the 3D scene. More TMUs generally mean that texture information is processed faster.

9. Render output units (ROPs)

ROPs are responsible for some of the final steps of the rendering process, such as writing the final pixel data to memory and performing other tasks like anti-aliasing to improve the appearance of graphics.

Memory

1. Effective memory speed

4500MHz

8000MHz

The effective memory clock speed is calculated from the memory’s real clock and the number of data transfers it makes per cycle. A higher effective clock speed can give better performance in games and other applications.
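As a quick sanity check, a minimal Python sketch reproducing both cards’ effective speeds from their real memory clocks (GDDR5 transfers data four times per clock):

    # GDDR5 is quad-pumped: effective clock = real memory clock x 4.
    R9_290_CLOCK_MHZ = 1125
    RX_480_CLOCK_MHZ = 2000
    TRANSFERS_PER_CLOCK = 4  # GDDR5

    print(R9_290_CLOCK_MHZ * TRANSFERS_PER_CLOCK)  # 4500 (MHz effective)
    print(RX_480_CLOCK_MHZ * TRANSFERS_PER_CLOCK)  # 8000 (MHz effective)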

2. Maximum memory bandwidth

    288GB/s

    256GB/s

    This is the maximum rate at which data can be read from or stored in memory.
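Both cards’ bandwidth figures follow directly from bus width and effective memory speed; a minimal sketch using the numbers above:

    # Bandwidth (GB/s) = (bus width in bits / 8) x effective rate in GT/s.
    def bandwidth_gbs(bus_bits: int, effective_mhz: int) -> float:
        return (bus_bits / 8) * (effective_mhz / 1000)

    print(bandwidth_gbs(512, 4500))  # 288.0 GB/s -> R9 290
    print(bandwidth_gbs(256, 8000))  # 256.0 GB/s -> RX 480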

3. VRAM

VRAM (video RAM) is the graphics card’s dedicated memory. More VRAM usually lets you run games at higher settings, especially for things like texture resolution.

4. Memory bus width

512bit

256bit

A wider memory bus can carry more data per cycle. This is an important factor in memory performance, and therefore in the overall performance of the graphics card.

5. GDDR memory version

Later versions of GDDR memory offer improvements such as higher data transfer rates, which improve performance.

6. Error-correcting code (ECC) memory support

✖AMD Radeon R9 290

✖AMD Radeon RX 480

Error-correcting code memory can detect and fix data corruption. It is used where data corruption cannot be tolerated, for example in scientific computing or when running a server.

Features

1. DirectX version

DirectX is used in games; a newer version means better graphics support.

2. OpenGL version

A newer version of OpenGL means better graphics quality in games.

3. OpenCL version

Some applications use OpenCL to harness the power of the graphics processing unit (GPU) for non-graphics computing. Newer versions are more capable and offer better performance.

    4. Supports multi-monitor technology

    ✔AMD Radeon R9 290

    ✔AMD Radeon RX 480

    The video card has the ability to connect multiple displays. This allows you to set up multiple monitors at the same time to create a more immersive gaming experience, such as a wider field of view.

5. GPU temperature under load

Unknown (AMD Radeon RX 480)

A lower load temperature means the card generates less heat and its cooling system performs better.

6. Supports ray tracing

    ✖AMD Radeon R9 290

    ✖AMD Radeon RX 480

    Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows and reflections in games.

    7. Supports 3D

    ✔AMD Radeon R9 290

    ✔AMD Radeon RX 480

    Allows you to view in 3D (if you have a 3D screen and glasses).

8. Supports DLSS

✖AMD Radeon R9 290

✖AMD Radeon RX 480

DLSS (Deep Learning Super Sampling) is an AI-based upscaling technology. It lets the graphics card render games at a lower resolution and upscale them to a higher resolution with near-native visual quality and improved performance. DLSS is only available in some games.

9. PassMark result (G3D)

Unknown (AMD Radeon RX 480)

This test measures the graphics performance of a graphics card. Source: PassMark.

    Ports

1. Has HDMI output

    ✔AMD Radeon R9 290

    ✔AMD Radeon RX 480

    Devices with HDMI or mini-HDMI ports can stream HD video and audio to an attached display.

2. HDMI connectors

    More HDMI connectors allow you to connect multiple devices at the same time, such as game consoles and TVs.

3. HDMI version

Unknown (AMD Radeon R9 290)

HDMI 2.0 (AMD Radeon RX 480)

    New versions of HDMI support higher bandwidth, resulting in higher resolutions and frame rates.

    4. DisplayPort outputs

    Allows connection to a display using DisplayPort.

5. DVI outputs

    Allows connection to a display using DVI.

6. Mini DisplayPort outputs

    Allows you to connect to a display using Mini DisplayPort.


Which graphics card is better?

    Radeon RX 480 vs Radeon R9 290 comparison. Which is better?

Radeon RX 480: June 2016 | 1.1GHz | 8GB GDDR5 | Edelmark rating: 8.0

Radeon R9 290: November 2013 | 947MHz | 4GB GDDR5 | Edelmark rating: 6.9

    General comparison

    Game performance

Tested with: Battlefield 3, Battlefield 4, BioShock Infinite, Crysis 2, Crysis 3, Dirt 3, Far Cry 3, Hitman: Absolution, Metro: Last Light, Thief, Alien: Isolation, Anno 2070, Counter-Strike: Global Offensive, Diablo III, Dirt Rally, Dragon Age: Inquisition, The Elder Scrolls V: Skyrim, FIFA 15, FIFA 16, GRID Autosport, Grand Theft Auto V, Sleeping Dogs, Tomb Raider, The Witcher 3: Wild Hunt.

    Radeon RX 480 7.7 out of 10
    Radeon R9 290 7.3 out of 10
    GeForce GTX 1060 n/a

Graphics

    Tests used: T-Rex, Manhattan, Cloud Gate Factor, Sky Diver Factor, Fire Strike Factor.

    Radeon RX 480 6.5 out of 10
    Radeon R9 290 6.5 out of 10
    GeForce GTX 1060 5.5 out of 10

    Computing power

    Tested on: Face Detection, Ocean Surface Simulation, Particle Simulation, Video Composition, Bitcoin Mining.

    Radeon RX 480 8.6 out of 10
    Radeon R9 290 7.9 out of 10
    GeForce GTX 1060 8.6 out of 10

Performance per watt

Video card tests performed with: Battlefield 3, Battlefield 4, BioShock Infinite, Crysis 2, Crysis 3, Dirt 3, Far Cry 3, Hitman: Absolution, Metro: Last Light, Thief, Alien: Isolation, Anno 2070, Counter-Strike: Global Offensive, Diablo III, Dirt Rally, Dragon Age: Inquisition, The Elder Scrolls V: Skyrim, FIFA 15, FIFA 16, GRID Autosport, Grand Theft Auto V, Sleeping Dogs, Tomb Raider, The Witcher 3: Wild Hunt, T-Rex, Manhattan, Cloud Gate Factor, Sky Diver Factor, Fire Strike Factor, Face Detection, Ocean Surface Simulation, Particle Simulation, Video Composition, Bitcoin Mining, TDP.

    Radeon RX 480 8.8 out of 10
    Radeon R9 290 7.1 out of 10
GeForce GTX 1060 8.6 out of 10

    Price-Performance

Tested with: Battlefield 3, Battlefield 4, BioShock Infinite, Crysis 2, Crysis 3, Dirt 3, Far Cry 3, Hitman: Absolution, Metro: Last Light, Thief, Alien: Isolation, Anno 2070, Counter-Strike: Global Offensive, Diablo III, Dirt Rally, Dragon Age: Inquisition, The Elder Scrolls V: Skyrim, FIFA 15, FIFA 16, GRID Autosport, Grand Theft Auto V, Sleeping Dogs, Tomb Raider, The Witcher 3: Wild Hunt, T-Rex, Manhattan, Cloud Gate Factor, Sky Diver Factor, Fire Strike Factor, Face Detection, Ocean Surface Simulation, Particle Simulation, Video Composition, Bitcoin Mining, and best new price.

    Radeon RX 480 n/a
    Radeon R9 290 5.3 out of 10
    GeForce GTX 1060 n/a

    Noise and Power

    Tested at: TDP, Idle Power Consumption, Load Power Consumption, Idle Noise Level, Load Noise Level.

Radeon RX 480 8.8 out of 10
    Radeon R9 290 6.7 out of 10
    GeForce GTX 1060 9.1 out of 10

    Overall graphics card rating

    Radeon RX 480 8.0 out of 10
    Radeon R9 290 6.9 out of 10
    GeForce GTX 1060 7.5 out of 10

    Benefits

    Why is the Radeon RX 480 better?

Much higher effective memory clock speed: 8,000 MHz vs 5,000 MHz (60% higher)
Significantly more memory: 8,192 MB vs 4,096 MB (2x more)
Higher clock speed: 1,120 MHz vs 947 MHz (approximately 20% higher)
Better PassMark score: 8,095 vs 7,049 (approximately 15% better)
Better floating-point performance: 5,834 GFLOPS vs 4,800 GFLOPS (more than 20% better)
Much higher memory clock speed: 2,000 MHz vs 1,125 MHz (approximately 80% higher)
Higher turbo clock speed: 1,266 MHz vs 947 MHz (approximately 35% higher)
Faster texture rate: 182.3 GTexel/s vs 152 GTexel/s (approximately 20% faster)
Better PassMark (Direct Compute) score: 4,411 vs 3,242 (more than 35% better)
Significantly lower power consumption: 150W vs 300W (2x lower)
Slightly better face detection score: 120.46 MPixels/s vs 108.61 MPixels/s (more than 10% better)

    Why is the Radeon R9 290 better?

Higher memory bandwidth: 320 GB/s vs 256 GB/s (25% more)
Substantially more render output units (ROPs): 64 vs 32 (twice as many)
Faster pixel fill rate: 60.6 GPixel/s vs 40.5 GPixel/s (approximately 50% faster)
Much wider memory bus: 512bit vs 256bit (2x wider)
More shading units: 2,560 vs 2,304 (256 more)
More texture units: 160 vs 144 (16 more)
More compute units: 40 vs 36 (4 more)

Comparative benchmarks

Bitcoin mining

Radeon RX 480 614.7 MHash/s
Radeon R9 290 540.64 MHash/s

Face detection

Radeon RX 480 120.46 MPixels/s
Radeon R9 290 108.61 MPixels/s

Ocean surface simulation

    Radeon RX 480 2,091.36 frames/s
    Radeon R9 290 1,366.31 frames/s

    T-Rex (GFXBench 3.0)

    Radeon RX 480 3,355.88
    Radeon R9 290 3,355.58

    Manhattan test (GFXBench 3.0)

    Radeon RX 480 3,715.04
    Radeon R9 290 3,713.44

    Fire Strike Test

    Radeon RX 480 78.17
    Radeon R9 290 71.97

    Sky Diver Test

    Radeon RX 480 424. 91
    Radeon R9 290 410.95

    Crysis 3

    Radeon RX 480 52.6
    Radeon R9 290 51



GIGABYTE Radeon RX 480 1290MHz PCI-E 3.0 4096Mb 7000MHz 256 bit DVI HDMI HDCP


The average rating for the GIGABYTE Radeon RX 480 1290MHz PCI-E 3.0 4096Mb 7000MHz 256 bit DVI HDMI HDCP is 3.44, based on 9 known reviews (negative and positive) collected from 11 sources.

Reviews for the GIGABYTE Radeon RX 480 1290MHz PCI-E 3.0 4096Mb 7000MHz 256 bit DVI HDMI HDCP

Review information updated on 02.10.2022

Guest, 01/13/2018

Advantages:
Decent cooling and looks.

Disadvantages:
Only 5-10 FPS ahead of the RX 470 in any game, and that card costs far less. Better to get a 6GB GTX 1060.

Zelenov Alexander, 18.09.2017

Advantages:
Works at stock, thank God.

Disadvantages:
Doesn’t overclock. Raise the memory or core frequency by even 5% and the driver immediately freezes or bluescreens.

Comment:
An MSI RX 470 overclocked its memory from 1750 to 2050 without problems and has been mining for about six months. This RX 480 hangs the entire system as soon as you try to do anything with it, despite costing more and earning less.

Shasherka Vanya, 08/24/2017

Advantages:
Occupies 2 slots.
Smaller than my previous card (a Gigabyte R9 280 3GB).
One extra 8-pin power connector.
Price/performance.

Disadvantages:
Noisy above 45% fan speed.
Reduced memory bandwidth.

Comment:
On the whole it will do. The only thing that bothers me is the memory bandwidth: the R9 280 3GB managed 240 GB/s, while this card manages 224 GB/s. I don’t know what causes it, but the 8GB version doesn’t have this problem.
Be sure to flash the BIOS!
In principle, I am satisfied with the purchase.

Guest, 04/08/2017

Advantages:
Excellent price (got it for 12,500 rubles), high performance, metal backplate, top-notch build quality. Excellent price/performance ratio.

Disadvantages:
I had to tinker a bit. BIOS quality is not up to par.

Comment:
Gigabyte’s engineers tried hard to hit the declared 150W power draw, and that’s the source of all the problems. I lowered the frequency to 1250MHz and the card became completely stable.
Overall satisfied! But if you don’t like fiddling with settings, pass this one by.

Pyzhov Alexander, 04/05/2017

Advantages:
Undervolt it and you’ll be happy!

Comment:
I only play WoT. Out of the box it hit 75 degrees; after undervolting, no more than 65 in the small closet where the PC sits. The case is closed, with 4 fans (two intake, two exhaust) at minimum speed. My frequency/voltage states: 1290MHz/1065mV, 1235/1015, 1190/965, 1145/915, 1075/865, 910/830, 610/800. Recommended! And compare the FPS against other reviews for yourself.

Guest, 02/22/2017

Advantages:
Regarding the vendor:
- Attractive design
- A backplate, which stiffens the card and, on this model, also helps draw heat away from the VRM area
- Judging by reviews and personal experience, the Gigabyte RX 480 8GB ships with Samsung memory
- And, of course, 8GB of video memory (as practice shows, 4GB is no longer enough for high/ultra settings in MOST games, and some games (Quantum Break, Titanfall 2, Rise of the Tomb Raider) at ultra settings in Full HD consume over 6GB of video memory)
- 8-pin power connector
- Low price compared with other vendors
- RGB backlighting
Regarding the RX 480 chip itself:
- The 4th revision of the GCN architecture (on average 5-6% more efficient per clock)
- The 14nm process (moving to 14nm let AMD significantly reduce the thermal envelope, raise clock speeds, and shrink the die, which helped chip costs)
- 150W TDP (for comparison, the R9 380 had 180W, and the R9 390, the RX 480’s performance analogue, 280W)
- Performance in games at Full HD and QHD on the level of the R9 390/290 (at UHD, because of the small number of ROPs, the RX 480 starts to lose ground)
- Hardware support for DX12 and Vulkan (and any low-level API that can take advantage of asynchronous compute). Even on 2nd-generation GCN, DX12 gives a 15-25% performance boost, while on Pascal there is none (GTX 1070 example): https://youtu.be/zTn2c-THnPw
- Recommended price of $250 (in practice you can’t find an RX 480 at this price right now because demand exceeds supply)
- A 256-bit bus, which together with GDDR5 at an effective 8GHz gives 256 GB/s of memory bandwidth (33% more than its direct competitor, the GTX 1060)
- 144 texture units (TMUs) (80% more than the GTX 1060)
- 8GB of video memory (33% more than the 6GB GTX 1060)
- CrossFire support (in many games, as AMD claimed at the launch presentation, it really does reach parity with the GTX 1080, but there are also games where CrossFire works poorly)

Disadvantages:
Regarding the vendor (manufacturer):
- A mediocre cooler (I got an old-revision card: on the old BIOS, games ran at 80-82 degrees with fan speeds of 1400-1600 RPM; on the new F8 BIOS the fans are tuned so games stay at 70-72 degrees, BUT at the cost of noise (fan speeds exceed 2000 RPM, and in FurMark can spin up to 3000))
- A poorly implemented fan stop (because of the inefficient cooler, the chip easily exceeds 52 degrees even in 2D mode, which is the fan start threshold, so in 2D mode the fans constantly cycle on and off with an annoying crackle). Solved by setting a fan curve in Afterburner
- The BIOS hard-codes a 127W TDP; once power draw exceeds that value, the card starts throttling
- Again judging by reviews, Gigabyte’s own software is buggy, but I personally use WattTool and MSI Afterburner
Regarding the RX 480 chip itself:
- 14nm FinFET (the FinFET process reduced chip costs but cut the frequency headroom of the new parts)
- 32 ROPs (its direct competitor, the GTX 1060, has 50% more, and the R9 390 has 100% more). Because of the small number of ROPs, we get relatively low performance above QHD
- Mediocre overclocking and clearly NON-LINEAR TDP growth. Personally I managed to push my RX 480 to 1370MHz at 1.17V
- Low performance (relative to its competitor, the GTX 1060) in older games
- Looser timings when overclocking the video memory (because of this, paradoxically, overclocking the memory can produce even lower results in games and benchmarks than stock). Solved by flashing the video card’s memory timings
- No PhysX support
- No way to raise the voltage on the memory chips in software (the setting available in Wattman and labelled memory voltage is actually the voltage supplied to the memory CONTROLLER)

Comment:
Overall, I don’t regret taking the RX 480 instead of the GTX 1060. At the GTX 1060’s launch it beat the RX 480 in most games, but six months later the reds caught up on drivers: in DX11 the two cards now show equal performance, and in DX12 the RX 480 leads: https://www.youtube.com/watch?v=Z7eHNE8SgBg (that channel also has more recent May tests across 30 games that again show parity, BUT I don’t recommend them, since the RX 480 there has overclocked memory WITHOUT flashed timings). While Nvidia on Pascal, let alone older architectures, merely emulates asynchronous compute support, GCN from the second generation onward gets a serious performance boost from it. The days of "buggy AMD drivers" are long gone; on the RX 480 I have never had a single driver problem. I’m also pleased with AMD’s new drivers and software, namely Wattman: you no longer need to plot fan speed against temperature, you simply specify the target chip temperature and the maximum allowed fan speed. Special thanks to AMD for Radeon ReLive, since Nvidia fanboys had grown tiresome boasting about ShadowPlay, which most of them never use...
As I said, simply overclocking the video memory loosens its timings, which shows up as a performance downgrade. I flashed the 1625MHz strap timings onto my Samsung memory. With those timings the memory runs ABSOLUTELY stable at 2100MHz (verified by almost 3 months of mining with 0 memory errors in HWiNFO monitoring over that time). In 3DMark Fire Strike, at a GPU clock of 1200MHz (1V) and 2100MHz memory with the flashed BIOS, the card scores 13,200 points; overclock the GPU to 1370MHz (1.17V) and it scores 14,600, which is already GTX 1070 territory. Unfortunately I can’t compare against the stock RX 480, since those results weren’t saved.
If you have questions about Polaris 10/11/12/20/21, contact me on VK: https://vk.com/id370589739

Mishina Julia, 06.02.2017

Advantages:
1. Powerful (for the mid-price category). Comparable to a GTX 970.
2. Low power consumption.
3. Backlighting.

Disadvantages:
1. Noisy.
2. At idle, large jumps in core frequency, from 300 to roughly 900-1000(!!) MHz. In GPU-Z it looks like a cardiogram. Not critical, but unpleasant; my old card didn’t do this. Minus one point for that.
2.1. There were big problems with the fans. Fixed by flashing the BIOS.

Comment:
In silent (idle) mode the fans are disabled. They switch on at a certain temperature or a certain core frequency. Given that the core frequency jumps constantly, the fans switch on, then off a second later, at intervals of about a minute. It’s good that the case is fully closed, so these sounds don’t grate on the ear. In silent mode, with the case closed, the core temperature is 48 degrees (with normal airflow in the case): not great, in my view, though acceptable. Using the XTREME GAMING ENGINE utility I set automatic mode. In automatic mode, at idle, the fans hold a constant 937-960 RPM, which produces a steady rumble, but the core temperature is 36-39 degrees.
In games, fan speed automatically rises to (the maximum I saw) 2200 RPM. With the core 95% loaded (on average) and the card’s memory 99% full (on average), the core temperature never exceeds 80 degrees, i.e. the cooling fully copes with the load. Before this I had a far-from-weak card, a GTX 670, also from Gigabyte. With the RX 480 I finished CoD: Advanced Warfare at maximum settings with FXAA anti-aliasing, without V-Sync, at 1920×1080: it never drops below 43 FPS, and averages 80-100.
Mirror’s Edge Catalyst on hyper settings with V-Sync(!) and anti-aliasing at 1920×1080 flies; an occasional slight freeze, otherwise everything is smooth.
Dirt 3 on ultra with V-Sync and anti-aliasing flies without problems.
Kholat with all settings maxed at 1920×1080 drops to 37-40 FPS about once every 20 minutes; otherwise everything is fine.
CoD: Infinite Warfare with all settings maxed at 1920×1080 shows regular but non-critical dips to 30-35 FPS, averaging 70 FPS.
P.S. The card runs alongside an FX-8350 and 16GB of Kingston FuryX PC-1866. Game performance was monitored with GPU-Z.


Fast Roman, 11/10/2016

Advantages:
Low price, backplate, logo on the side that glows in different colors.

Disadvantages:
Noisy cooling, freezes in WoT.

Comment:
Everything would be fine, but it hangs in WoT. It hangs in different ways: once an hour, once every half hour, every 5 minutes. I never determined the cause, but it’s definitely not heat. On the WG forum I found a six-month-old thread describing the problem, which advises cleaning out the remnants of old drivers and blames the card’s factory overclock. I found no problems with other games, but nothing helped with WoT except replacing the card with a green GTX 1060. Oh well; my R9 280 had run for two years without bothering me, and the devil tempted me to trust the reds again. Apparently it wasn’t meant to be! Whether Gigabyte or AMD is to blame I have no idea, and it no longer matters to me. I’ve taken away a simple rule: an also-ran manufacturer counts on an also-ran customer. Buy the best...

Detailed specifications

General specifications
Video card type: office/gaming
GPU: AMD Radeon RX 480
Manufacturer code: GV-RX480G1 GAMING-4GD
Interface: PCI-E 16x 3.0
GPU codename: Ellesmere XT
Process: 14 nm
Monitors supported: 5
Maximum resolution: 7680×4320

Specifications
GPU frequency: 1290 MHz
Video memory size: 4096 MB
Video memory type: GDDR5
Video memory frequency: 7000 MHz
Video memory bus width: 256 bit
SLI/CrossFire support: yes
CrossFire X support: yes

Connection
Connectors: DVI-D, HDMI, DisplayPort x3, HDCP support
HDMI version: 2.0b
DisplayPort version: 1.4

Compute units
Universal processors: 2304
Shader version: 5.0
Texture units: 144
ROPs: 32
Maximum anisotropic filtering: 16x
Supported standards: DirectX 12, OpenGL 4.5

Additional features
Vulkan support: yes
OpenCL version: 2.0
AMD APP (ATI Stream) support: yes
Additional power required: yes, 8-pin
Recommended power supply: 500W
TDP: 150W
Cooling design: custom (WINDFORCE 2X cooling system)
Number of fans: 2
Dimensions: 232×116 mm
Slots occupied: 2



Will your PC handle Forza Horizon 4?

Updated: 10/01/2022

Minimum system requirements

    • Operating system: Windows 10 (x64)
    • Processor: Intel Core i3-4170 (3. 7 GHz) or Core i5-750 (2.67 GHz) | analogue from AMD
    • RAM: 8 GB
    • Video card: Nvidia GeForce GTX 650Ti or GT 740 | AMD Radeon R7 250X with 2GB
    • DirectX version: 12
    • Also: Keyboard, mouse, gamepad
    Recommended system requirements

    The recommended system requirements for Forza Horizon 4 show which computer can run the game at maximum graphics settings and still play without stuttering and at a high number of frames per second (FPS):

    • Operating system: Windows 10 (x64)
    • Processor: Intel Core i7-3820 (3.6 GHz) | analogue from AMD
    • RAM: 12 GB
• Video card: Nvidia GeForce GTX 970 or GTX 1060 | AMD Radeon R9 290X or AMD RX 470 with 3GB memory

    • DirectX version: 12
    • Also: Keyboard, mouse, gamepad




The 11th title in the Forza franchise and the 4th in the Forza Horizon series is an open-world racing game with over 450 cars available. You can race, perform stunts, and explore all of Britain. Developed by Playground Games and published by Microsoft Studios, Forza Horizon 4 hit the market on October 2, 2018. Its predecessor, Forza Horizon 3, received great reviews from critics and gamers alike. The newest feature implemented in this installment is the changing of seasons during the game, which increases both replay value and difficulty. The open world and the mass of AI-controlled vehicles in Forza Horizon 4 mean you need at least mid-range hardware to meet the minimum requirements.

Specifically, Microsoft asks for an old Core i5-750 or the slightly newer Core i3-4170 as the minimum CPU. Unfortunately for AMD, no specific model is listed; AMD’s approximate equivalent is the Phenom II X2 550. Alongside these mid-range processors, your PC also needs a 2GB VRAM graphics card such as the GeForce GTX 650 Ti or Radeon R7 250X. The GeForce GTX 650 Ti is currently the 40th most popular GPU and ranks 50th in Nvidia’s performance standings; a PC with this card could cover the minimum requirements of roughly 85% of modern games.

You would be missing out playing Forza Horizon 4 at low-to-medium settings, which is understandable given the scale and beauty of this game. Gaming and racing enthusiasts will want a PC with a high-quality graphics card and a powerful, modern processor. The recommended graphics requirements, GeForce GTX 970 / GTX 1060 or Radeon R9 290X / RX 470, are among the highest in the industry: roughly the same as the popular Call of Duty: Black Ops 4 and, in fact, no different from Forza Horizon 3’s. FH4 and FH5 share essentially the same system requirements because both are built on the same ForzaTech engine.



Someone said the system requirements would be lower than part 3’s? I don’t see any difference.

mysterio61972v: The requirements are lower.

SanSanZ: Well, tell me where?

mysterio61972v: Nobody promised lower requirements. They said the game would run better on similar hardware.

mysterio61972v wrote: And yes, how much space will it take on the hard drive? 100+ GB again?

Hom1e: Yes, and not everyone has that much disk space for games.

mysterio61972v: There it was the RX 480, here it’s the RX 470.

SanSanZ: Wow, look how much the requirements changed.

mysterio61972v: And you don’t appreciate the effort.

SanSanZ: What effort? What are you talking about?

Ahaha, a lazy copy-paste from Horizon 3. And they said they would be lower.

A reminder that the full edition lists Graphics: NVidia GTX 1060 3GB. Interesting that Nvidia cards are so cool that even 3GB of video memory doesn’t stop one from meeting a 4GB requirement.

If only the demo were out already.

GIBAY: Sure they’ll release one) buy it first, then you can try it.

plutikov55: Demos of all the previous installments appeared a month before release, ever since the Xbox 360 days.

I’ll play it on the Xbox, especially since Game Pass lets you.

Shtainman wrote: Interesting that Nvidia cards are so cool that even 3GB of video memory doesn’t stop one from meeting a 4GB requirement.

It’s high time to stop judging a video card purely by its amount of video memory. And yes, even the 3GB 1060 beats most 4GB cards (the 1050 Ti, for example) or at least matches them (as with the 970). So the system requirements were not written by amateurs.

How to start cryptocurrency mining for beginners, from scratch (2020): step-by-step mining

Greetings, friends. In this article, the first about mining on this site (and constantly updated), we will talk about how to start mining on a video card (GPU) and where to begin. If you’d rather not read, a video on how to start mining for beginners will appear at the end of the article soon. But first, I advise you to read at least the first two sections below.

Getting started with mining

As the title says, this is a step-by-step guide to mining on a video card, written for beginners. We will set everything up, connect to a pool, install the software, overclock the video card, and so on.

What you need to start mining from scratch

1. The most important thing for GPU mining is, naturally, a video card: at least one, reasonably modern (no more than 3-4 years old), and not a budget model. (Mining on ASICs, CPUs, hard drives, and in the cloud is also possible.) If you still need to pick a card, see the article Video cards for mining, which collects the best cards, tables, and rankings.
2. A computer (system unit) or rig with an operating system installed: specifically the 64-bit version of Windows (x64).
3. Decide which currency to mine. It depends on your video card; in this example we will walk through mining ether (ETH). On Nvidia cards, at the moment (12/28/2017), ZCASH is the better choice; more on that below.
4. Since mining happens online, you need an Internet connection. High speed isn’t required, but good ping is desirable; more on that below.
5. Choose a pool where we will mine Ethereum (not to be confused with "etherium"). Then choose and configure the miner program.
6. Choose an exchange or wallet where your mined ether coins will accumulate, plus a service to convert your earnings into rubles and withdraw them to a card.

Getting started

So let’s begin. You have a suitable video card, everything listed in points 1, 2, and 4 above, and the drivers installed. If you don’t have a video card yet, read the article on which video card to choose and buy for mining. If you have no hardware at all, see the link What you need for mining.

Decide which cryptocurrency to mine. Today (01/05/2020), and this will hold for a long while, AMD Radeon cards profitably mine ether, while Nvidia cards do better on Zcash. In principle, the newer Nvidia 1000 series (GTX 1060, 1070) also mines ether well, but our example will start with ETH; examples for other cryptocurrencies will follow. If you want to mine bitcoins (which are not mined on video cards), read the article How to mine bitcoins.

Also, if you don’t want to set up hardware, or you’re struggling to find video cards and ASICs, there is an alternative: cloud mining, which can also earn good money; see Real cloud mining for details.

In our example we will use Windows 7 x64 (ether mining only works on 64-bit operating systems) and two AMD Radeon Sapphire RX 470 4GB video cards. The processor doesn’t really matter, nor does the amount of RAM, though 4GB or more is recommended. This is the simplest mining kit for beginners. UPDATE: cards with 3GB or less are no longer suitable for ether mining; if that’s what you have, you will need to look for an alternative. There are other algorithms and currencies to mine; the list is HERE.
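The 3GB cutoff comes from ethash, Ethereum’s mining algorithm, which requires a data set (the DAG) that must fit entirely in GPU memory and grows over time. A minimal Python sketch using the published ethash growth constants; real sizes differ slightly because the algorithm rounds the table to a prime number of entries:

    # Approximate ethash DAG size per epoch (one epoch = 30,000 blocks).
    # Constants from the ethash spec: 2^30 bytes initially, +2^23 bytes per epoch.
    DATASET_BYTES_INIT = 2**30
    DATASET_BYTES_GROWTH = 2**23
    EPOCH_LENGTH = 30_000

    def approx_dag_gib(block_number: int) -> float:
        epoch = block_number // EPOCH_LENGTH
        return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

    # By early 2020 (around block 9,200,000) the DAG had passed 3 GiB,
    # which is why 3GB cards dropped out of ether mining.
    print(round(approx_dag_gib(9_200_000), 2))  # ~3.39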

You don’t need fast Internet, but you do want good ping. A wired connection is of course better than Wi-Fi, but Wi-Fi will also do (more on this later); you can also read the article on what kind of Internet mining needs.

How to start mining

Choose a pool where we will mine ether:

At the time of writing, I can recommend several pools.

1. ethermine.org
Pros:
• Small, honest 1% commission
• Good ping
Cons:
• The site is in English
• The site is a bit confusing for beginners (but we will describe the setup below using this pool as the example)

2. dwarfpool.com/eth
Pros:
• For those in Russia, there is a server in Russia, which is good for ping
• A small 1% commission that is not secretly inflated
• Good pool capacity
• Easy miner setup
Cons:
• The site is a bit confusing for beginners

    3. www2.coinmine.pl/eth/
    Pros:

    • + Honest, small 1% commission
    • + Good ping
    • + Good hashpower
    • + Good protection

    Cons:

    • — More difficult setup
    • — The site has its difficulties as well

    4. eth.nanopool.org/
    Pros:

    • + Simple site
    • + Easy miner setup
    • + High hashpower

    Cons:

    • — There are rumors among miners, and many claim, that the real commission is higher than the declared 1%

    Better still, choose the pool with the lowest ping from your location; read the article — Find out your ping to the pool server.
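    A quick way to measure this yourself, as a sketch (eu1.ethermine.org is simply the server from our example below; substitute any pool server you are comparing): open the Windows command prompt (Win+R, then cmd) and run:

    ping eu1.ethermine.org

    In the replies, look at the time= values in milliseconds; the pool whose server consistently shows the lowest numbers is the best choice for you.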

    For this guide I have chosen option 1, ethermine.org; open the site in your browser.

    So the pool has been chosen; now you need to choose a wallet or an exchange in advance. What is an Ethereum wallet? It is a wallet located on your computer: coins are received into it, and from it they can be withdrawn to a card through various services. Withdrawing straight to a card is not profitable at the moment; it is more profitable to convert to bitcoins first and then to rubles on a card, which can be done on an exchange. A wallet, however, is much safer than an exchange; I will soon write an article on how to create a wallet.
    But for beginners it is better to work with exchanges:

    1. Binance is a more professional exchange and a good alternative to Exmo!
    2. Exmo is a great exchange, especially for beginners; it loves miners.
    3. Yobit is a good Russian exchange.

    There are many exchanges, but let's start with a simple one; as you gain experience you will move on to a more interesting exchange. I also advise reading the list of good exchanges for mining.

    And so we have chosen the Binance exchange; follow the link.

    You will land on the registration page. If you already have an account, you can skip the next step; if not, click Register.

    Registration on the Binance exchange

    Next:

    Register: enter your login and your email address.

    Don't forget to tick the box confirming that you are over 18. The password must be at least 8 characters long, containing letters and numbers, with at least one capital letter. Click Create account, solve the captcha, and confirm your email address by entering the code that arrives in your mail.

    It should look like this

    Next, choose a miner program to start setting up the video cards for mining. The most popular and functional at the time of writing is Claymore's Dual Ethereum miner; version 7.4 was covered originally, but a new version 15.0 has appeared, so download that one. It works, by the way, with both AMD and Nvidia video cards. Download here (15.0).

    After downloading, unpack the archive with the miner to a convenient place; the folder you unpack into should preferably have a name in Latin characters (English letters). This is roughly what the contents should look like.
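    The exact archive contents differ between miner versions, so the list below is only a rough sketch of what to expect (the readme name is illustrative):

    EthDcrMiner64.exe — the Claymore miner executable itself
    start.bat — the launch file we will edit in the next step
    Readme — documentation describing all of the miner's command-line options

    If you can see the executable and a .bat launch file, you are ready to continue.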

    Now let's start setting up the miner and mining.
    Find the Start.bat file, right-click it, and select Edit. (If a Run prompt appears instead, click it.)

    Here is the code of our file that we will run
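    The stock contents vary from version to version, so here is only a minimal sketch of the Claymore launch-line format (everything in angle brackets is a placeholder, and -epsw x is the conventional "any password" value that most pools accept):

    EthDcrMiner64.exe -epool <pool address>:<port> -ewal <wallet address> -eworker <worker name> -epsw x
    pause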

    We need to edit it for your pool and your wallet (exchange).
    After -epool, specify the address of the pool; since we chose ethermine.org, ours should be:

    -epool eu1.ethermine.org:4444

    (This is the mining address and port; for other pools you can find it on the pool's website.)

    Do not close the text editor

    Next, go to the Binance exchange where we registered earlier. Log in with your credentials and click Wallets (gray arrow),

    then in the search we type eth (circled in red).

    Next, for convenience, unhide the balance (red arrow), then press Deposit (green arrow) and you will see the following:

    You will see a string of characters (circled in red): this is your wallet address. Click «Copy address» (red arrow).
    Now return to our text editor and, after -ewal, paste your own address in place of the example one.
    Something like this:

    -ewal 0x2a8dba001857ac96336d65efaa6b3059789ef070

    Next, we see the following

    -eworker flex2

    Where “flex2” is the name of your worker; you can call it whatever you like, for convenience of monitoring. At the end we add pause, which is needed so that, if errors occur, the miner window does not close immediately. Everything else is left unchanged.
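    Putting it all together, our finished start.bat for this example looks roughly like this (a sketch: substitute your own wallet address from Binance for the example address above; -epsw x, the conventional pool password value, is our addition):

    EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x2a8dba001857ac96336d65efaa6b3059789ef070 -eworker flex2 -epsw x
    pause

    Save the file, close the editor, and double-click start.bat to launch the miner.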

    Now go to the ethermine.org pool site; a little to the right there is a Miner Address field (Statistics)

    and enter your address there (the one we copied from Binance and pasted after -ewal in start.bat), press Enter, and you will see your worker's statistics. Note that the statistics appear after 5 to 30 minutes, depending on your mining speed.

    Unpaid balance (red arrow) — your balance in Ether coins that has not yet been paid out to your wallet on the exchange (one coin costs from 100 to 1000 dollars, depending on the exchange rate).

    Estimated Earnings (blue arrow) — your approximate income; to the right are settings for the period shown. It is accurate only after 24 hours of continuous mining.

    Reported (pink arrow) — the current speed reported by your miner.

    Average (yellow arrow) — your average speed over the last 6 hours. It is accurate after 6 hours of continuous mining.

    Current (green arrow) — your effective hashrate over the last 60 minutes; accurate after about 2 hours of continuous mining.

    Next, scroll down the page and see

    Name — remember, in start.bat we set -eworker flex2; this is the list of workers (running miners). In my case one computer is running. Last Seen shows when your last share was submitted; if it was more than 30 minutes ago, the miner may not be running or may have shut down, and you should check it. Usually, if there is a problem with a miner (worker), the background where its speed is shown is not blue but red, as in the screenshot below.

    That's it, now you are mining Ether. The coins will first appear in the ethermine.org statistics, and then, after a while, in your Binance exchange account; how long that takes depends on the speed of your video cards. If you have only one card, you sometimes have to wait a long time. Note: ethermine's default auto-payout threshold is 1 ETH, which is a lot, but it can be lowered to 0.05 so that coins reach Binance faster; we will write an article about this soon!

    I hope our step-by-step mining instructions for beginners are clear; if you have any questions, write to us. Now you can start earning from mining.

    How to withdraw money from the Binance exchange to your card.

    Mined? Don’t know how to withdraw? Read How to withdraw on binance.

    If you do not have equipment, then I advise you to read — What you need for mining.

    Also, if you do not want to suffer with setting up and buying equipment, there is a cloud mining solution — Reliable cloud mining.

    If you want to mine bitcoin, then read — How to mine bitcoins.

    A video version of this article is coming soon: how to set up mining.
    Mining in Russia is developing, so develop along with us in 2020. Good luck to everyone in mining; now you are no longer a mining beginner!

    If your card has less than 3 gigabytes of memory, you can choose a different algorithm; the list is here.

    Write questions in the comments.

    You can also ask your questions on the forum, and you will definitely get an answer — Forum.
