1060 vs 480: AMD RX 480 vs Nvidia GTX 1060-6GB


So I’ve been on a bit of a spreadsheet kick lately, and with all the back and forth about GTX 1060 vs. RX 480, I decided I’d throw my own data into the circus.

First off, shout out to u/kaol who gave me the idea for this. This whole thing started with his own scatterplot, which you can find here.

 

So, what is this? I’m going to describe how I went about gathering data, then I’ll show the charts, and then I’ll do my best to summarize them. For those of you who are impatient and trust my methods, feel free to skip ahead.

The first thing I did was gather benchmarks from a number of different sources. To be included, a review had to meet several criteria:

  • a reasonably detailed section on its testing methodology,
  • a description of the test system,
  • a reasonable description of the settings used to test each game, and
  • a list of the drivers tested with, which had to be up-to-date.

After gathering the various benchmarks, I went through them and aggregated them by game. Any individual benchmark which wasn’t within a reasonable range of other benchmarks done on the same game was removed. Additionally, any games that were outliers themselves were removed, although this only resulted in the exclusion of Project CARS, because apparently the developers decided to give a giant middle finger to AMD cards.

Instead of using the raw FPS numbers from each game (which vary too wildly between review sites), I used the same method I used to generate my normalized benchmark data and calculated the performance differential (in %) between the GTX 1060 and RX 480, where a positive differential favors the GTX 1060 and a negative one favors the RX 480. I also gathered each game’s release date and converted it to a decimal number so I could plot the performance differential as a function of when the game was released, to shed light on whether newer games favor the RX 480.
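For the curious, the two transformations described above can be sketched in a few lines of Python. The FPS numbers below are hypothetical stand-ins; the real data lives in the linked spreadsheet.

```python
import datetime as dt

def differential(fps_1060, fps_480):
    """Signed % differential: positive favors the GTX 1060, negative the RX 480."""
    return (fps_1060 / fps_480 - 1.0) * 100.0

def year_decimal(d):
    """Convert a release date to a decimal year for the scatterplot's x-axis."""
    start = dt.date(d.year, 1, 1)
    days_in_year = (dt.date(d.year + 1, 1, 1) - start).days
    return d.year + (d - start).days / days_in_year

# e.g. a made-up Doom (Vulkan) run: 1060 at 90 fps, 480 at 110 fps
print(round(differential(90.0, 110.0), 1))          # negative -> favors the RX 480
print(round(year_decimal(dt.date(2016, 7, 1)), 3))  # mid-2016 as a decimal year
```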

Before getting to my conclusions, here are the charts:

  • RX 480 vs. GTX 1060 — Performance Differential by Release Date

  • RX 480 vs. GTX 1060 — Performance Differential by Game

  • Source Spreadsheet

 

The charts themselves should be fairly self-explanatory. The first plots the performance differential against a game’s release date, and the second just shows the differential by game, sorted from the games most favoring the RX 480 down to the games most favoring the GTX 1060.

And now, some analyses:

  • Take the Doom (Vulkan) results with a grain of salt. Yes, it’s mighty impressive, but it is also currently the only Vulkan benchmark being tested, and the only other Vulkan game I even know of is Talos Principle, in which the Vulkan API hurts performance on ALL cards. I’m just saying, the 1060 takes a 10 fps hit on Vulkan, which shouldn’t happen, so we need to wait for more Vulkan games before coming to any conclusions.

  • The RX 480 is demonstrably better at low-level APIs. I realize I just said to hold back on conclusions based on the Doom benchmark, but in addition to Doom there are 5 DX12 games (4 of which are frequently benchmarked), and out of those 6 low-level-API games, the 480 is measurably faster in 5. Again, it’s only 6 games, but this is shaping up to be a trend, and it could mean big things for AMD if the 480 continues to dominate DX12 titles, which are expected to become more and more frequent.

  • That said, across these benchmarks, the GTX 1060 is, on average, 9.2% faster than the RX 480, and that’s including the games where the RX 480 is faster. Looking at only DX11 games, the 1060 is 12.2% faster. That’s a pretty big performance delta for a card that’s only $50 more expensive.

  • And lastly, despite my super nifty release date scatterplot (which, really, u/kaol should take all the credit for), there is currently no meaningful correlation between the 1060/480 performance delta and a game’s release date. If you’re a stats geek, the correlation coefficient (R) of this dataset is -0.3788, a weak negative trend, and the coefficient of determination (R²) is 0.1435. It’s possible that, given more DX12 and Vulkan releases, we might see this trend solidify, but for now, any attempt to form a conclusion from this data would be a reach, at best.
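If you want to recompute R and R² yourself from the spreadsheet, the Pearson formula is simple enough to do by hand. The pairs below are made up for illustration; the real release dates and differentials are in the source spreadsheet, so the number this toy run prints will not match the -0.3788 quoted above.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

release = [2013.2, 2014.5, 2015.8, 2016.2, 2016.5]  # decimal release dates (toy)
delta   = [14.0, 11.0, 9.0, -3.0, -20.0]            # % differential, + = 1060 (toy)
r = pearson_r(release, delta)
print(round(r, 4), round(r * r, 4))  # R and R² for the toy pairs
```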

As always, thanks for reading, let me know if you have any questions or suggestions. Below, I’ll post links to the reviews I used for these analyses, let me know if you have an issue with any of the sites (and if you do, specifics and links to back up your claims are appreciated) and also let me know if there are other reputable sites that you’d like me to consider including in the future.

 

Review Sources (all reviews were that site’s GTX 1060 Review):

  • BabelTechReviews

  • Eurogamer

  • Guru3D

  • HardOCP

  • Hardware Canucks

  • Hexus

  • TechPowerup

  • Tom’s Hardware

EDIT #1: I realized that I never made a note about the DX12 and Vulkan games. If a game was available in Vulkan or DX12, I included ONLY the results where the game was run in either Vulkan or DX12. This means that no DX11 results were tabulated for Rise of the Tomb Raider, Hitman, etc… and no OpenGL results were included for Doom. I apologize that this wasn’t made clear.

EDIT #2 (Clarifications and Corrections): There are a number of questions that users keep asking, so I’m going to do my best to make several clarifications and error fixes here, before addressing them individually.

  • All benchmarks were done with the stock/reference versions of both cards. That means the Founders Edition of the GTX 1060, and the reference 8 GB board sent to reviewers for the RX 480.

  • No special treatment was given to the RX 480 to prevent throttling. A lot of people are saying that the RX 480’s cooling is so bad that it throttles itself and hurts its scores by as much as 20%. Those are lofty claims, and they mean one of two things must be true: either the claims are misguided and we’ll see only modest gains from the AIB partners, or AMD made a truly COLOSSAL error when sending out these reference boards.

  • The price difference is $10 not $50. This is my mistake. In my head I was comparing the $200 price of the RX 480 to the $250 price of the GTX 1060 when, in fact, all reviews were done with the $240 8 GB RX 480, which is 3-4% faster than the 4 GB model. I apologize for the error.

  • All DX12 or Vulkan games only used results on the DX12 or Vulkan version of that game. That means that I included no results where the reviewer didn’t specify which API they used on the game and I used no DX11 results for Rise of the Tomb Raider, Hitman (2016), Ashes of the Singularity, or Total War: Warhammer, and I used no OpenGL results for Doom (2016).

  • All tests were done at 1080p. This is the resolution that these cards were aimed at, so it seemed most appropriate to use that resolution for this comparison.

Lastly (this is for you, u/AdoredTV), some users have implied that I have manipulated the data by excluding specific results, and they’re right. I have done my very best to be completely transparent and unbiased, but I’m still new at this, and perhaps I didn’t make things as clear as they should have been, so here is EXACTLY what I have done:

  • Any individual result that was out of line with several other results for the same game was thrown out. This means that the GamersNexus result on Ashes of the Singularity was thrown out because it favored the GTX 1060 by 3.2% when the 5 other sites favored the RX 480 by 3.3%. Additionally, I threw out the BabelTechReviews result on Hitman (2016) because it favored the GTX 1060 by 7.9% when the six other reviews favored the RX 480 by 11.9%.

  • I used standard outlier methodology, outlined here, to determine which results were “outliers” and should be excluded entirely. This meant throwing out any results that favored the RX 480 by 25.0% or more and throwing out any results that favored the GTX 1060 by 41.4% or more. Ultimately, this only resulted in throwing out two results, both for Project CARS, both of which favored the GTX 1060 by more than 50%.

  • The spreadsheet has been appropriately updated, with a new tab added where the removed games are highlighted in red and where the math for the outliers has been done at the top of the sheet.
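The standard outlier rule mentioned above is the Tukey 1.5×IQR fence. Here is a minimal sketch of it on toy differentials; the actual cutoffs of -25.0% and +41.4% come from the full dataset, whose math is on the spreadsheet’s new tab.

```python
import statistics

def iqr_fences(values, k=1.5):
    """Tukey fences: anything outside [Q1 - k*IQR, Q3 + k*IQR] is an outlier."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # exclusive method (default)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Toy per-game differentials, % (+ = favors the GTX 1060)
deltas = [-25, -8, -3, 2, 6, 9, 12, 14, 16, 20, 55]
lo, hi = iqr_fences(deltas)
outliers = [d for d in deltas if d < lo or d > hi]
print(lo, hi, outliers)  # only the extreme positive result gets excluded
```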

GTX 1060 vs. RX 480: Which Graphics Card is Better? (After Driver Update)

For those looking for a mid-range graphics card, this generation’s two leading choices are NVIDIA’s GTX 1060 and AMD’s RX 480. Both cards come with two models of varying video RAM capacities and price points:

  • NVIDIA: GTX 1060 3 GB (~$200-210) and GTX 1060 6 GB (~$250-265)
  • AMD: RX 480 4 GB (~$200-230) and RX 480 8 GB (~$240-250)

Clearly, these cards are direct competitors in performance and price. Let’s take a look at each head-to-head matchup to figure out which is the better purchase.

GTX 1060 3 GB vs RX 480 4 GB

In a case where your graphics card budget is closer to $200, the GTX 1060 3 GB and RX 480 4 GB come into play. The prices for these cards hover around that $200 range, going higher or lower depending on the quality of the cooler and extra features such as fancy lighting.

The 1060 3 GB was slightly faster than the RX 480 4 GB at launch, but after all the recent driver updates from AMD, the gap has narrowed. According to an extensive test at HardwareUnboxed, the RX 480 4 GB is now equal to — or marginally faster than — the GTX 1060 3 GB on average.

The graph compares the GTX 1060 3 GB vs the RX 480 4 GB. If the percentage is positive, it means that the GTX 1060 3 GB is faster (for instance, Quantum Break performance is 19% better on the GTX 1060 3 GB). If the percentage is negative, it means that the GTX 1060 3 GB is slower (for instance, DOOM performance is 25% worse on the GTX 1060 3 GB).

Source: HardwareUnboxed

These two cards are extremely close in terms of performance, so the choice here will be much more dependent on the brand’s driver ecosystem and game preferences of the user. It is also important to note that the AMD card performs much better in vendor-agnostic DirectX 12 and Vulkan titles (for instance Deus Ex: Mankind Divided, The Division in DX12 mode), which gives it an edge in upcoming games that will be built on the newer DirectX 12 and Vulkan APIs. Nvidia on the other hand performs formidably in Nvidia-optimized titles (Watch Dogs 2, Dishonored 2) and Overwatch, one of the most popular games to date.

Deals are prominent on both of these cards, with the 1060 3 GB on average a tiny bit more expensive than its AMD counterpart. The GTX 1060 3 GB is a fairly power-efficient card, drawing about 120 W under a typical gaming load, so any decent two-fan cooler with heatpipes should be sufficient, and even single-fan 1060s tend to stay around 70-75 °C under full load. The RX 480 4 GB is less power-efficient, drawing around 165 W under a typical gaming load, similar to a GTX 1080. Avoiding blower-style and cheap coolers without heatpipes is recommended.

To sum it up: the GTX 1060 3 GB and RX 480 4 GB are cards that you should look at when wanting to spend around $200 on your GPU. On average, the RX 480 4 GB is slightly faster than the GTX 1060 3 GB and has a higher VRAM buffer for the coming years. On the other hand, the 480 requires a beefier cooler to keep its temperatures in check, consumes more power, and generates more heat.

Winner: Tie

 

GTX 1060 6 GB vs RX 480 8 GB

The prices for the GTX 1060 6 GB and RX 480 8 GB hover around the $250 range, with higher quality cards reaching as high as $280.

Higher VRAM doesn’t necessarily translate into higher performance. Though both the GTX 1060 3 GB and RX 480 4 GB perform slightly worse than their higher-VRAM counterparts, it’s mostly not because of the amount of VRAM. Having more VRAM does alleviate video memory bottlenecks when gaming at resolutions higher than 1080p, such as 1440p. Modded games with high-resolution texture packs may require upwards of 5 GB of VRAM.

There is no certain way to know how much video memory a game actually uses, as all currently available software methods show the amount of requested memory — not what is actually used. So, the application may in fact request 6 GB of VRAM, but in reality use only around 3.5 GB.

In the long run, the main performance difference between the 4 GB and 8 GB RX 480 comes down to clock speeds: both cards use the same graphics processing unit, and the GPU on the 4 GB card simply runs at a lower core clock. The difference between the two will mostly show up in VRAM-intensive titles, such as GTA V, or at higher resolutions (1440p, 4K). Otherwise the cards should perform identically at the same clock speeds.
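As a rough back-of-the-envelope check, the clock-speed argument above can be sketched as a linear scaling estimate. This is a simplification (it ignores memory bandwidth and VRAM limits), and the 1180 MHz figure used for the 4 GB board below is a hypothetical stand-in, not a published spec.

```python
def scaled_fps(fps_at_ref, ref_clock_mhz, target_clock_mhz):
    """Naive estimate: assume FPS scales linearly with core clock."""
    return fps_at_ref * target_clock_mhz / ref_clock_mhz

# e.g. 60 fps on the 8 GB board (1266 MHz boost) -> expected on a 4 GB board
# if its boost clock were, say, 1180 MHz (hypothetical figure):
print(round(scaled_fps(60.0, 1266, 1180), 1))
```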

On the other hand, NVIDIA’s GTX 1060 3 GB and 6 GB do not use the same chip — the one on the 3 GB card has some parts of it disabled, making it all-around 10% slower than its bigger brother (read more on that here).

Concerning the performance of these cards, the RX 480 8 GB launched with a measurable disadvantage compared to the GTX 1060 6 GB, but AMD was able to reduce that difference with constant driver updates, as we can very well see in HardwareUnboxed’s recent performance overview of the two cards here.

The graph compares the GTX 1060 6 GB vs the RX 480 8 GB. If the percentage is positive, it means that the GTX 1060 6 GB is faster (for instance, Quantum Break performance is 25% better on the GTX 1060 6 GB). If the percentage is negative, it means that the GTX 1060 6 GB is slower (for instance, DOOM performance is 18% worse on the GTX 1060 6 GB).

Source: HardwareUnboxed

As we can see from the graph, wins are fairly evenly distributed between games, so the choice, again, comes down to what you play. It is important to keep in mind that the RX 480 uses DirectX 12 and Vulkan more efficiently, allowing for a bigger performance boost in games built on these new APIs. Nvidia, on the other hand, as the market leader, is able to get many more games optimized for its hardware.

The power use of the cards is similar to their smaller siblings. Again, the RX 480 uses about 50W more, so if power use or heat output are concerns, the 1060 is the better option. High quality heatsinks are recommended for both, but the RX 480 in particular.

Another important factor to consider when choosing between these cards is the monitor. If you’re planning to buy a new monitor, then a FreeSync+AMD GPU combo would be the best bet, as it is far cheaper than a G-Sync+Nvidia GPU combo in this performance range. Adaptive frame synchronization allows for tear- and latency-free gaming, which may in fact be the deciding factor when choosing a graphics card in this price range.

Winner: Tie

The RX 480 is the more future-proof choice here, as it sees significant performance improvements in DirectX 12 gaming. Following recent driver updates, the 480 is also very slightly faster than the 1060 on average. But the GTX 1060 provides steadier framerates, consumes significantly less power, and generates less heat, making it an appealing alternative.

Comparison between GeForce GTX 1060 and Radeon RX 480 (21260-00-20G). Video card selection for Full HD

Full HD monitors are the most common among users, which is confirmed by our site’s visitor statistics. It is in this segment of the market that the fiercest struggle between AMD and NVIDIA is taking place. In 2016, gamers’ minds are divided between the new AMD Radeon RX 480 and NVIDIA GeForce GTX 1060. Both newcomers are built on new process technologies and concentrate the latest advances in graphics processing: 16 nm vs 14 nm, 6 GB vs 8 GB, 120 W vs 150 W TDP, Pascal vs Polaris. The performance level of the AMD Radeon RX 480 and NVIDIA GeForce GTX 1060 is well suited to modern games at Full HD resolution.

Cost is an important consideration: unlike older solutions, these cards are affordable for mainstream buyers. The price gap between them is minimal and fluctuates from store to store. At the time of this comparative review, sales of non-reference versions had not yet started, and there the gap will be more noticeable, not in AMD’s favor.

It’s no secret that the AMD Radeon RX 480 reference card is the most stripped-down version of this product, with a basic cooling system. The entry-level versions of the NVIDIA GeForce GTX 1060, from partners such as Palit and Zotac, cost $40-50 less than the reference card in European stores. For versions with maximum factory overclocking, prices will be about equal. At any price point, the spread will be roughly the same for the AMD Radeon RX 480 and NVIDIA GeForce GTX 1060.

We have the reference versions of these video cards at our disposal, a good opportunity to compare efficiency and performance under the same workloads. Before starting the detailed comparison, I would like to dwell briefly on the appearance of the Radeon RX 480 and GeForce GTX 1060.

The product from NVIDIA is clearly in the lead here, offering a well-thought-out cooling system with an aluminum casing and massive heatsinks. It looks more modern, while AMD has taken the path of maximum savings. This is clearly visible externally and is reflected in the results of measurements of temperature and noise.

Comparison of GeForce GTX 1060 and Radeon RX 480

The test bench is based on an Intel Core i7-6700K processor running at 4700 MHz, 32 GB of DDR4 RAM and an ASUS MAXIMUS VIII Ranger motherboard. Recording is done on a separate system using an AVerMedia Live Gamer Extreme capture card. The video cards were not additionally overclocked, and the official drivers current at the time of testing were installed.


Model Data
Chassis Fractal Design Define S
Motherboard ASUS MAXIMUS VIII RANGER
Processor Intel Core i7-6700K Skylake (oc — 4700 MHz)
Cooler for CPU DeepCool Captain 240 EX
RAM Kingston HyperX Fury DDR4 32GB (oc — 2700MHz)
Hard Drive Intel SSD 530 240GB
Hard Drive 2 WD Red WD20EFRX
Power Supply Zalman ZM850-EBT
Audio Creative Sound BlasterX G5
Monitor iiyama ProLite E2773HDS
Monitor 2 Philips 242G5DJEB
Mouse ROCCAT Kone XTD
Keyboard Razer BlackWidow Chroma
Operating system Windows 10 Pro

Game data is captured at Full HD resolution with similar settings for each game: ultra textures and maximum anti-aliasing. For a visual comparison we present video clips, letting you compare image quality in each game and trace the dynamic change in FPS.

Power consumption

Measurements are based on the system’s peak power consumption under a synthetic load test. Data are given at maximum load.

Temperature

Noise

3DMark Fire Strike Ultra

3DMark Time Spy

All measurements in games are carried out in Full HD resolution with ultra graphics settings.

DirectX 12: Ashes of the Singularity

Ashes of the Singularity is a strategy game from Stardock developed specifically for DirectX 12 and 64-bit systems.

Crysis 3

Hitman Absolution

Tomb Raider

Batman: Arkham Knight

Middle-earth: Shadow of Mordor

Far Cry Primal

Tom Clancy The Division

Doom

GTA V (msaa off)

Fallout 4 Ultra HD

Doom Ultra HD

Tom Clancy The Division Ultra HD

Total War Warhammer Ultra HD

What to choose Radeon RX 480 or GeForce GTX 1060?

Both graphics cards performed well at Full HD resolution with ultra graphics settings. But the final results are clearly not in the Radeon RX 480’s favor. At the same cost, it loses to the GeForce GTX 1060 by 10 to 20% in games, regardless of whether an AMD or NVIDIA logo appears when the game starts. The second point concerns overclocking potential, which is noticeably better on NVIDIA’s side, opening up opportunities for further performance gains while maintaining low temperatures and noise. The software side is also still not in AMD’s favor; it remains far from “plug and play” operation. For example, we tried capturing footage in games and using the standard tools for streaming. Draw your own conclusions, but we will return to this issue when non-reference versions appear.

GeForce GTX 1060 vs. Radeon RX 580 Benchmarking

The GeForce GTX 1060 series has become a successful mid-range solution from NVIDIA. It was initially released in two versions: the older GeForce GTX 1060 received 6 GB of memory and the full configuration of GP106 GPU compute units, while the younger version received 3 GB of memory and fewer active GPU units. Later, the GeForce GTX 1060 9Gbps, with an increased GDDR5 memory frequency, went on sale. AMD has also updated its lineup, introducing the Radeon RX 480 last year and the faster Radeon RX 580 this year. Many of these graphics accelerators have been featured in our latest gaming reviews, but now we have decided to bring them together in one comparison test.

Let’s focus on pitting the different versions of the GeForce GTX 1060 6GB against the Radeon RX 580 and Radeon RX 480. We will test them at stock frequencies and overclocked, to fully evaluate the potential of each video card. Since we are talking about several models within each series that differ only in frequency, testing was carried out using two physical video cards.

Representing NVIDIA is the Inno3D iChill GeForce GTX 1060 6GB X3.

This graphics card operates at increased frequencies out of the box. For stock testing, the graphics core frequencies were reduced to cap Boost at 1860 MHz, which, according to reviews on other resources, is close to the behavior of simple reference variants of the GeForce GTX 1060. The effective memory frequency is 8000 MHz. For the GeForce GTX 1060 9Gbps, similar core frequencies with Boost up to 1860 MHz were set, with the GDDR5 memory frequency manually increased to 9000 MHz. The card was overclocked as follows: up to 2101 MHz on the core (base frequency 1704 MHz) and 4743 (9486) MHz on the memory. Read more here.

AMD’s competitor is the Sapphire Pulse Radeon RX 580 8GD5. This accelerator will stand in for both the Radeon RX 580 and the Radeon RX 480.

The standard specifications are 1257-1340 MHz for the new AMD card and 1120-1266 MHz for the older one, both running their memory at an effective 8 GHz. According to reviews, plain Radeon RX cards show floating frequencies, with the average value below the declared level. Therefore, for the stock modes, the frequency was slightly lowered relative to the declared maximum: to 1245 MHz for the Radeon RX 480 and to 1315 MHz for the Radeon RX 580.

As a result, each video card stands in for two models at stock settings and is additionally tested while overclocked.

Characteristics of test participants

                              GTX 1060 9Gbps   GTX 1060 6GB   Radeon RX 580   Radeon RX 480
Architecture                  Pascal           Pascal         GCN 1.4         GCN 1.4
GPU codename                  GP106            GP106          Polaris 10      Polaris 10
Transistors, million          4400             4400           n/a             n/a
Process technology, nm        16               16             14              14
Die area, mm²                 200              200            232             232
Stream processors             1280             1280           2304            2304
Texture units                 80               80             144             144
ROPs                          48               48             32              32
Core frequency, MHz           1506-1708        1506-1708      1257-1340       1120-1266
Memory bus, bit               192              192            256             256
Memory type                   GDDR5            GDDR5          GDDR5           GDDR5
Effective memory clock, MHz   9000             8000           8000            8000
Memory size, MB               6144             6144           8192            8192
Interface                     PCI-E 3.0        PCI-E 3.0      PCI-E 3.0       PCI-E 3.0
TDP, W                        120              120            185             150

Test bench

  • Processor: Intel Core i7-6950X @ 4.1 GHz
  • Motherboard: MSI X99S MPOWER
  • Memory: DDR4 Kingston HyperX HX430C15SB2K4/16, 3000 MHz, 4x4 GB
  • Hard drive: Hitachi HDS721010CLA332, 1 TB
  • Power supply: Seasonic SS-750KM
  • Operating system: Windows 10 Ultimate x64
  • GeForce driver: NVIDIA GeForce 382.53
  • Radeon driver: Crimson Edition 17.6.2

Test data in 3DMark is shown first; gaming applications are listed alphabetically.

Tests were performed in one or two modes. In heavy applications, tests were carried out at a resolution of 1920×1080 with different graphics settings. In some applications with low system requirements, two resolutions are involved (1920×1080 and 2560×1440) at Ultra quality. More details about the settings and modes are in the methodology section.

Test Method

3DMark Fire Strike

Testing was carried out exclusively in Extreme mode at a high resolution of 2560×1440. The graph shows the total score on the left, GPU Score on the right.

3DMark Time Spy

Testing at default settings. On the left is the GPU Score, on the right is the overall result.

Battlefield 1

The first mission, “Cape Helles”, from the “The Runner” story campaign was selected for testing. Explosions are constantly generated around the player, there is a lot of smoke, and performance sags more than in many other battle scenes.

This episode will allow you to compare video cards under extreme load. Set to maximum Ultra-quality with standard TAA anti-aliasing.

Deus Ex: Mankind Divided

The walk was repeated with a certain sequence of actions at the Zahrad level. Frame rate was measured by Fraps.

The Ultra quality profile is selected when rendering in DirectX 12. A certain sequence of actions in the port area was replayed immediately after arriving in the city of Karnak.

Highest graphic quality and HBAO+ quality shading selected.

Fallout 4

Testing was carried out in manual mode by replaying the same game scene with strict adherence to the sequence of actions. Frame rate was measured by Fraps.

Ultra quality profile selected, additional HBAO+ enhanced shading enabled. More about it in a separate article.

For Honor

For testing, we used the built-in gaming benchmark, which was run at least 7 times for each video card.

The default Very High quality profile is selected. Since the video cards showed a high frame rate at 1920×1080, additional testing was carried out at a resolution of 2560×1440.

Gears of War 4

For testing, we used the built-in game benchmark, which was run 6-7 times for each video card.

Maximum Ultra quality configuration selected, game running in a DirectX 12 environment. Two resolutions involved: 1920×1080 and 2560×1440.

Grand Theft Auto 5

The game has a built-in benchmark, which is used for this test. Based on the data on the average fps of each scene, the average result for the entire benchmark was calculated. The minimum fps was fixed based on the results of passing all the test scenes using Fraps.
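The per-scene averaging described above can be sketched simply. The scene FPS values below are made up for illustration; the minimum FPS, as noted, is taken separately from the Fraps log rather than from these averages.

```python
# Hypothetical average FPS for each scene of the built-in GTA V benchmark
scene_avg_fps = [72.4, 65.1, 80.3, 58.9, 69.7]

# Overall benchmark result = mean of the per-scene averages
overall = sum(scene_avg_fps) / len(scene_avg_fps)
print(round(overall, 1))
```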

One resolution, but at extreme quality settings. The maximum graphics quality is selected with MSAA 4x anti-aliasing, all shadow parameters are activated in the advanced settings section, the loading range is increased to the maximum. Read more about the influence of these parameters on the quality of graphics in a special article.

Hitman

Fraps manual testing was used for our comparison. The average and minimum frame rates were measured during the introductory sequence to the training mission. At least 6 repetitions were performed.

Maximum graphics quality in DirectX 11.

Mass Effect: Andromeda

Testing with Fraps on the planet Eos. There was a trip through the desert area and a walk through the base with many buildings and vegetation.

Ultra graphics quality set to maximum, single resolution enabled.

Styx: Shards of Darkness

Testing was carried out on the flying ship level as part of the Diplomatic Vessels mission. In this well-developed location, performance is noticeably lower than in most dark levels. A certain sequence of actions was repeated, the frame rate was measured by Fraps.

Epic graphics quality, the highest setting, is selected. Two resolutions are involved.

Titanfall 2

Testing was carried out at the first level, where a certain sequence of actions was repeated over a large area. The frame rate was measured by Fraps, at least 7 repetitions were performed for each video card.

All graphics settings are set to maximum quality, with texture quality at Insane. Adaptive Resolution FPS Target was disabled to ensure a fixed resolution. Two resolutions are involved: in Full HD, heavy MSAA 4x anti-aliasing was used; at 2560×1440, the simpler TSAA anti-aliasing was activated.

Tom Clancy’s Ghost Recon: Wildlands

Built-in game benchmark used. It was run 7 times for each video card.

Tests were carried out only in Full HD resolution, but in two modes: maximum graphics quality Ultra and high Very High.

Tom Clancy’s The Division

For testing, we used the built-in game benchmark, which was launched 7 times.

Maximum quality profile selected; reflections, detail and ambient occlusion (the HBAO+ option) were additionally raised to the limit. The tests were conducted with DirectX 11 and DirectX 12.

War Thunder

The built-in “Tank battle” performance test was used. It was run 6-7 times for each video card.

The highest graphics quality is selected, above the standard Cinema profile, with all sliders at the maximum position. Among the anti-aliasing modes, quality AA 4x is selected, without activating FXAA. Two resolutions are involved.

Watch Dogs 2

A test scene with a run around the Palo Alto area was selected. The route runs through the city streets and a nearby green area with thick grass and a small grove. This is a heavy test scene that allows comparing performance in demanding gaming moments.

Tested only at 1920×1080 resolution in two modes. First, the standard Ultra-quality profile is set, in the second mode, temporal filtering is additionally activated.

Witcher 3: Wild Hunt

Story prologue. The hero and his partner are galloping along the road to the village. This moment is chosen for performance measurement using Fraps. Six repetitions for each video card.

One full HD test resolution.