Tom’s Hardware Graphics Cards: Best Graphics Cards 2022 — Top Gaming GPUs for the Money

Asus GeForce RTX 3090 Ti Review: Witness the Power

Tom’s Hardware Verdict

Asus takes the GeForce RTX 3090 Ti and gooses it to even higher performance levels with a factory overclock. It’s now the fastest graphics card we’ve tested, but we’re nearing the launch window for next-gen GPUs. That, plus the high price and power draw, makes this card a highly questionable choice.

Pros
  • Fastest GPU currently available

  • Decent factory overclock

  • 21Gbps GDDR6X with improved VRAM cooling

  • Fast for content creation workloads

The GeForce RTX 3090 Ti launched two weeks back, and we’re finally done with our testing and evaluation. It now reigns as the king of graphics cards, surpassing its 3090 predecessor by up to 10% — provided you’re testing at 4K. But that performance comes at a cost, and not just in terms of dollars. The RTX 3090 Ti also takes the crown as the single most power hungry GPU we’ve ever tested (not including dual-GPU solutions like the 2014-era Titan Z), pushing the limits of how much juice a graphics card can guzzle.

If all you want is the fastest GPU possible, efficiency be damned, this is now the best graphics card and the top solution in our GPU benchmarks hierarchy. But much like sports car enthusiasts might look at a Ferrari or Lamborghini with no intention of buying one, most PC gamers will want to stick with the RTX 3080 or RTX 3080 Ti and give this a pass.

Similar to the recent RTX 3080 12GB, Nvidia chose not to sample reviewers directly for the RTX 3090 Ti. It suggested reaching out to the AIC (add-in card) partners, and Asus supplied us with its RTX 3090 Ti TUF Gaming OC.

Graphics Card | Asus RTX 3090 Ti TUF Gaming OC | RTX 3090 Ti | RTX 3090 | RTX 3080 Ti | RTX 3080 | RX 6900 XT
Die Size (mm^2) | 628.4 | 628.4 | 628.4 | 628.4 | 628.4 | 519
SMs / CUs | 84 | 84 | 82 | 80 | 68 | 80
GPU Cores | 10752 | 10752 | 10496 | 10240 | 8704 | 5120
Tensor Cores | 336 | 336 | 328 | 320 | 272 | N/A
RT Cores | 84 | 84 | 82 | 80 | 68 | 80
Boost Clock (MHz) | 1950 (OC mode) | 1860 | 1695 | 1665 | 1710 | 2250
VRAM Speed (Gbps) | 21 | 21 | 19.5 | 19 | 19 | 16
VRAM (GB) | 24 | 24 | 24 | 12 | 10 | 16
VRAM Bus Width | 384 | 384 | 384 | 384 | 320 | 256
ROPs | 112 | 112 | 112 | 112 | 96 | 128
TMUs | 336 | 336 | 328 | 320 | 272 | 320
TFLOPS FP32 (Boost) | 41.9 | 40 | 35.6 | 34.1 | 29.8 | 23
TFLOPS FP16 (Tensor) | 168 (335) | 160 (320) | 142 (285) | 136 (273) | 119 (238) | N/A
RT TFLOPS | 81.9 | 78.1 | 69.5 | 66.6 | 58.1 | N/A
Bandwidth (GBps) | 1008 | 1008 | 936 | 912 | 760 | 512
TDP (watts) | 480 | 450 | 350 | 350 | 320 | 300
Launch Date | Mar 2022 | Mar 2022 | Sep 2020 | Jun 2021 | Sep 2020 | Dec 2020
MSRP | $2,099 | $1,999 | $1,499 | $1,199 | $699 | $999
Online Price | $2,149 | $2,008 | $1,919 | $1,299 | $969 | $1,149

The RTX 3090 Ti represents the culmination of Nvidia’s Ampere architecture, featuring the now fully enabled GA102 GPU. That’s the same GPU in the 3090, 3080 Ti, and both variants of the 3080, just with two extra SMs compared to the GeForce RTX 3090 that launched way back in September of 2020. 19 months later, we’re getting a minor boost to core counts, a modest boost to clock speeds — on both the GPU and the GDDR6X memory — and a rather large kick in the pants to the price and power consumption.

That last bit is sort of interesting. We recently reported on some testing by Igor’s Lab where he limited the 3090 Ti to 300W. It dropped performance down to the level of the RTX 3080 Ti, but with lower power than the RTX 3080 and even AMD’s RX 6800 XT (using non-reference cards). Nvidia has effectively gone about as far as possible to the right on the voltage, power, and frequency curve, eking out the last few ounces of performance. Then Asus takes that just a bit further and squeezes another 90MHz out of the chip.

We do have to wonder how much of the power goes to the GDDR6X memory, which is notorious for using power and generating heat. Nvidia has switched to 16Gb modules rated at 21Gbps for the 3090 Ti, so the memory can all be on one side of the PCB and thus benefits from improved cooling — and we saw that in our testing as well, with the Asus card never getting above 100C on the GDDR6X, regardless of workload. We even overclocked the memory to 23Gbps and still stayed under 100C while running a mining test — but only managed 124 MH/s sustained for Ethereum, which is just 2 MH/s higher than a good RTX 3090, despite the difference in memory speed.

On paper, the RTX 3090 Ti is 12% faster than the 3090 on compute and has 8% more memory bandwidth. Running in OC mode (not the default Gaming mode), Asus tacks on 5% in core clocks, meaning in theory the card could be up to 18% faster than the reference RTX 3090. In practice, it will be quite a bit less than that, as we’ll see soon.
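For those who want to check the math, here’s a minimal Python sketch of where those paper figures come from, using the numbers in the spec table above. Ampere does 2 FP32 operations per CUDA core per clock, and bandwidth is the per-pin data rate times the bus width; treat this as back-of-the-envelope arithmetic, not a benchmark.

```python
# Back-of-the-envelope arithmetic for the paper specs above (not a benchmark).
def fp32_tflops(cuda_cores, boost_mhz):
    # Ampere executes 2 FP32 operations per CUDA core per clock.
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

def bandwidth_gbps(vram_gbps, bus_width_bits):
    # Memory bandwidth in GB/s: per-pin rate (Gbps) times bus width, divided by 8 bits/byte.
    return vram_gbps * bus_width_bits / 8

rtx_3090    = fp32_tflops(10496, 1695)   # ~35.6 TFLOPS
rtx_3090_ti = fp32_tflops(10752, 1860)   # ~40.0 TFLOPS
asus_oc     = fp32_tflops(10752, 1950)   # ~41.9 TFLOPS in OC mode

print(f"3090 Ti vs 3090 compute: +{rtx_3090_ti / rtx_3090 - 1:.0%}")  # ~12%
print(f"Asus OC vs 3090 compute: +{asus_oc / rtx_3090 - 1:.0%}")      # ~18%
print(f"Memory bandwidth: +{bandwidth_gbps(21, 384) / bandwidth_gbps(19.5, 384) - 1:.0%}")  # ~8%
```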


Asus GeForce RTX 3090 Ti TUF Gaming OC

(Image credit: Tom’s Hardware)

Asus provided its GeForce RTX 3090 Ti TUF Gaming OC for this review, a large card that includes the relatively common (for an extreme GPU) triple fans, and occupies 3.2 slots. It’s not quite as chunky as the RTX 3090 Ti Founders Edition, which is slightly thinner but weighs 2189g (or at least, that’s what the 3090 FE weighed) and measures 313x138x57 mm. The Asus card tips the scale at ‘only’ 1676g, practically a featherweight! But it measures 326x104x63mm. It’s a physically impressive card, which is a bit of a given considering the silicon and its cooling needs.

The packaging is equally imposing, with a long box that consists of a sheath over a slightly unusual interior box that has chopped-off corners. It looks somewhat cool, but it was actually quite difficult to open. For something that’s just going to end up on a shelf or recycling center, Asus probably could have stuck with a traditional box.

The Asus RTX 3090 Ti TUF includes three DisplayPort 1.4 and two HDMI 2.1 outputs, which is slightly unusual as most cards these days only include up to four outputs. The IO bracket is still only two slots wide, which seems a bit odd considering the card more than occupies three slots. A wider bracket would have provided a bit of extra support, and unlike EVGA’s GPU leash, Asus doesn’t include anything extra in the package to deal with card sag.

Power comes via the new PCIe 5.0 16-pin connector, which is compatible with Nvidia’s 12-pin connector as well. Included in the package is an adapter that takes three 8-pin inputs to drive the 12-pin output, providing a theoretical (in spec) power delivery of up to 450W, plus another 75W from the PCIe slot. While we don’t typically disassemble GPUs, it’s worth noting that Asus has a dedicated VRAM heatsink that’s designed to help wick heat away from the memory. As we’ll see later, it definitely works, and memory temperatures weren’t an issue during testing.
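As a quick sanity check on that budget, the arithmetic is simple, assuming the standard 150W rating per 8-pin connector:

```python
# In-spec power budget with the bundled adapter: three 8-pin inputs plus the PCIe slot.
EIGHT_PIN_W = 150   # per-connector rating in the PCIe spec
PCIE_SLOT_W = 75    # power available from the x16 slot
adapter_inputs = 3

budget_w = adapter_inputs * EIGHT_PIN_W + PCIE_SLOT_W
print(f"Theoretical in-spec budget: {budget_w}W")  # 525W, above the card's 480W rated power
```

That 525W ceiling leaves a little margin above the card’s 480W rated power, which is presumably why Asus went with three inputs rather than two.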

Unlike Asus’ higher tier ROG Strix line, the amount of RGB lighting on the TUF Gaming is relatively subdued. Only the small trapezoidal TUF logo on the top of the card lights up — there’s no lighting on the fans or the rest of the shroud. Those who like a lot of bling will probably want to look at other alternatives, but if you prefer a more subdued look, the TUF Gaming performs well and has everything you could want.

Test Setup for GeForce RTX 3090 Ti

(Image credit: Tom’s Hardware)

TOM’S HARDWARE 2022 GPU TEST PC

Intel Core i9-12900K
MSI Pro Z690-A WiFi DDR4
Corsair 2x16GB DDR4-3600 CL16
Crucial P5 Plus 2TB
Cooler Master MWE 1250 V2 Gold
Cooler Master PL360 Flux
Cooler Master HAF500
Windows 11 Pro 64-bit 

Our GPU test PC and gaming suite was updated in early 2022. We’re now using a Core i9-12900K processor, MSI Pro Z690-A DDR4 WiFi motherboard, and DDR4-3600 memory (with XMP enabled). We also upgraded to Windows 11 to ensure we get the most out of Alder Lake. You can see the rest of the hardware in the boxout.

Our gaming tests consist of a “standard” suite of eight games without ray tracing enabled (even if the game supports it), and a separate “ray tracing” suite of six games that all use multiple RT effects. For this review, we’ll be testing at 4K, 1440p, and 1080p at “ultra” settings — which generally means maxed out settings, except without SSAA if that’s an option. We also enable DLSS Quality mode in the games that support it, which includes all of the ray-tracing suite and three of the games in the standard suite.

Besides the gaming tests, we also have a collection of professional and content creation benchmarks that can leverage the GPU. We’re using SPECviewperf 2020 v3, Blender 3.1.0, OTOY Octane, and V-Ray. Those last three all focus on 3D rendering and support Nvidia’s RTX GPUs; only Blender 3.1.0 currently provides GPU rendering acceleration on AMD’s RX 6000 cards. SPECviewperf consists of a suite of professional applications, including CAD/CAM, medical, and 3D rendering.

Asus GeForce RTX 3090 Ti: 4K Gaming Performance

(Image credit: Tom’s Hardware)

This is currently the fastest and most expensive graphics card, so 4K ultra makes sense — and Nvidia even pushes 8K gaming as an option, though that will generally require DLSS support to get to reasonable framerates. We can’t test 8K, as we don’t have an 8K monitor, but the 4K results should at least give you an idea of what to expect when you try to render four times as many pixels.

Keeping in mind that we’re dealing with a factory overclocked card compared to a bunch of reference clocked models, we’re still pleasantly surprised to see a solid 10% performance uplift overall, when comparing the Asus RTX 3090 Ti TUF Gaming OC against the RTX 3090 Founders Edition. That might not seem like much, considering the $500 (theoretical) increase in price, but let’s just point to the 3090 and 3080 Ti: $300 more in that case only gets you 3.4% more performance on average. Another interesting comparison of course is the RTX 3080 Ti against the original 3080 10GB, where $500 extra also bought 10% more performance. Against the RTX 3080, which remains our pick for the best graphics card, even when priced closer to $1,000 than its official $699 MSRP, the RTX 3090 Ti delivers 25% better performance overall.

Let’s also not count AMD out. While there’s no question the RTX 3090 Ti is faster than AMD’s RX 6900 XT, it’s only an 18% gap in our standard test suite on average. That’s in the best-case scenario for the Nvidia GPU, testing at 4K ultra. There are even games like Forza Horizon 5 where the 6900 XT still comes out ahead, albeit by a slim 4% margin. Generally speaking, Nvidia can win via brute force, but it’s using about 50% more power and costs about twice as much as AMD’s top offering.

Nvidia also likes to promote DLSS, not just as a solution for games with ray tracing, but for any game. Using the Quality mode, which at 4K looks very nearly the same as native, let’s check out the three games that support DLSS. Horizon Zero Dawn performance improves by 30%, Watch Dogs Legion gets a 35% boost… and Red Dead Redemption 2 only gets a 15% improvement. Not all game engines are created equal, and apparently the way DLSS was shoehorned into RDR2 — over 18 months after launch, no less — proves this point. The gains are even less at lower resolutions.

(Image credit: Tom’s Hardware)

Flipping over to our ray tracing test suite, the DLSS story changes completely. DLSS might be a nice extra on a card like the Asus RTX 3090 Ti in traditional games, but if you want to run maxed out ray traced settings, it becomes absolutely necessary at 4K. The 3090 Ti barely manages to squeak past the 30 fps mark at native, and half of the games fell well short of that mark. Turn on DLSS and performance nearly doubles, from 32 fps to 60 fps.

We don’t have DLSS results for all of the cards in our charts, just because they start to get very crowded, so we’ll just focus on the native performance for the remaining comparisons. It’s interesting to see the Asus 3090 Ti outpace the reference RTX 3090 by 14% overall, which is close to the theoretical maximum. The Asus card has a 1950MHz boost clock in OC mode, whereas the RTX 3090 Founders Edition has a 1695MHz boost clock, giving a maximum difference of 15%.

While a 14% gap might not seem like much, again look at the other cards. The 3090 is only 3% faster than the 3080 Ti, which is 16% faster than the 3080 10GB card. Note also that the overclocked MSI 3080 12GB is 12% faster than the 3080 Founders Edition, nearly matching the 3080 Ti, so memory bandwidth is certainly a big factor in overall performance at 4K. The Asus 3090 Ti still only beats the RTX 3080 by 37%, so it’s very much a case of diminishing returns. Meanwhile, it’s 88% faster than AMD’s best — and 250% faster if we enable DLSS.

It will be interesting to see how much uptake there is for AMD’s FSR 2.0 once it releases to the public in the next couple of months. It will also be interesting to see if AMD starts equipping its future GPUs with matrix hardware (i.e., tensor cores), which is something Intel is doing as well with its Arc graphics cards. Considering Intel’s XeSS is more of a direct competitor to DLSS, and it will also work on non-Intel GPUs, perhaps AMD will join Intel in combating DLSS in the future. It sure would be nice if we could have one universal solution for upscaling that all three companies could get behind, but I’ll eat my GPU hat if that happens.

Asus GeForce RTX 3090 Ti: 1440p Gaming Performance

(Image credit: Tom’s Hardware)

Dropping down to 1440p, the advantage of the 3090 Ti relative to the competition shrinks a bit. It’s now only 7% faster than the vanilla RTX 3090, 20% faster than the RTX 3080 10GB, and 6% faster than the RX 6900 XT. Without a faster CPU providing data to the graphics cards, we’re already starting to see CPU bottlenecks — Far Cry 6, Flight Simulator, and Horizon Zero Dawn are clearly running out of steam.

DLSS quality mode further proves that point. The overhead associated with DLSS means it only boosts performance a small amount in some cases. Watch Dogs Legion still got a 17% uplift, but performance in Horizon Zero Dawn and Red Dead Redemption 2 only went up about 7%. Buying a card like the RTX 3090 Ti for traditional games, even at maxed out settings, doesn’t make a lot of sense unless you have a 4K or perhaps ultrawide 1440p monitor.

(Image credit: Tom’s Hardware)

But, if you’re really into ray tracing effects and the most demanding games around, native 1440p can very much make use of a faster GPU. The Asus 3090 Ti still beat the vanilla 3090 by 13% on average, for example. It was also 30% faster than the 3080 10GB, and 75% faster than AMD’s RX 6900 XT — and again, enabling DLSS pays huge dividends, improving the Asus card’s results by 60% overall.

Looking at the individual charts, the Asus 3090 Ti just barely breaks 60 fps on average in our six DXR games, but it falls below 50 fps in three of the games. With DLSS enabled, all six games are comfortably above the 60 fps mark, and if you have a high refresh rate G-Sync (or G-Sync Compatible) display, you can get a very smooth gaming experience.

But who are we kidding? This card is very much overkill for most gamers. You’ll be better off waiting for the next generation GPUs later this year rather than plunking down two grand on just a graphics card now.

Asus GeForce RTX 3090 Ti: 1080p Gaming Performance

(Image credit: Tom’s Hardware)

Okay, we see you rolling your eyes over there. Yes, 1080p gaming on an RTX 3090 Ti represents a very imbalanced workload. It’s still 5% faster than the old RTX 3090, but you could get most of that with a factory overclocked 3090 card 18 months ago. For nearly triple the theoretical price, the RTX 3090 Ti is only about 15% faster than the RTX 3080 at 1080p. It’s also basically tied with the RX 6900 XT, which now claims wins in half of the games in our test suite.

Not surprisingly, DLSS can’t do much here either. The best result was in Red Dead Redemption 2, where performance improved by 4%. Not that you really need DLSS at 1080p with this sort of GPU, but CPU bottlenecks are very present in nearly all of the games. Flight Simulator is particularly bad, with performance that’s only slightly higher than at 1440p, meaning it’s almost entirely CPU limited.

(Image credit: Tom’s Hardware)

Our DXR test suite still proves pretty demanding, however. All of the games can break 60 fps and then some, but the 3090 Ti still averaged 11% higher performance than the reference RTX 3090, and 27% better performance than the RTX 3080. AMD’s RX 6900 XT still trails by 40%, and it can’t even average 60 fps in four of the six games we tested. Turn on DLSS and the Nvidia advantage grows even more.

Based on what we know of the hardware, AMD’s ray accelerators are about half as fast as the RT cores in Nvidia’s Ampere GPUs. It’s actually even worse than that, considering AMD’s GPU clocks at over 2.3GHz in testing, while the Asus 3090 Ti sits closer to 2GHz. So 84 RT cores against 80 ray accelerators, and even at 1080p ultra the Nvidia GPU is 66% faster on average.
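To put rough numbers on that, here’s a small sketch that backs the implied per-clock ray tracing throughput out of the measured gap. It’s a crude model that ignores CPU limits, memory behavior, and game-to-game variance, so treat the result as illustrative only.

```python
# Back out the implied per-clock RT throughput from the measured gap (a crude model that
# ignores CPU limits and memory effects; clocks are approximate sustained gaming clocks).
nvidia_rt_units, nvidia_ghz = 84, 2.0    # Asus RTX 3090 Ti
amd_rt_units, amd_ghz       = 80, 2.3    # Radeon RX 6900 XT
measured_speedup = 1.66                  # Nvidia measured ~66% faster at 1080p ultra DXR

# perf ~ units * clock * per_clock_rate  =>  solve for AMD's per-clock rate relative to Ampere
amd_per_clock = (nvidia_rt_units * nvidia_ghz) / (amd_rt_units * amd_ghz * measured_speedup)
print(f"Implied AMD per-clock RT rate vs. Ampere: {amd_per_clock:.2f}x")  # ~0.55x
```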

It’s also interesting to look at Nvidia’s previous generation Turing GPUs, though. The RTX 2080 Ti has 68 RT cores running at around 1.7GHz, and it’s still within striking distance of AMD’s RX 6900 XT. The 2080 Ti was faster in Cyberpunk 2077 and Minecraft, while AMD’s card came out ahead in the other games. How much will AMD improve RT performance with RDNA3, and will Nvidia also find new ways to improve performance with Ada? We’ll find out later this year.

Asus GeForce RTX 3090 Ti: Professional and Content Creation Performance

GPUs aren’t just for gaming; they can also be used for professional workloads, AI training and inferencing, and more. We’re looking to expand some of our GPU testing, particularly for extreme GPUs like the RTX 3090 Ti. For now, we have a few 3D rendering applications that leverage ray tracing hardware, plus the SPECviewperf 2020 v3 test suite. We’ll start there.

(Image credit: Tom’s Hardware)

SPECviewperf 2020 consists of eight different benchmarks. We’ve also included an “overall” chart that uses the geometric mean of the eight results to generate an aggregate score. Note that this is not an official score, but it gives equal weight to the individual tests and provides a high-level overview of performance. Few professionals use all of these programs, however, so it’s generally more important to look at the results for the applications you plan to use.

What’s immediately interesting is just how far ahead of the other GPUs the Titan RTX sits. That’s because Nvidia provides some driver level enhancements to its Titan cards, and despite the Titan-esque price the RTX 3090 Ti doesn’t get the same treatment. Flipping through the individual test results, it’s mostly thanks to a massive lead in the snx-04 (Siemens NX) test that the Titan RTX gets the overall lead, though it also ranks first in the catia-06 and creo-03 tests.

The RTX 3090 Ti does nab several victories as well, claiming top marks in 3dsmax-07, energy-03, maya-06, and solidworks-07. AMD’s GPUs meanwhile deliver mixed results. They’re in the bottom half of the 3dsmax, catia, creo, and maya charts, but the RX 6900 XT takes second place in the energy test suite, gets the top result in medical-03, and the AMD cards are over three times as fast as the GeForce cards in snx-04 — only the Titan RTX beats them, by another 4X factor.

(Image credit: Tom’s Hardware)

Next up, Blender is a popular rendering application that has been used to make full-length films. We’re using the latest Blender Benchmark, which uses Blender 3.1.0 and three tests. Blender 3.1.0 includes the new Cycles X engine that leverages ray tracing hardware on both AMD and Nvidia GPUs.

As with the DXR gaming test suite, AMD’s GPUs fall far behind Nvidia’s offerings when it comes to RT hardware, at least as evidenced by Blender. Overall, the RTX 3090 Ti delivers over three times the performance of the RX 6900 XT. 

We also uncovered a bug with AMD’s current drivers, where having PCIe Resizable BAR enabled caused a massive hit to Blender 3.1.0 rendering performance. That should be corrected in a future driver release, but if you’re using Blender on an AMD GPU right now, you’ll want to disable ReBAR in the BIOS.

(Image credit: Tom’s Hardware)

OTOY Octane is another popular rendering engine, but unlike Blender, it doesn’t have support for AMD’s RX 6000 series GPUs. As such, we’ve limited our charts to Nvidia GPUs, where the RTX 3090 Ti unsurprisingly takes the top spot, delivering 6% higher performance than the standard RTX 3090. Most likely, these rendering applications depend more on memory bandwidth than GPU compute, which is why all three end up being around 6% faster with the 3090 Ti.

Nvidia also notes that there are rendering workloads that the RTX 3090 Ti (and 3090 and Titan RTX) can handle that simply fail on GPUs that don’t have as much VRAM. This is true, and it’s why true professional GPUs like the Nvidia RTX A6000 come with a whopping 48GB of VRAM. Comparative benchmarks however become meaningless when you can’t even run the test on most graphics cards.

(Image credit: Tom’s Hardware)

Like Octane, Chaos V-Ray also lacks support for AMD’s GPUs at present, so we’ve only tested the Nvidia RTX cards. Also like Octane, it has an older CUDA rendering path as well as support for a newer RTX path that leverages Nvidia’s RT cores. The RTX mode boosts performance by about 30% on the newer Ampere GPUs, while the Titan RTX ran 40% faster.

[Note: We’re still looking for a good AI / machine learning benchmark, “good” meaning it’s easy to run, preferably on Windows systems, and that the results are relevant. We don’t want something that only works on Nvidia GPUs, or AMD GPUs, or that requires tensor cores. Ideally, it will use tensor cores if available (Nvidia RTX and Intel Arc), or GPU cores if not (GTX GPUs and AMD’s current consumer lineup). If you have any suggestions, please contact me — DM me in the forums, or send me an email. Thanks!]

Asus GeForce RTX 3090 Ti: Power, Temps, Noise, Etc.

(Image credit: Tom’s Hardware)

Up to now, we’ve talked about performance and hinted that power use might be just a bit high. Now’s the time for the rubber to meet the road as we check out real-world power consumption, using our Powenetics testing hardware and software. We capture in-line GPU power consumption by collecting data while looping Metro Exodus at 1440p ultra as well as while running the FurMark stress test at 1600×900. Our test PC for power testing remains the same old Core i9-9900K as we’ve used previously, to keep results consistent. We tested the Asus card in all three of its standard modes: Default (Gaming), OC, and Silent. These modes can be selected in Asus’ GPU Tweak II utility.

(Image credit: Tom’s Hardware)

So, this is interesting. Starting with the Metro Exodus results, we’re probably running into CPU bottlenecks. Peak power use was “only” 438W, in both the default mode as well as in OC mode. Silent mode did drop power use a bit, but unfortunately it looks like we may need to update our testing methodology for cards like this.

Flipping over to FurMark, at least we got some separation, and this likely tracks closer with the limits imposed by the card’s firmware. In the default mode, we saw average power use of nearly 470W, the OC mode bumped that up to over 490W, and the Silent mode dropped the card to just over 440W. That’s now the highest power use we’ve seen from a GPU in recent years without end-user overclocking, by about 80W. Perhaps there are some custom RTX 3090 or RX 6900 XT cards that came close to this level, but we didn’t get those in for testing.

These results aren’t particularly surprising. More GPU cores at much higher clocks, plus higher-clocked GDDR6X memory, all combine to substantially increase power consumption. I used to think the early rumors of 600W RTX 4090 cards later this year were ludicrous. Now such talk feels more like an inevitability.

(Image credit: Tom’s Hardware)

Even in the “slowest” Silent profile, average clocks during our Metro testing were at 2GHz, and the OC mode boosted that up to 2074MHz. We’ve seen much higher core clocks from AMD’s RX 6000 GPUs, but they’re architected to run at higher speeds and tend not to be quite so power hungry. If the move from the 3090’s 1850MHz to the 3090 Ti’s 2074MHz needs 75W more power, just imagine how much juice a 2.5GHz Ampere GPU would need! Or don’t — we’ll probably find out with Ada, though.

Clock speeds in FurMark were quite a bit slower, as expected. Unlike in a game, FurMark puts a major load on the GPU and tends to use more power per clock than just about any other workload. Clocks averaged just over 1.6GHz here with the OC profile, 1.44GHz with the default Gaming profile, and 1.23GHz in Silent mode.

(Image credit: Tom’s Hardware)

Asus does a great job at keeping the 3090 Ti GPU cool, with average temperatures of 65C in all three of the performance profiles while running Metro. FurMark as usual pushed things a bit harder, but only up to 67C in the OC mode. Fan speeds basically followed temperatures. In both the gaming and FurMark tests, there was only a 100 RPM difference between the Silent and OC modes.

We measured noise levels at 10cm using an SPL (sound pressure level) meter as well. The meter was aimed right at the GPU fans in order to minimize the impact of other fans like those on the CPU cooler. The noise floor of our test environment and equipment measures 33 dB(A). Because we used a gaming workload for noise testing, there was less of a difference between the modes. After about 15 minutes, the Silent mode stabilized at around 48.9 dB(A), while the Gaming and OC modes were just a hair louder at 49.1 dB(A). GPU Tweak II reported a fan speed of 74%, which means there wasn’t a ton of headroom available, but noise and temperature levels were overall very good.

Asus GeForce RTX 3090 Ti: Hail to the King

(Image credit: Tom’s Hardware)

There are many ways of looking at the GeForce RTX 3090 Ti. It’s a prosumer content creation card that’s only moderately faster than its predecessor, at an even higher price. It’s also the fastest graphics card for gaming currently available, still at an obscenely high price. One thing it’s not is a full Titan RTX replacement, and we can only guess that Nvidia had too many people buying comparatively inexpensive Titan cards and opting out of the former Quadro and current A-series lineups that can cost twice as much.

There’s no question that the RTX 3090 Ti represents a poor value. In terms of FPS per dollar spent, out of the 57 graphics cards we’ve tested in our GPU benchmarks hierarchy, the RTX 3090 Ti ranks 56th — only the Radeon VII represents a worse value. At the same time, we need to put things in perspective. If you’re the type who has the money and wants the fastest hardware possible, the RTX 3090 Ti does improve performance by about 10% over the RTX 3090, which still goes for $1,900 at the time of writing. Again, not that you should buy such a GPU, but by that metric you could argue it’s worth the extra $100.
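For reference, the value metric is as simple as it sounds. Here’s a minimal sketch of the fps-per-dollar calculation, using the 4K ultra averages from our GPU benchmarks hierarchy and official MSRPs for illustration; swap in current street prices and the exact ordering can shift.

```python
# Simple fps-per-dollar value metric (4K ultra averages from our hierarchy, launch MSRPs).
cards = {
    "RTX 3090 Ti": {"fps": 75.7, "price": 1999},
    "RTX 3090":    {"fps": 68.8, "price": 1499},
    "RTX 3080":    {"fps": 60.6, "price": 699},
}
for name, c in sorted(cards.items(), key=lambda kv: kv[1]["fps"] / kv[1]["price"], reverse=True):
    print(f"{name}: {c['fps'] / c['price'] * 100:.2f} fps per $100")
    # RTX 3080 ~8.7, RTX 3090 ~4.6, RTX 3090 Ti ~3.8: classic diminishing returns
```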

The real concern with the RTX 3090 Ti isn’t its performance or price, however, it’s the fact that it comes so late to the Ampere party. The RTX 3090 was released in September 2020, 19 months ago, and all indications are that we’ll get the next-generation Ada GPUs this coming September. We have no idea how much they’ll cost, but there’s effectively zero chance that a hypothetical RTX 4090 won’t be faster than the RTX 3090 Ti — and it might even cost less.

If Nvidia keeps with its recent pattern of GPU launches, though, an RTX 4090 should be the least of your worries. We’ll probably also get an RTX 4080 that delivers 30–40% more performance than the RTX 3090 Ti with a price of $999 or less. There’s also the fact that GPU prices are trending down this year, finally, and have dropped over 35% since the start of the year. We expect that trend to continue, and there are even reports that AMD and Nvidia are both entering a state of oversupply.

However you look at things, splurging on a short-lived king of the hill doesn’t make a lot of sense to us. But if you’re flush with cash, or you do the sort of work where upgrading to a $2,000 graphics card that’s only 5–10% faster than your current card will pay for itself in a few months, this is the card to get. Buyer’s remorse might kick in once the next generation parts arrive, but you can then sell off the 3090 Ti (at a loss) and upgrade again, ensuring you stay on top of the GPU pecking order.

  • MORE: Best Graphics Cards
  • MORE: GPU Benchmarks and Hierarchy
  • MORE: All Graphics Content

Jarred Walton is a senior editor at Tom’s Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge ‘3D decelerators’ to today’s GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

MSI GeForce RTX 3080 12GB Suprim X Review: Ti Fighter

(Image: © Tom’s Hardware)

Tom’s Hardware Verdict

The MSI GeForce RTX 3080 12GB Suprim X manages to effectively tie the RTX 3080 Ti in performance in most situations, thanks to its healthy factory overclock. Unfortunately, retail pricing looks equally high, which eliminates the one potential draw. It ends up as one more RTX 30-series model in an already crowded landscape, at a still-inflated price.

Pros
  • 384-bit memory interface

  • Large factory overclock

  • Matches 3080 Ti in performance

Cons
  • Same price as 3080 Ti

  • Limited supply and high price

  • High power draw

The GeForce RTX 3080 12GB sits in a curious position. We heard various rumors about an updated RTX 3080 floating about before the official ‘stealth’ reveal, when Nvidia quietly updated its RTX 30-series specs page. Will the RTX 3080 12GB manage to land a spot among the best graphics cards? Where does it fit into the GPU benchmarks hierarchy? What will supply look like, and how much will the cards cost? These are all important questions, and we received the MSI GeForce RTX 3080 12GB Suprim X, which will provide at least some answers.

There were earlier suggestions that Nvidia would double down on the memory but keep the same RTX 3080 GPU configuration. Others promised a 12GB 3080 with regular GDDR6 memory, and the third and apparently correct variant was a slightly higher GPU core count with 12GB of GDDR6X memory. That description basically matches the GeForce RTX 3080 Ti, however, which makes us wonder why this particular GPU was even needed.

Nvidia told us that the RTX 3080 12GB was a SKU requested by its add-in board (AIB) partners, so there won’t be any Founders Edition of the card.

Graphics Card | RTX 3080 12GB (MSI Suprim X) | RTX 3080 10GB | RTX 3080 Ti | RX 6900 XT
Die Size (mm^2) | 628.4 | 628.4 | 628.4 | 519
SMs / CUs | 70 | 68 | 80 | 80
GPU Cores | 8960 | 8704 | 10240 | 5120
Tensor Cores | 280 | 272 | 320 | N/A
RT Cores | 70 | 68 | 80 | 80
Boost Clock (MHz) | 1845 | 1710 | 1665 | 2250
VRAM Speed (Gbps) | 19 | 19 | 19 | 16
VRAM (GB) | 12 | 10 | 12 | 16
VRAM Bus Width | 384 | 320 | 384 | 256
ROPs | 96 | 96 | 112 | 128
TMUs | 280 | 272 | 320 | 320
TFLOPS FP32 (Boost) | 33.1 | 29.8 | 34.1 | 23
TFLOPS FP16 (Tensor) | 132 (264) | 119 (238) | 136 (273) | N/A
Bandwidth (GBps) | 912 | 760 | 912 | 512
TDP (watts) | 400 | 320 | 350 | 300
Launch Date | Jan 2022 | Sep 2020 | Jun 2021 | Dec 2020
Official MSRP | $1,249 | $699 | $1,199 | $999
eBay Price (Early 2022) | $1,576 | $1,523 | $1,774 | $1,510

(Image credit: Tom’s Hardware)

While the RTX 3080 Ti has quite a few more GPU cores, SMs, etc., the MSI 3080 12GB sports much higher boost clocks, negating most of the difference. Of course a factory overclocked 3080 Ti would widen the gap again, but either way we’re looking at the same memory configuration and bandwidth, which should be a major factor in determining overall performance. Compared to the original 3080, the 12GB card adds just one more SM cluster (Nvidia can only disable SMs in groups of 2), but the 20% increase in memory and memory bandwidth should prove more useful.
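A quick bit of arithmetic, using the figures from the spec table above, shows why the factory overclock closes most of that gap:

```python
# FP32 throughput scales with cores * clock, which is why the factory overclock closes the gap.
def fp32_tflops(cuda_cores, boost_mhz):
    # Ampere executes 2 FP32 operations per CUDA core per clock.
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

msi_3080_12gb = fp32_tflops(8960, 1845)   # ~33.1 TFLOPS
rtx_3080_ti   = fp32_tflops(10240, 1665)  # ~34.1 TFLOPS
print(f"3080 Ti compute advantage: +{rtx_3080_ti / msi_3080_12gb - 1:.0%}")  # ~3%
```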

Based on the specs, we suspect the real reason for the RTX 3080 12GB goes back to GA102 chip yields. If Nvidia is getting a decent number of chips with all 12 memory controllers working, but with more than 4 SM clusters that are non-functional, it can’t sell those chips for 3080 Ti cards. By dropping down to just 70 SMs, the GPUs can still be used in more expensive cards than the baseline RTX 3080 10GB.

We’ve included both the MSRPs as well as the typical GPU prices from eBay for the past month. The former is of course completely meaningless right now, but we’re happy to see that the 12GB 3080 only costs a bit more than the regular 3080. Part of that comes from the fact that there are still non-LHR 3080 10GB cards in the wild, which tend to go for a premium, but all the 3080 Ti cards are LHR models, as are the 3080 12GB cards, so it looks like you can expect to save perhaps $200 right now by purchasing a 3080 12GB instead of the 3080 Ti.

Note also that the Radeon RX 6900 XT tends to go for a similar price to the RTX 3080, so that will be a good point of comparison. Nvidia theoretically delivers much higher compute performance and memory bandwidth, but AMD provides more memory, and the large 128MB Infinity Cache helps to keep things much closer than the above specs would suggest.

  • MORE: Best Graphics Cards
  • MORE: GPU Benchmarks and Hierarchy
  • MORE: All Graphics Content



New Benchmarks Show GTX 1650 and RX 6400 Outperforming Intel’s Arc A380 Graphics Card


(Image credit: Gunnir)

Intel recently shared performance metrics of its new Arc A380 desktop GPU in 17 gaming titles, with direct comparisons to the GTX 1650 and RX 6400 — which were all tested on the same PC. On average, the A380 lost in comparison to the GTX 1650 and RX 6400, which will make it one of the slowest entry-level GPUs when it arrives on the US market. Even as a budget offering, Intel will have a tough time making our best graphics card list.

The A300 series is Intel’s entry-level desktop GPU, using the smaller “ACM-G11” Arc Alchemist chip. Unlike the mobile A350M and A370M, however, it has all eight of Intel’s Xe GPU cores enabled, alongside the full 96-bit GDDR6 memory interface. That’s nearly the same core configuration as the entry-level Arc A370M mobile GPU, but with 50% more memory, 66% more memory bandwidth, and significantly higher GPU clocks that can reach up to 2.45 GHz.

TBP (typical board power) is also higher at 75W, perhaps more, and Intel’s Arc A380 will come in several variants. Cards that run at less than 75W can get by without a power connector and have a 2 GHz clock speed, cards with up to an 80W TBP will require at least a 6-pin power connector and can run at up to 2.25 GHz, and cards with an 87W or higher TBP can run at 2.35 GHz or more.

We don’t know what card Intel used for the tests, and the Gunnir card images shown here with the 8-pin power connector are for reference purposes only. The test PC was equipped with a Core i5-12600K, 2x16GB DDR4-3200 memory, an MSI Z690-A Pro WiFi DDR4 motherboard (actually the same motherboard we use in our GPU testbed), and a 4TB M600 Pro XT SSD, running Windows 11.

For now, the Arc A380 is the only desktop GPU available to look at on Intel’s Arc website. But according to previous driver leaks, we should expect Intel’s A500 series and A700 series of desktop GPUs to arrive at some point. Here are the numbers, and again these come straight from Intel’s Arc A380 reviewer’s guide — we’re sharing them with permission while we attempt to get a card for our own in-depth testing. Take these figures with a healthy dose of skepticism, in other words, as most manufacturer provided benchmarks attempt to show products in a better light.

Intel Arc A380 GPU Comparison — Intel Provided Benchmarks
Games Intel Arc A380 GeForce GTX 1650 Radeon RX 6400
17 Game Geometric Mean 96.4 114.5 105.0
Age of Empires 4 80 102 94
Apex Legends 101 124 112
Battlefield V 72 85 94
Control 67 75 72
Destiny 2 88 109 89
DOTA 2 230 267 266
F1 2021 104 112 96
GTA V 142 164 180
Hitman 3 77 89 91
Naraka Bladepoint 70 68 64
NiZhan 200 200 200
PUBG 78 107 95
The Riftbreaker 113 141 124
The Witcher 3 85 101 81
Total War: Troy 78 98 75
Warframe 77 98 98
Wolfenstein Youngblood 95 130 96

On average, the Arc A380 lost to the GTX 1650 by 19% and lost to the RX 6400 by 9%. When we compare each GPU on a game-by-game basis, the Arc A380 only beats the RX 6400 in four of the 17 titles and beats the GTX 1650 in one of them (Naraka Bladepoint). There’s also a three-way tie in NiZhan, where all the GPUs managed 200 fps, though we’re not sure why Intel would even bother to include that particular benchmark since it looks like there’s a frame rate cap.
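Those percentages fall straight out of the geometric-mean row of the table above; here’s the arithmetic as a short Python sketch.

```python
# How the "lost by 19% / lost by 9%" figures follow from the geometric-mean row above.
geomean_fps = {"Arc A380": 96.4, "GTX 1650": 114.5, "RX 6400": 105.0}

a380 = geomean_fps["Arc A380"]
for rival in ("GTX 1650", "RX 6400"):
    advantage = geomean_fps[rival] / a380 - 1
    print(f"{rival} is {advantage:.0%} faster than the Arc A380")  # ~19% and ~9%
```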

Regardless, it isn’t exactly encouraging to see the new Intel GPU getting beaten by an entry-level Nvidia GPU released over three years ago, and by an ultra low-end Radeon GPU that is literally a cut-down Navi 24 mobile GPU slapped onto a graphics card PCB. Over the past few months, we’ve heard reports that Intel’s graphics drivers are playing a significant role in gaming performance with these new A-series GPUs, with poor optimization being a big issue.

Perhaps Intel can turn things around and provide well-optimized gaming drivers in the near future, once its A-series lineup makes it to the rest of the world. Intel also recently showed its expected performance for the higher tier A700M mobile parts, which looked at least fairly capable. But if Intel has the same driver problems on its mid-range A500 and flagship A700 series graphics cards, where gaming performance matters even more, Intel’s GPU division is going to be dealing with serious challenges in a market that’s already quite competitive.

(Image credit: Intel)

For the entry-level and mobile parts, it’s not just gaming performance that Intel is hyping up. Arc includes the Xe media engine, which supports up to 8K encode and decode of AVC (H.264), HEVC (H.265), VP9, and AV1 — and Arc is the only GPU right now with hardware encoding support of AV1. Comparing the A380 against a Core i5-12600K CPU encode of an AV1 video, the A380 took less than a quarter of the time (53 seconds versus 234 seconds).

Arc A380 was also faster in other video encoding scenarios, like an HEVC encode using DaVinci Resolve where Intel’s Deep Link feature that leverages the CPU graphics and dedicated GPU allowed it to finish the task in 16 seconds compared to 25 seconds on a GTX 1650 card. Interestingly, just the UHD Graphics 770 or Arc A380 alone required 30 seconds, so encoding performance very nearly doubled thanks to Deep Link.
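For clarity, here’s how those encode times translate into speedups; all of the figures are Intel’s, not our own measurements.

```python
# Speedups implied by Intel's encode times above (lower time = faster).
def speedup(baseline_seconds, accelerated_seconds):
    return baseline_seconds / accelerated_seconds

print(f"AV1: A380 vs. CPU-only encode:  {speedup(234, 53):.1f}x")  # ~4.4x
print(f"HEVC: Deep Link vs. A380 alone: {speedup(30, 16):.1f}x")   # ~1.9x
print(f"HEVC: Deep Link vs. GTX 1650:   {speedup(25, 16):.1f}x")   # ~1.6x
```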

If you’re more interested in the media capabilities, Arc might be a great option when it comes to the US market. For gamers, let’s hope additional driver improvements can help narrow the gap that Intel’s currently showing.

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs, and graphics cards.



GPU Benchmarks Hierarchy 2022 — Graphics Card Rankings

(Image credit: Tom’s Hardware)

Our GPU benchmarks hierarchy ranks all the current and previous generation graphics cards by performance, including all of the best graphics cards. Whether it’s playing games or doing high-end creative work like 4K video editing, your graphics card typically plays the biggest role in determining performance, and even the best CPUs for Gaming take a secondary role.

We’ve revamped our GPU testbed and updated all of our benchmarks for 2022, and are now finished retesting nearly every graphics card from the past several generations, plus some even older GPUs. Our full GPU hierarchy using traditional rendering comes first, and below that we have our ray tracing GPU benchmarks hierarchy. Those of course require a ray tracing capable GPU so only AMD’s RX 6000-series and Nvidia’s RTX cards are present.

Our latest addition to the tables is the long-awaited but mostly disappointing Intel Arc A380 — the most promising aspects are its relatively low price and its video encoding hardware. We’ve also recently added AMD’s Radeon RX 6400, which looks good compared to Nvidia’s GeForce GTX 1630. We’re still waiting for the public launch of the other Arc GPUs, which may or may not happen before Nvidia Ada and AMD RDNA 3 arrive.

Below our main tables, you’ll find our 2020–2021 benchmark suite, which has all of the previous generation GPUs running our older test suite running on a Core i9-9900K testbed. We also have the legacy GPU hierarchy (without benchmarks) at the bottom of the article for reference purposes.

The following tables sort everything solely by our performance-based GPU gaming benchmarks, at 1080p “ultra” for the main suite and at 1080p “medium” for the DXR suite. Price, graphics card power consumption, overall efficiency, and features aren’t factored into the rankings here. We’ve switched to a new Alder Lake Core i9-12900K testbed, changed up our test suite, and retested all of the past several generations of GPUs. Now let’s hit the benchmarks and tables.

GPU Benchmarks Ranking 2022

(Image credit: Tom’s Hardware)

For our latest benchmarks, we test (nearly) all GPUs at 1080p medium and 1080p ultra, and sort the table by the 1080p ultra results. Where it makes sense, we also test at 1440p ultra and 4K ultra. All of the scores are scaled relative to the top-ranking 1080p ultra card, which in our new suite is the Radeon RX 6950 XT (at least at 1080p and 1440p).

You can also see the above summary chart showing the relative performance of the cards we’ve tested across the past several generations of hardware at 1080p ultra. There are a few missing options (e.g., the GT 1030, RX 550, and several Titan cards), but otherwise it’s basically complete. We do have data in the table below for additional GPUs.

The eight games we’re using for our standard GPU benchmarks hierarchy are Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX12), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The fps score is the geometric mean (equal weighting) of the eight games.
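For anyone who wants to reproduce the scoring, here’s a minimal sketch of how the aggregate works: an equal-weight geometric mean across the suite, scaled as a percentage of the fastest card. The per-game numbers below are placeholders, not actual chart data.

```python
from math import prod

def geomean(fps_values):
    """Equal-weight geometric mean across a game suite."""
    return prod(fps_values) ** (1 / len(fps_values))

# Hypothetical per-game fps for two cards (eight entries, one per game in the suite).
results = {
    "Card A": [137.0, 150.2, 98.7, 160.3, 128.8, 115.0, 170.4, 148.1],
    "Card B": [120.5, 141.0, 90.2, 139.9, 118.3, 101.7, 150.6, 133.0],
}
scores = {name: geomean(fps) for name, fps in results.items()}
fastest = max(scores.values())
for name, score in scores.items():
    print(f"{name}: {score:.1f} fps, {score / fastest * 100:.1f}% of the fastest card")
```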

Graphics Card 1080p Ultra 1080p Medium 1440p Ultra 4K Ultra Specifications
Radeon RX 6950 XT 100.0% (137.3fps) 100.0% (190.1fps) 100.0% (115.4fps) 100.0% (70.3fps) Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 3090 Ti 96.5% (132.4fps) 94.8% (180.1fps) 98.7% (113.9fps) 107.6% (75.7fps) GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 6900 XT 94.5% (129.7fps) 97.1% (184.6fps) 91.4% (105.5fps) 89.7% (63.1fps) Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3090 92.2% (126.6fps) 93.7% (178.1fps) 92.3% (106.5fps) 97.8% (68.8fps) GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
GeForce RTX 3080 12GB 90.7% (124.5fps) 93.8% (178.2fps) 90.1% (104.0fps) 94.3% (66.3fps) GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
Radeon RX 6800 XT 90.0% (123.5fps) 94.2% (179.1fps) 86.5% (99.8fps) 83.2% (58.5fps) Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3080 Ti 89.9% (123.4fps) 92.0% (174.9fps) 89.6% (103.4fps) 94.5% (66.5fps) GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
GeForce RTX 3080 84.7% (116.3fps) 91.2% (173.4fps) 82.8% (95.5fps) 86.2% (60.6fps) GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 6800 80.7% (110.7fps) 90.9% (172.7fps) 75.9% (87.5fps) 71.9% (50.6fps) Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 3070 Ti 75.8% (104.1fps) 85.4% (162.4fps) 71.5% (82.6fps) 66.6% (46.8fps) GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6750 XT 73.7% (101.2fps) 88.4% (168.0fps) 65.3% (75.4fps) 59.6% (41.9fps) Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
Titan RTX 73.6% (101.0fps) 83.2% (158.2fps) 69.7% (80.5fps) 68.7% (48.3fps) TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GeForce RTX 3070 72.7% (99.8fps) 82.9% (157.7fps) 67.2% (77.5fps) 61.4% (43.2fps) GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
GeForce RTX 2080 Ti 69.9% (96.0fps) 79.8% (151.6fps) 65.3% (75.3fps) 63.4% (44.6fps) TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 6700 XT 69.8% (95.8fps) 84.1% (159.8fps) 61.3% (70.8fps) 56.1% (39.4fps) Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 3060 Ti 66.7% (91.5fps) 78.8% (149.7fps) 60.4% (69.7fps)   GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
GeForce RTX 2080 Super 61.9% (84.9fps) 72.5% (137.8fps) 56.3% (64.9fps) 49.1% (34.5fps) TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 2080 59.9% (82.2fps) 70.1% (133.1fps) 54.1% (62.4fps)   TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 6650 XT 58.2% (79.9fps) 72.8% (138.4fps) 49.2% (56.7fps)   Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
Radeon RX 6600 XT 56.9% (78.1fps) 71.8% (136.5fps) 47.6% (54.9fps)   Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
GeForce RTX 2070 Super 55.7% (76.4fps) 65.3% (124.1fps) 49.8% (57.4fps)   TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 5700 XT 53.7% (73.7fps) 66.2% (125.8fps) 46.2% (53.3fps) 41.6% (29.3fps) Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W
GeForce RTX 3060 51.1% (70.2fps) 62.5% (118.8fps) 45.6% (52.6fps)   GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Radeon VII 50.8% (69.7fps) 60.0% (114.0fps) 45.9% (53.0fps) 44.7% (31.4fps) Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2.0Gbps, 1024GB/s, 300W
GeForce RTX 2070 49.5% (67.9fps) 58.2% (110.7fps) 44.2% (51.0fps)   TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 6600 48.6% (66.7fps) 62.0% (117.8fps) 40.0% (46.1fps)   Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
GeForce GTX 1080 Ti 48.5% (66.5fps) 58.2% (110.6fps) 43.6% (50.3fps) 42.0% (29.5fps) GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W
GeForce RTX 2060 Super 47.4% (65.1fps) 55.7% (105.9fps) 41.8% (48.2fps)   TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 5700 47.2% (64.8fps) 58.5% (111.3fps) 40.9% (47.2fps)   Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W
Radeon RX 5600 XT 42.3% (58.1fps) 52.9% (100.6fps) 36.4% (42.0fps)   Navi 10, 2304 shaders, 1750MHz, 8GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX Vega 64 41.4% (56.8fps) 49.6% (94.3fps) 36.1% (41.6fps) 33.4% (23.5fps) Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W
GeForce RTX 2060 40.2% (55.2fps) 50.9% (96.8fps) 33.6% (38.7fps)   TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
GeForce GTX 1080 38.7% (53.1fps) 47.3% (90.0fps) 34.2% (39.4fps)   GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W
GeForce RTX 3050 37.5% (51.4fps) 47.0% (89.4fps) 32.6% (37.6fps)   GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1070 Ti 37.2% (51.1fps) 45.1% (85.8fps) 32.9% (37.9fps)   GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W
Radeon RX Vega 56 36.9% (50.6fps) 44.4% (84.4fps) 32.0% (37.0fps)   Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W
GeForce GTX 1660 Super 33.0% (45.3fps) 43.6% (82.8fps) 28.1% (32.4fps)   TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W
GeForce GTX 1660 Ti 32.8% (45.0fps) 43.4% (82.4fps) 27.9% (32.2fps)   TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W
GeForce GTX 1070 32.6% (44.8fps) 33.7% (64.0fps) 33.6% (38.8fps)   GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W
GeForce GTX 1660 29.3% (40.2fps) 39.5% (75.1fps) 24.7% (28.5fps)   TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 5500 XT 8GB 29.0% (39.8fps) 38.2% (72.6fps) 24.7% (28.5fps)   Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 590 28.7% (39.4fps) 36.1% (68.6fps) 25.2% (29.1fps)   Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W
GeForce GTX 980 Ti 26.1% (35.9fps) 32.9% (62.6fps) 23.1% (26.7fps)   GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W
Radeon R9 Fury X 25.8% (35.4fps) 33.9% (64.4fps)     Fiji, 4096 shaders, 1050MHz, 4GB HBM@1Gbps, 512GB/s, 275W
Radeon RX 580 8GB 25.8% (35.3fps) 32.5% (61.7fps) 22.5% (26.0fps)   Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W
GeForce GTX 1650 Super 24.7% (33.9fps) 35.8% (68.0fps) 19.9% (23.0fps)   TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W
Radeon RX 5500 XT 4GB 24.4% (33.5fps) 35.2% (66.9fps)     Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1060 6GB 23.5% (32.2fps) 30.5% (58.0fps) 20.0% (23.0fps)   GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 6500 XT 22.4% (30.8fps) 34.6% (65.8fps) 15.6% (18.0fps)   Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon R9 390 21.7% (29.8fps) 26.9% (51.2fps)     Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W
GeForce GTX 980 21.1% (28.9fps) 28.2% (53.7fps)     GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W
GeForce GTX 1650 GDDR6 21.0% (28.8fps) 29.8% (56.7fps)     TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W
Intel Arc A380 20.6% (28.3fps) 28.8% (54.7fps) 16.9% (19.5fps)   ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 570 4GB 20.6% (28.3fps) 28.2% (53.6fps) 17.3% (20.0fps)   Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W
GeForce GTX 1060 3GB 20.2% (27.8fps) 27.7% (52.6fps)     GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W
GeForce GTX 1650 19.6% (26.9fps) 26.9% (51.1fps)     TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W
GeForce GTX 970 19.3% (26.5fps) 25.9% (49.1fps)     GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W
Radeon RX 6400 17.2% (23.7fps) 27.3% (52.0fps)     Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W
GeForce GTX 780 16.0% (22.0fps) 20.3% (38.5fps)     GK110, 2304 shaders, 900MHz, 3GB GDDR5@6Gbps, 288GB/s, 230W
GeForce GTX 1050 Ti 14.4% (19.8fps) 20.0% (38.0fps)     GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W
GeForce GTX 1630 12.3% (16.9fps) 17.8% (33.9fps)     TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W
GeForce GTX 1050 10.8% (14.8fps) 15.7% (29.8fps)     GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W
Radeon RX 560 4GB 10.8% (14.8fps) 16.8% (31.8fps)     Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W
Radeon RX 550 4GB   10.3% (19.6fps)     Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W
GeForce GT 1030   7.6% (14.5fps)     GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W

*: GPU couldn’t run all tests, so the overall score is slightly skewed at 1080p ultra.

Our updated test suite and testbed favor AMD’s GPUs slightly, particularly at 1080p and even 1440p — which is perhaps one more reason the RTX 3090 Ti exists, as it mostly retakes the throne at all resolutions, though the new 6950 XT reclaims top honors. Keep in mind that we’re not including any ray tracing or DLSS results in the above table, as we intend to use the same test suite with the same settings on all current and previous generation graphics cards.

AMD’s RX 6950 XT doesn’t massively boost performance, but it’s enough to make up the gap with the 3090 Ti, and it does so while costing over 40% less. AMD also wins, quite easily, in the performance per watt metric. Stepping down the list, the 3090 and 3080 12GB — an overclocked MSI model, since there are no reference 3080 12GB cards — place just ahead of the 6800 XT, followed by the 3080 Ti. The RX 6800 also beats the RTX 3070 Ti, while the RTX 3070 and RX 6700 XT are effectively tied.

The rankings favor AMD less in the lower portion of the chart: the RTX 3060 and RX 6600 are effectively tied, and the RTX 3050 easily eclipses the RX 6500 XT — not that it’s difficult to do so, as both the 4GB and 8GB RX 5500 XT also beat AMD’s latest budget offering.

Turning to the previous generation GPUs, the RTX 20-series and GTX 16-series chips end up scattered throughout the results, along with the RX 5000-series. The general rule of thumb is that you get one or two "model upgrades" with the newer architecture, so for example the RTX 2080 Super comes in just below the RTX 3060 Ti, while the RX 5700 XT lands a few percent behind the RX 6600 XT.

Go back far enough, and you can see how modern games at ultra settings severely punish cards that don’t have more than 4GB VRAM. We’ve been saying for a few years now that 4GB is just scraping by, and 6GB or more is desirable. The GTX 1060 3GB, GTX 1050, and GTX 780 actually failed to run some of our tests, which skews their results a bit, even though they do better at 1080p medium.

Now let’s switch over to the ray tracing hierarchy.

(Image credit: Techland)

Ray Tracing GPU Benchmarks Ranking 2022

Enabling ray tracing, particularly with demanding games like those we’re using in our DXR test suite, can cause framerates to drop off a cliff. We’re testing with "medium" and "ultra" ray tracing settings. Medium means using medium graphics settings but turning on ray tracing effects (set to "medium" if that’s an option), while ultra turns on all of the RT options at more or less maximum quality.

Because ray tracing is so much more demanding, we’re sorting these results by the 1080p medium scores. That’s also because the RX 6500 XT basically can’t handle ray tracing even at these settings, and testing at anything more than 1080p medium would be fruitless. We’ve finished testing all the current ray tracing-capable GPUs, though there will be more in the future. We’re definitely curious to see if Intel’s Arc GPUs can do any better than the RX 6500 XT, and suspect the answer might be "nope" on the lower-tier A300 series.

The six ray tracing games we’re using are Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Fortnite, Metro Exodus Enhanced, and Minecraft — all of these use the DirectX 12 / DX12 Ultimate API. The fps score is the geometric mean (equal weighting) of the six games, and the percentage is scaled relative to the fastest GPU in the list, which in this case is the GeForce RTX 3090 Ti.
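
For readers who want to reproduce the math, here's a minimal sketch of how that geometric mean and percentage scaling work. The fps values below are placeholders for illustration, not our measured results.

```python
from math import prod

def geomean(values):
    """Geometric mean: the nth root of the product of n values (equal weighting)."""
    return prod(values) ** (1 / len(values))

# Hypothetical per-game fps results for two cards (illustrative only).
results = {
    "GPU A": [118, 95, 62, 130, 88, 142],
    "GPU B": [102, 80, 55, 115, 76, 120],
}

scores = {gpu: geomean(fps) for gpu, fps in results.items()}
fastest = max(scores.values())

for gpu, score in scores.items():
    print(f"{gpu}: {score:.1f} fps overall, {100 * score / fastest:.1f}% of the fastest card")
```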

(Image credit: Tom’s Hardware)

Graphics Card 1080p Medium 1080p Ultra 1440p Ultra 4K Ultra Specifications
GeForce RTX 3090 Ti 100.0% (118.2fps) 100.0% (84.4fps) 100.0% (57.2fps) 100.0% (29.1fps) GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 3090 91.7% (108.4fps) 89.7% (75.7fps) 88.7% (50.8fps) 87.2% (25.4fps) GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
GeForce RTX 3080 Ti 89.3% (105.6fps) 87.6% (73.9fps) 86.0% (49.2fps) 84.6% (24.7fps) GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
GeForce RTX 3080 12GB 88.5% (104.7fps) 85.8% (72.4fps) 83.7% (47.9fps) 81.4% (23.7fps) GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 3080 81.5% (96.3fps) 78.5% (66.3fps) 76.3% (43.7fps) 72.2% (21.0fps) GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 6950 XT 70.4% (83.2fps) 66.7% (56.2fps) 62.9% (36.0fps) 59.0% (17.2fps) Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 3070 Ti 66.3% (78.4fps) 63.0% (53.1fps) 59.2% (33.9fps)   GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6900 XT 63.0% (74.5fps) 59.0% (49.8fps) 55.2% (31.6fps) 51.7% (15.1fps) Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
Titan RTX 62.5% (73.9fps) 58.2% (49.1fps) 55.4% (31.7fps) 52.5% (15.3fps) TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GeForce RTX 3070 62.1% (73.4fps) 58.7% (49.6fps) 54.9% (31.4fps)   GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
GeForce RTX 2080 Ti 59.2% (70.0fps) 55.1% (46.5fps) 52.0% (29.7fps)   TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 6800 XT 59.0% (69.7fps) 54.6% (46.1fps) 51.3% (29.4fps) 48.2% (14.0fps) Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3060 Ti 55.2% (65.3fps) 51.3% (43.3fps) 47.8% (27.4fps)   GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 6800 50.4% (59.6fps) 46.6% (39.3fps) 43.6% (24.9fps)   Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 2080 Super 49.6% (58.6fps) 45.0% (37.9fps) 41.6% (23.8fps)   TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 2080 47.5% (56.2fps) 42.5% (35.9fps) 39.1% (22.4fps)   TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
GeForce RTX 2070 Super 43.6% (51.5fps) 39.2% (33.1fps) 35.5% (20.3fps)   TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 6750 XT 41.7% (49.3fps) 39.1% (33.0fps) 35.6% (20.4fps)   Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
GeForce RTX 3060 41.2% (48.7fps) 38.3% (32.3fps) 35.1% (20.1fps)   GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Radeon RX 6700 XT 38.8% (45.9fps) 36.4% (30.7fps) 32.9% (18.8fps)   Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 2070 38.5% (45.5fps) 34.9% (29.4fps) 31.6% (18.1fps)   TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
GeForce RTX 2060 Super 36.9% (43.6fps) 33.0% (27.9fps) 29.9% (17.1fps)   TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
GeForce RTX 2060 31.8% (37.6fps) 26.7% (22.5fps)     TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX 6650 XT 31.6% (37.3fps) 29.0% (24.5fps)     Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
Radeon RX 6600 XT 30.8% (36.4fps) 28.0% (23.6fps)     Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
GeForce RTX 3050 29.4% (34.8fps) 27.0% (22.8fps)     GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 6600 25.8% (30.5fps) 23.3% (19.6fps)     Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
Intel Arc A380 13.4% (15.9fps)       ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 6500 XT 9.4% (11.2fps)       Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon RX 6400 7.6% (9.0fps)       Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W

Where AMD can claim the top spot in our standard test suite at 1080p and 1440p, once we enable ray tracing, the best AMD can do is sixth place, just ahead of the RTX 3070 Ti. It’s a precipitous drop, and we’re not even using DLSS, which all six of our DXR games support.

You can see what DLSS Quality mode did for performance on the Asus RTX 3090 Ti in our review, but the short summary is that it boosted performance by 48% at 1080p ultra, 62% at 1440p ultra, and 89% at 4K ultra — with that last taking performance from a borderline unplayable 31 fps average up to a comfortable 60 fps. You can also legitimately use the Balanced and Performance modes without killing image quality, especially at 4K, which will deliver even larger gains.

AMD’s FSR 2.0 would prove beneficial here, if AMD can get widespread adoption — AMD’s fastest GPUs can barely manage 1440p at more than 30 fps. Also note that none of the GPUs can handle native 4K in all of the games, though the RTX 3080 was about 40% faster than the RX 6900 XT, and the RTX 3090 Ti was 93% faster. Hopefully the upcoming Nvidia Ada and AMD RDNA 3 GPUs will be able to handle 4K at native resolution while reaching playable framerates, but even then we expect DLSS or FSR 2.0 will be necessary for 60 fps or more.

The midrange GPUs like the RTX 3070 and RX 6700 XT basically manage 1080p ultra and not much more, while the bottom tier of DXR-capable GPUs barely manage 1080p medium — and the RX 6500 XT can’t even do that, with single-digit framerates in most of our test suite, and one game that wouldn’t even work at our chosen "medium" settings. (Control requires at least 6GB VRAM to let you enable ray tracing.)

Intel’s Arc A380 ends up just ahead of the RX 6500 XT in ray tracing performance, which is interesting considering it only has 8 RTUs going up against AMD’s 16 Ray Accelerators. Intel recently posted a deep dive into its ray tracing hardware, and Arc sounds reasonably impressive, except for the fact that the number of RTUs in the A380 severely limits performance, and even the top-end A770 that hasn’t launched yet only has 32 RTUs. That might be enough to surpass the RTX 3060, but it’s unlikely to go much further than that.

It’s also interesting to look at the generational performance of Nvidia’s RTX cards. The slowest 20-series GPU, the RTX 2060, still outperforms the new RTX 3050 by a bit, but the fastest RTX 2080 Ti comes in a bit behind the RTX 3070. Where the 2080 Ti basically doubled the performance of the 2060, the 3090 delivers about triple the performance of the 3050.

(Image credit: Tom’s Hardware)

2020-2021 GPU Benchmarks Ranking

The results below are from our previous version of the GPU benchmarks hierarchy, using a different test suite and combining results from nine games with six resolution and setting combinations. All of the scores are combined (via a geometric mean calculation) into a single overall result, which tends to penalize the fastest and slowest GPUs — CPU bottlenecks come into play at 1080p medium, while VRAM limitations can kill performance at 4K ultra.

These results have not been updated since early 2022, when we added the RTX 3050 and RX 6500 XT to the list. We won’t be adding future GPUs to this table, so there’s no 3090 Ti, 6950 XT, 6750 XT, or 6650 XT, but it does help to provide a look at a slightly less demanding suite of games, where 6GB or more VRAM isn’t generally required at 1080p ultra settings. You can use these older results to help inform your purchase decisions, if you don’t typically run the latest games at maxed out settings.

Graphics Card Score GPU Base/Boost Memory Power Buy
Nvidia GeForce RTX 3090 100.0% GA102 1400/1695 MHz 24GB GDDR6X 350W Nvidia GeForce RTX 3090
Nvidia GeForce RTX 3080 Ti 97.9% GA102 1370/1665 MHz 12GB GDDR6X 350W Nvidia GeForce RTX 3080 Ti
AMD Radeon RX 6900 XT 97.0% Navi 21 1825/2250 MHz 16GB GDDR6 300W AMD Radeon RX 6900 XT
AMD Radeon RX 6800 XT 93.5% Navi 21 1825/2250 MHz 16GB GDDR6 300W AMD Radeon RX 6800 XT
Nvidia GeForce RTX 3080 93.2% GA102 1440/1710 MHz 10GB GDDR6X 320W Nvidia GeForce RTX 3080
AMD Radeon RX 6800 85.7% Navi 21 1700/2105 MHz 16GB GDDR6 250W AMD Radeon RX 6800
Nvidia GeForce RTX 3070 Ti 81.5% GA104 1575/1770 MHz 8GB GDDR6X 290W Nvidia GeForce RTX 3070 Ti
Nvidia Titan RTX 79.5% TU102 1350/1770 MHz 24GB GDDR6 280W Nvidia Titan RTX
Nvidia GeForce RTX 2080 Ti 77.4% TU102 1350/1635 MHz 11GB GDDR6 260W Nvidia GeForce RTX 2080 Ti
Nvidia GeForce RTX 3070 76.3% GA104 1500/1730 MHz 8GB GDDR6 220W Nvidia GeForce RTX 3070
AMD Radeon RX 6700 XT 73.3% Navi 22 2321/2424 MHz 12GB GDDR6 230W AMD Radeon RX 6700 XT
Nvidia GeForce RTX 3060 Ti 69.6% GA104 1410/1665 MHz 8GB GDDR6 200W Nvidia GeForce RTX 3060 Ti
Nvidia Titan V 68.7% GV100 1200/1455 MHz 12GB HBM2 250W Nvidia Titan V
Nvidia GeForce RTX 2080 Super 66.8% TU104 1650/1815 MHz 8GB GDDR6 250W GeForce RTX 2080 Super
Nvidia GeForce RTX 2080 62.5% TU104 1515/1800 MHz 8GB GDDR6 225W GeForce RTX 2080
Nvidia Titan Xp 61.1% GP102 1405/1480 MHz 12GB GDDR5X 250W GeForce GTX Titan X
Nvidia GeForce RTX 2070 Super 59.6% TU104 1605/1770 MHz 8GB GDDR6 215W GeForce RTX 2070 Super
AMD Radeon VII 58.9% Vega 20 1400/1750 MHz 16GB HBM2 300W Radeon VII
Nvidia GeForce GTX 1080 Ti 57.8% GP102 1480/1582 MHz 11GB GDDR5X 250W Nvidia GeForce GTX 1080 Ti
AMD Radeon RX 6600 XT 57.7% Navi 23 1968/2589 MHz 8GB GDDR6 160W AMD Radeon RX 6600 XT
AMD Radeon RX 5700 XT 57.0% Navi 10 1605/1905 MHz 8GB GDDR6 225W AMD Radeon RX 5700 XT
Nvidia GeForce RTX 3060 12GB 54.7% GA106 1320/1777 MHz 12GB GDDR6 170W Nvidia GeForce RTX 3060 12GB
Nvidia GeForce RTX 2070 53.1% TU106 1410/1710 MHz 8GB GDDR6 185W RTX 2070
AMD Radeon RX 5700 51.4% Navi 10 1465/1725 MHz 8GB GDDR6 185W AMD Radeon RX 5700
Nvidia GeForce RTX 2060 Super 50.6% TU106 1470/1650 MHz 8GB GDDR6 175W GeForce RTX 2060 Super
AMD Radeon RX 6600 49.2% Navi 23 1626/2491 MHz 8GB GDDR6 132W AMD Radeon RX 6600
AMD Radeon RX Vega 64 48.4% Vega 10 1274/1546 MHz 8GB HBM2 295W Gigabyte Radeon RX Vega 64
AMD Radeon RX 5600 XT 46.6% Navi 10 ?/1615 MHz 6GB GDDR6 150W Radeon RX 5600 XT
Nvidia GeForce GTX 1080 45.2% GP104 1607/1733 MHz 8GB GDDR5X 180W Nvidia GeForce GTX 1080
Nvidia GeForce RTX 2060 44.9% TU106 1365/1680 MHz 6GB GDDR6 160W Nvidia GeForce RTX 2060 FE
AMD Radeon RX Vega 56 42.7% Vega 10 1156/1471 MHz 8GB HBM2 210W Radeon RX Vega 56
Nvidia GeForce GTX 1070 Ti 41.8% GP104 1607/1683 MHz 8GB GDDR5 180W GeForce GTX 1070 Ti
Nvidia GeForce RTX 3050 40.5% GA106 1552/1777 MHz 8GB GDDR6 130W
Nvidia GeForce GTX 1660 Super 37.9% TU116 1530/1785 MHz 6GB GDDR6 125W GeForce GTX 1660 Super
Nvidia GeForce GTX 1660 Ti 37.8% TU116 1365/1680 MHz 6GB GDDR6 120W GeForce GTX 1660 Ti 6GB
Nvidia GeForce GTX 1070 36.7% GP104 1506/1683 MHz 8GB GDDR5 150W MSI GTX 1070
Nvidia GTX Titan X (Maxwell) 35.3% GM200 1000/1075 MHz 12GB GDDR5 250 Nvidia GTX Titan X
Nvidia GeForce GTX 980 Ti 32.9% GM200 1000/1075 MHz 6GB GDDR5 250W GeForce GTX 980 Ti
Nvidia GeForce GTX 1660 32.8% TU116 1530/1785 MHz 6GB GDDR5 120W GeForce GTX 1660
AMD Radeon R9 Fury X 32.7% Fiji 1050 MHz 4GB HBM 275W AMD Radeon R9 Fury X
AMD Radeon RX 590 32.4% Polaris 30 1469/1545 MHz 8GB GDDR5 225W Radeon RX 590
AMD Radeon RX 5500 XT 8GB 31.8% Navi 14 ?/1717 MHz 8GB GDDR6 130W AMD Radeon RX 5500 XT 8GB
AMD Radeon RX 580 8GB 30.9% Polaris 20 1257/1340 MHz 8GB GDDR5 185W AMD Radeon RX 580
Nvidia GeForce GTX 1650 Super 28.5% TU116 1530/1725 MHz 4GB GDDR6 100W Nvidia GeForce GTX 1650 Super
AMD Radeon RX 5500 XT 4GB 28.4% Navi 14 ?/1717 MHz 4GB GDDR6 130W AMD Radeon RX 5500 XT 4GB
AMD Radeon RX 6500 XT 27.7% Navi 24 2610/2815 MHz 4GB GDDR6 107W
AMD Radeon R9 390 27.2% Hawaii 1000 MHz 8GB GDDR5 275W AMD Radeon R9 390
Nvidia GeForce GTX 1060 6GB 26.5% GP106 1506/1708 MHz 6GB GDDR5 120W Nvidia GeForce GTX 1060 6GB
Nvidia GeForce GTX 980 26.4% GM204 1126/1216 MHz 4GB GDDR5 165W Nvidia GeForce GTX 980
AMD Radeon RX 570 4GB 25.2% Polaris 20 1168/1244 MHz 4GB GDDR5 150W Radeon RX 570
Nvidia GTX 1650 GDDR6 23.8% TU117 1410/1590 MHz 4GB GDDR6 75W GeForce GTX 1650 GDDR6
Nvidia GeForce GTX 1060 3GB 22.3% GP106 1506/1708 MHz 3GB GDDR5 120W Nvidia GeForce GTX 1060 3GB
Nvidia GeForce GTX 970 22.1% GM204 1050/1178 MHz 4GB GDDR5 145W Nvidia GeForce GTX 970
Nvidia GeForce GTX 1650 20.9% TU117 1485/1665 MHz 4GB GDDR5 75W GeForce GTX 1650 Gaming OC 4G
Nvidia GeForce GTX 1050 Ti 16.1% GP107 1290/1392 MHz 4GB GDDR5 75W Nvidia GeForce GTX 1050 Ti
AMD Radeon RX 560 4GB 12.5% Polaris 21 1175/1275 MHz 4GB GDDR5 80W PowerColor Red Dragon Radeon RX 560
Nvidia GeForce GTX 1050 12.2% GP107 1354/1455 MHz 2GB GDDR5 75W Gigabyte GeForce GTX 1050
AMD Vega 8 (R7 5700G) 9.5% Vega 8 2000 MHz Shared N/A
AMD Vega 7 (R5 5600G) 8.8% Vega 7 1900 MHz Shared N/A
AMD Radeon RX 550 8.0% Lexa 1100/1183 MHz 4GB GDDR5 50W PowerColor Radeon RX 550
Nvidia GeForce GT 1030 6.7% GP108 1228/1468 MHz 2GB GDDR5 30W Nvidia GeForce GT 1030
AMD Vega 11 (R5 3400G) 5.5% Vega 11 1400 MHz Shared N/A AMD Ryzen 5 3400G
AMD Vega 8 (R3 3200G) 4.9% Vega 8 1250 MHz Shared N/A AMD Ryzen 3 3200G
Intel Iris Xe DG1 4.4% Xe DG1 1550 MHz 4GB LPDDR4X 30W
Intel Iris Plus (i7-1065G7) 3.0% Gen11 ICL-U 1100 MHz Shared N/A Intel Core i7-1065G7
Intel UHD Graphics 630 (i7-10700K) 1.8% Gen9.5 CFL 1200 MHz 2x8GB DDR4-3200 N/A Intel Core i7-10700K

Choosing a Graphics Card

Which graphics card do you need? To help you decide, we created this GPU benchmarks hierarchy consisting of dozens of GPUs from the past four generations of hardware. Not surprisingly, the fastest cards use either Nvidia’s Ampere architecture or AMD’s Big Navi. AMD’s latest graphics cards perform well without ray tracing, but tend to fall behind once RT gets enabled — even more so if you factor in DLSS on the Nvidia cards, which you should. GPU prices are finally hitting reasonable levels, however, making it a better time to upgrade.

Of course it’s not just about playing games. Many applications use the GPU for other work, and we covered some professional GPU benchmarks in our RTX 3090 Ti review. But a good graphics card for gaming will typically do equally well in complex GPU computational workloads. Buy one of the top cards and you can run games at high resolutions and frame rates with the effects turned all the way up, and you’ll be able to do content creation work equally well. Drop down to the middle and lower portions of the list and you’ll need to start dialing down the settings to get acceptable performance in regular game play and GPU benchmarks.

It’s not just about high-end GPUs either, of course. We tested Intel’s Xe Graphics DG1, which basically competes with integrated graphics solutions. The results weren’t pretty, and we didn’t even try running any of those at settings beyond 1080p medium. Still, you can see where those GPUs land at the very bottom of the 2020-2021 GPU benchmarks list. Thankfully, Intel’s Arc Alchemist, aka DG2, appears to be cut from entirely different cloth. We hope, anyway.

If your main goal is gaming, you can’t forget about the CPU. Getting the best possible gaming GPU won’t help you much if your CPU is underpowered and/or out of date. So be sure to check out the Best CPUs for gaming page, as well as our CPU Benchmarks Hierarchy to make sure you have the right CPU for the level of gaming you’re looking to achieve.

(Image credit: Tom’s Hardware)

Test System and How We Test for GPU Benchmarks

We’ve used two different PCs for our testing. Our latest configuration, used for 2022 and later testing, pairs an Alder Lake CPU with the Z690 platform, while our previous testbed uses Coffee Lake and Z390. Here are the details of the two PCs.

Tom’s Hardware 2022 GPU Testbed

Intel Core i9-12900K
MSI Pro Z690-A WiFi DDR4
Corsair 2x16GB DDR4-3600 CL16
Crucial P5 Plus 2TB
Cooler Master MWE 1250 V2 Gold
Cooler Master PL360 Flux
Cooler Master HAF500
Windows 11 Pro 64-bit

Tom’s Hardware 2020–2021 GPU Testbed

Intel Core i9-9900K
Corsair H150i Pro RGB
MSI MEG Z390 Ace
Corsair 2x16GB DDR4-3200
XPG SX8200 Pro 2TB
Windows 10 Pro (21h2)

For each graphics card, we follow the same testing procedure. We run one pass of each benchmark to "warm up" the GPU after launching the game, then run at least two passes at each setting/resolution combination. If the two runs are basically identical (within 0.5% or less difference), we use the faster of the two runs. If there’s more than a small difference, we run the test at least twice more to determine what "normal" performance is supposed to be.
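
In pseudocode terms, the two-pass rule looks roughly like the sketch below. The run_benchmark callable and the fallback to a median are stand-ins for illustration; in practice the extra runs get a human sanity check rather than a fixed formula.

```python
def measure(run_benchmark, extra_runs=3):
    """Sketch of the per-setting procedure: one warm-up pass, then two timed
    passes; keep the faster one if they agree within 0.5%, otherwise gather
    more passes and take the median as the 'normal' result."""
    run_benchmark()                            # warm-up pass, result discarded
    runs = [run_benchmark(), run_benchmark()]
    if (max(runs) - min(runs)) / max(runs) <= 0.005:
        return max(runs)                       # runs basically identical: keep the faster
    runs += [run_benchmark() for _ in range(extra_runs)]
    return sorted(runs)[len(runs) // 2]        # median of all timed passes
```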

We also look at all the data and check for anomalies. For example, the RTX 3070 Ti, RTX 3070, and RTX 3060 Ti are all generally going to perform within a narrow range — the 3070 Ti is about 5% faster than the 3070, which is about 5% faster than the 3060 Ti. If we see games with clear outliers (i.e. performance more than 10% higher than expected for the cards just mentioned), we’ll go back and retest whatever cards are showing the anomaly and figure out what the "correct" result would be.

Due to the length of time required for testing each GPU, updated drivers and game patches inevitably will come out that can impact performance. We periodically retest a few sample cards to verify our results are still valid, and if not, we go through and retest the affected game(s) and GPU(s). We may also add games to our test suite over the coming year, if one comes out that is popular and conducive to testing — see our article on what makes a good game benchmark for our selection criteria.

GPU Benchmarks: Individual Game Charts

The above tables provide a summary of performance, but for those that want to see the individual game charts, for both the standard and ray tracing test suites, we’ve got those as well. These charts were up-to-date as of May 19, 2022, with testing conducted using the latest Nvidia and AMD drivers in most cases, though some of the cards were tested with slightly older drivers.

Note that we’re only including the past two generations of hardware in these charts, as otherwise things get too cramped — and you can argue that with 35 cards in the 1080p charts, we’re already well past that point. (Hint: Click the enlarge icon if you’re on PC.)

Also note that we’ve switched from DX12 to DX11 for Microsoft Flight Simulator testing, partly because DX12 started to have issues recently, partly because DX12 is still listed as "beta," but mostly because we’ve determined DX11 runs faster — sometimes by more than 10% — on most GPUs. We’ve retested all of the cards in DX11 mode now.

Best Graphics Cards — 1080p Medium

(Image credit: Tom’s Hardware)

Best Graphics Cards — 1080p Ultra

(Image credit: Tom’s Hardware)

Best Graphics Cards — 1440p Ultra

(Image credit: Tom’s Hardware)

Best Graphics Cards — 4K Ultra

(Image credit: Tom’s Hardware)

Power, Clocks, Temperatures, and Fan Speeds

While our GPU benchmarks hierarchy sorts things solely by performance, for those interested in power and other aspects of the GPUs, here are the appropriate charts.

(Image credit: Tom’s Hardware)

Legacy GPU Hierarchy

Below is our legacy desktop GPU hierarchy dating back to the late 1990s. We have not tested most of these cards in many years, driver support has ended on most of these models, and the relative rankings are pretty coarse. Note that we also don’t factor in memory bandwidth or features like AMD’s Infinity Cache. The list below is mostly intended to show relative performance between architectures from a similar time period.

We sorted the table by the theoretical GFLOPS, though on architectures that don’t support unified shaders, we only have data for "GOps/s" (giga operations per second). That’s GeForce 7 and Radeon X1000 and earlier — basically anything from before 2007. We’ve put an asterisk (*) next to the GPU names for those cards, and they comprise the latter part of the table. Comparing pre-2007 GPUs against each other should be relatively meaningful, but trying to compare those older GPUs against newer GPUs gets a bit convoluted.
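
For reference, the GFLOPS column below is just the standard theoretical peak calculation: shaders times boost clock times two operations per clock, since an FMA counts as two FLOPS. Here's that arithmetic as a quick sketch using the RTX 3090 Ti's numbers from the table.

```python
def theoretical_gflops(shaders, boost_clock_mhz, ops_per_clock=2):
    """Peak FP32 throughput in GFLOPS: shaders * clock * 2 (one FMA = 2 FLOPS)."""
    return shaders * boost_clock_mhz * ops_per_clock / 1000  # MHz -> GFLOPS

# GeForce RTX 3090 Ti: 10752 shaders at a 1860 MHz boost clock.
print(round(theoretical_gflops(10752, 1860)))  # 39997, matching the table
```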

GPU Release Date Architecture Shaders Clockspeed GFLOPS (GOps) Launch Price
GeForce RTX 3090 Ti March 2022 GA102 10752 1860 39,997 $1,999
GeForce RTX 3090 September 2020 GA102 10496 1695 35,581 $1,499
GeForce RTX 3080 Ti June 2021 GA102 10240 1665 34,099 $1,249
GeForce RTX 3080 12GB January 2022 GA102 8960 1710 30,643 $1,199
GeForce RTX 3080 September 2020 GA102 8704 1710 29,768 $699
Radeon RX 6900 XT December 2020 Navi 21 5120 2250 23,040 $999
GeForce RTX 3070 Ti June 2021 GA104 6144 1770 21,750 $599
Radeon RX 6800 XT November 2020 Navi 21 4608 2250 20,736 $649
GeForce RTX 3070 October 2020 GA104 5888 1725 20,314 $499
Nvidia Titan RTX December 2018 TU102 4608 1770 16,312 $2,499
GeForce RTX 3060 Ti December 2020 GA104 4864 1665 16,197 $399
Radeon RX 6800 November 2020 Navi 21 3840 2105 16,166 $579
Nvidia Titan V December 2017 GV100 5120 1455 14,899 $2,999
GeForce RTX 2080 Ti September 2018 TU102 4352 1545 13,448 $1,199
Radeon VII February 2019 Vega 20 3840 1750 13,440 $699
Radeon RX 6700 XT March 2021 Navi 22 2560 2581 13,215 $479
GeForce RTX 3060 February 2021 GA106 3584 1777 12,738 $329
Radeon RX Vega 64 August 2017 Vega 10 4096 1546 12,665 $499
Radeon R9 295X2 April 2014 Vesuvius (x2) 5632 1018 11,467 $1,499
Nvidia Titan Xp April 2017 GP102 3840 1480 11,366 $1,199
GeForce GTX 1080 Ti March 2017 GP102 3584 1582 11,340 $699
GeForce RTX 2080 Super July 2019 TU104 3072 1815 11,151 $699
Nvidia Titan X (Pascal) August 2016 GP102 3584 1531 10,974 $1,199
Radeon RX 6600 XT August 2021 Navi 23 2048 2589 10,605 $379
Radeon RX Vega 56 August 2017 Vega 10 3584 1471 10,544 $399
GeForce GTX Titan Z May 2014 2x GK110 5760 876 10,092 $2,999
GeForce RTX 2080 September 2018 TU104 2944 1710 10,068 $699
Radeon RX 5700 XT July 2019 Navi 10 2560 1905 9,754 $399
GeForce RTX 3050 January 2022 GA106 2560 1777 9,098 $249
GeForce RTX 2070 Super July 2019 TU104 2560 1770 9,062 $499
Radeon RX 6600 October 2021 Navi 23 1792 2491 8,928 $329
GeForce GTX 1080 May 2016 GP104 2560 1733 8,873 $599 ($499)
Radeon R9 Fury X June 2015 Fiji 4096 1050 8,602 $649
Radeon R9 Nano August 2015 Fiji 4096 1000 8,192 $649
Radeon HD 7990 April 2013 New Zealand (x2) 4096 1000 8,192 $1,000
GeForce GTX 1070 Ti November 2017 GP104 2432 1683 8,186 $449
Radeon RX 5600 XT January 2020 Navi 10 2304 1750 8,064 $279
Radeon RX 5700 July 2019 Navi 10 2304 1725 7,949 $249
GeForce RTX 2070 October 2018 TU106 2304 1620 7,465 $499
GeForce RTX 2060 Super July 2019 TU106 2176 1650 7,181 $399
Radeon R9 Fury July 2015 Fiji 3584 1000 7,168 $549
Radeon RX 590 November 2018 Polaris 30 2304 1545 7,119 $279
GeForce GTX Titan X (Maxwell) March 2015 GM200 3072 1075 6,605 $999
GeForce GTX 1070 June 2016 GP104 1920 1683 6,463 $379
GeForce RTX 2060 January 2019 TU106 1920 1680 6,451 $349
GeForce GTX 690 April 2012 2x GK104 3072 1019 6,261 $1,000
Radeon RX 580 8GB April 2017 Polaris 20 2304 1340 6,175 $229
Radeon RX 580 4GB April 2017 Polaris 20 2304 1340 6,175 $199
GeForce GTX 980 Ti June 2015 GM200 2816 1075 6,054 $649
Radeon R9 390X June 2015 Grenada 2816 1050 5,914 $429
Radeon RX 480 8GB June 2016 Ellesmere 2304 1266 5,834 $239
Radeon RX 480 4GB June 2016 Ellesmere 2304 1266 5,834 $199
Radeon RX 6500 XT January 2022 Navi 24 1024 2815 5,765 $199
GeForce GTX Titan Black February 2014 GK110 2880 980 5,645 $999
Radeon R9 290X October 2013 Hawaii 2816 1000 5,632 $549
GeForce GTX 1660 Ti February 2019 TU116 1536 1770 5,437 $279
GeForce GTX 780 Ti November 2013 GK110 2880 928 5,345 $699
Radeon RX 5500 XT 8GB December 2019 Navi 14 1408 1845 5,196 $199
Radeon RX 5500 XT 4GB December 2019 Navi 14 1408 1845 5,196 $169
Radeon R9 390 June 2015 Grenada 2560 1000 5,120 $329
Radeon HD 6990 March 2011 Antilles (2x) 3072 830 5,100 $699
Radeon RX 570 8GB April 2017 Polaris 20 2048 1244 5,095 $199
Radeon RX 570 4GB April 2017 Polaris 20 2048 1244 5,095 $169
GeForce GTX 1660 Super October 2019 TU116 1408 1785 5,027 $229
GeForce GTX 980 September 2014 GM204 2048 1216 4,981 $549
Radeon RX 470 4GB August 2016 Ellesmere 2048 1206 4,940 $179
GeForce GTX 1660 March 2019 TU116 1408 1725 4,858 $219
Radeon R9 290 November 2013 Hawaii 2560 947 4,849 $399
GeForce GTX Titan February 2013 GK110 2688 876 4,709 $999
Radeon HD 5970 November 2009 Hemlock (2x) 3200 725 4,640 $599
GeForce GTX 1060 6GB July 2016 GP106 1280 1708 4,372 $249
Radeon HD 7970 GHz Edition June 2012 Tahiti 2048 1050 4,301 $500
GeForce GTX 780 May 2013 GK110 2304 900 4,147 $649 ($499)
Radeon R9 280X August 2013 Tahiti 2048 1000 4,096 $299
GeForce GTX 1650 Super November 2019 TU116 1280 1590 4,070 $159
Radeon R9 380X November 2015 Tonga 2048 970 3,973 $229
GeForce GTX 1060 3GB August 2016 GP106 1152 1708 3,935 $199
GeForce GTX 970 September 2014 GM204 1664 1178 3,920 $329
Radeon R9 380 June 2015 Tonga 1792 970 3,476 $199
Radeon R9 280 March 2014 Tahiti 1792 933 3,344 $249
GeForce GTX 770 May 2013 GK104 1536 1085 3,333 $399 ($329)
Radeon R9 285 September 2014 Tonga 1792 918 3,290 $249
GeForce GTX 680 March 2012 GK104 1536 1058 3,250 $500
Radeon HD 7870 XT November 2012 Tahiti 1536 975 2,995 $270
GeForce GTX 1650 April 2019 TU117 896 1665 2,984 $149
Radeon HD 7950 January 2012 Tahiti 1792 800 2,867 $450
GeForce GTX 1650 GDDR6 April 2020 TU117 896 1590 2,849 $149
Radeon HD 5870 September 2009 Cypress 1600 850 2,720 $379
Radeon HD 6970 December 2010 Cayman 1536 880 2,703 $369
Radeon R9 270X August 2013 Pitcairn 1280 1050 2,688 $199
GeForce GTX 760 Ti September 2013 GK104 1344 980 2,634 OEM
GeForce GTX 670 May 2012 GK104 1344 980 2,634 $400
GeForce GTX 660 Ti August 2012 GK104 1344 980 2,634 $300
Radeon RX 560 4GB May 2017 Baffin 1024 1275 2,611 $99
Radeon R9 370X August 2015 Pitcairn 1280 1000 2,560 $179
Radeon HD 7870 March 2012 Pitcairn 1280 1000 2,560 $350
GeForce GTX 590 March 2011 2x GF110 1024 607 2,486 $699
GeForce GTX 960 January 2015 GM206 1024 1178 2,413 $199
Radeon HD 4870 X2 August 2008 2x RV770 1600 750 2,400 $449
GeForce GTX 760 June 2013 GK104 1152 1033 2,380 $249
Radeon R9 270 November 2013 Pitcairn 1280 925 2,368 $179
Radeon HD 6950 2GB December 2010 Cayman 1408 800 2,253 $299
Radeon HD 6950 1GB December 2010 Cayman 1408 800 2,253 $259
Radeon RX 460 4GB August 2016 Baffin 896 1200 2,150 $139
Radeon RX 460 2GB August 2016 Baffin 896 1200 2,150 $109
GeForce GTX 1050 Ti October 2016 GP107 768 1392 2,138 $139
Radeon RX 560 4GB October 2017 Baffin 896 1175 2,106 $99
Radeon HD 5850 September 2009 Cypress 1440 725 2,088 $259
Radeon HD 6870 October 2010 Barts 1120 900 2,016 $239
Radeon HD 4850 X2 November 2008 2x RV770 1600 625 2,000 $339
Radeon R9 370 June 2015 Pitcairn 1024 975 1,997 $149
GeForce GTX 660 September 2012 GK106 960 1032 1,981 $230
Radeon R7 260X August 2013 Bonaire 896 1100 1,971 $139
GeForce GTX 1050 October 2016 GP107 640 1518 1,943 $109
Radeon R7 265 February 2014 Pitcairn 1024 925 1,894 $149
GeForce GTX 950 August 2015 GM206 768 1188 1,825 $159
Radeon HD 7790 March 2013 Pitcairn 896 1000 1,792 $150
Radeon HD 5830 February 2010 Cypress 1120 800 1,792 $239
Radeon HD 7850 March 2012 Pitcairn 1024 860 1,761 $250
Radeon R7 360 June 2015 Bonaire 768 1050 1,613 $109
GeForce GTX 650 Ti Boost March 2013 GK106 768 1032 1,585 $170
GeForce GTX 580 November 2010 GF110 512 772 1,581 $499
Radeon R7 260 December 2013 Bonaire 768 1000 1,536 $109
Radeon RX 550 April 2017 Lexa 640 1183 1,514 $79
Radeon HD 6850 October 2010 Barts 960 775 1,488 $179
GeForce GTX 650 Ti October 2012 GK106 768 928 1,425 $150
GeForce GTX 570 December 2010 GF110 480 732 1,405 $349
GeForce GTX 750 Ti February 2014 GK107 640 1085 1,389 $149
Radeon HD 6770 April 2011 Juniper 800 850 1,360 $129
Radeon HD 5770 October 2009 Juniper 800 850 1,360 $159
Radeon HD 4890 April 2009 RV790 800 850 1,360 $249
GeForce GTX 480 March 2010 GF100 480 701 1,346 $499
Radeon HD 6790 April 2011 Barts 800 840 1,344 $149
GeForce GTX 560 Ti (448 Core) November 2011 GF110 448 732 1,312 $289
Radeon HD 7770 February 2012 Cape Verde 640 1000 1,280 $160
GeForce GTX 560 Ti January 2011 GF114 384 822 1,263 $249
Radeon HD 4870 June 2008 RV770 800 750 1,200 $299
GeForce GT 1030 (GDDR5) May 2017 GP108 384 1468 1,127 $70
GeForce GTX 750 February 2014 GK107 512 1085 1,111 $119
GeForce GTX 470 March 2010 GF100 448 608 1,090 $349
GeForce GTX 560 May 2011 GF114 336 810 1,089 $199
GeForce GT 1030 (DDR4) March 2018 GP108 384 1379 1,059 $79
Radeon HD 3870 X2 January 2008 2x R680 640 825 1,056 $449
Radeon HD 6750 January 2011 Juniper 720 700 1,008 OEM
Radeon HD 5750 October 2009 Juniper 720 700 1,008 $129
Radeon HD 4850 June 2008 RV770 800 625 1,000 $199
Radeon HD 4770 April 2009 RV740 640 750 960 $109
Radeon R7 350 February 2016 Cape Verde 512 925 947 $89
Radeon HD 7750 (GDDR5) February 2012 Cape Verde 512 900 922 $110
Radeon HD 7750 (DDR3) February 2012 Cape Verde 512 900 922 $110
GeForce GTX 460 (256-bit) July 2010 GF104 336 675 907 $229
GeForce GTX 460 (192-bit) July 2010 GF104 336 675 907 $199
GeForce GTX 465 May 2010 GF100 352 608 856 $279
GeForce GTX 560 SE February 2012 GF114 288 736 848 OEM
Radeon R7 250E December 2013 Cape Verde 512 800 819 $109
GeForce GTX 650 September 2012 GK107 384 1058 813 $110
Radeon R7 250 (GDDR5) August 2013 Oland 384 1050 806 $99
Radeon R7 250 (DDR3) August 2013 Oland 384 1050 806 $89
Radeon HD 6670 (GDDR5) April 2011 Turks 480 800 768 $109
Radeon HD 6670 (DDR3) April 2011 Turks 480 800 768 $99
GeForce 9800 GX2 March 2008 2x G92 256 1500 768
GeForce GT 740 (GDDR5) May 2014 GK107 384 993 763 $99
GeForce GT 740 (DDR3) May 2014 GK107 384 993 763 $89
GeForce GTX 460 SE November 2010 GF104 288 650 749 $160
Radeon HD 4830 October 2008 RV770 640 575 736 $130
GeForce GT 640 (GDDR5) April 2012 GK107 384 950 730 OEM
GeForce GT 730 (64-bit, GDDR5) June 2014 GK208 384 902 693 $79
GeForce GT 730 (64-bit, DDR3) June 2014 GK208 384 902 693 $69
GeForce GTX 550 Ti March 2011 GF116 192 900 691 $149
Radeon HD 6570 (GDDR5) April 2011 Turks 480 650 624 $89
Radeon HD 6570 (DDR3) April 2011 Turks 480 650 624 $79
Radeon HD 5670 January 2010 Redwood 400 775 620 $99
Radeon HD 7730 (GDDR5) April 2013 Cape Verde 384 800 614 $60
Radeon HD 7730 (DDR3) April 2013 Cape Verde 384 800 614 $60
GeForce GT 640 (DDR3) April 2012 GK107 384 797 612 OEM
GeForce GTS 450 September 2010 GF106 192 783 601 $129
GeForce GTX 295 January 2009 2x GT200 480 576 553 $500
Radeon HD 5570 (GDDR5) February 2010 Redwood 400 650 520 $80
Radeon HD 5570 (DDR3) February 2010 Redwood 400 650 520 $80
GeForce GT 545 (GDDR5) May 2011 GF116 144 870 501 OEM
Radeon R7 240 August 2013 Oland 320 780 499 $69
Radeon HD 3870 November 2007 RV670 320 777 497 $349
Radeon HD 4670 September 2008 RV730 320 750 480 $79
Radeon HD 2900 XT May 2007 R600 320 743 476 $399
GeForce GTS 250 March 2009 G92b 128 1836 470 $150
GeForce 9800 GTX+ July 2008 G92b 128 1836 470
GeForce 9800 GTX April 2008 G92 128 1688 432
Radeon HD 3850 (512MB) November 2007 RV670 320 668 428 $189
Radeon HD 3850 (256MB) November 2007 RV670 320 668 428 $179
Radeon HD 3830 April 2008 RV670 320 668 428 $129
Radeon HD 4650 (DDR3) September 2008 RV730 320 650 416
GeForce 8800 GTS (512MB) December 2007 G92 128 1625 416
GeForce GT 545 (DDR3) May 2011 GF116 144 720 415 $149
Radeon HD 4650 (DDR2) September 2008 RV730 320 600 384
Radeon HD 2900 Pro September 2007 R600 320 600 384 $300
GeForce 8800 Ultra May 2007 G80 128 1500 384
Radeon HD 5550 (GDDR5) February 2010 Redwood 320 550 352 $70
Radeon HD 5550 (DDR3) February 2010 Redwood 320 550 352 $70
Radeon HD 5550 (DDR2) February 2010 Redwood 320 550 352 $70
GeForce 8800 GTX November 2006 G80 128 1350 346
GeForce GT 630 (DDR3) April 2012 GK107 192 875 336 OEM
GeForce 9800 GT July 2008 G92a/G92b 112 1500 336
GeForce 8800 GT (512MB) October 2007 G92 112 1500 336
GeForce 8800 GT (256MB) December 2007 G92 112 1500 336
GeForce GTX 285 January 2009 GT200 240 648 311 $400
GeForce GT 630 (GDDR5) May 2012 GF108 96 810 311 $80
GeForce GT 440 (GDDR5) February 2011 GF108 96 810 311 $100
GeForce GT 440 (GDDR3) February 2011 GF108 96 810 311 $100
GeForce GTX 275 April 2009 GT200 240 633 304 $250
GeForce GTX 280 June 2008 GT200 240 602 289 $650 ($430)
Radeon HD 2900 GT November 2007 R600 240 600 288 $200
GeForce GT 730 (128-bit, DDR3) June 2014 GF108 96 700 269 $69
GeForce GT 530 May 2011 GF118 96 700 269 OEM
GeForce GT 430 October 2010 GF108 96 700 269 $79
GeForce 9600 GSO May 2008 G92 96 1375 264
GeForce 8800 GS January 2008 G92 96 1375 264
GeForce GT 240 (GDDR5) November 2009 GT215 96 1340 257 OEM
GeForce GT 240 (DDR3) November 2009 GT215 96 1340 257 OEM
GeForce GTX 260 September 2008 GT200 216 576 249 $300
Radeon HD 6450 April 2011 Caicos 160 750 240 $55
GeForce 8800 GTS (640MB) November 2006 G80 96 1188 228
GeForce 8800 GTS (320MB) February 2007 G80 96 1188 228
GeForce GTX 260 June 2008 GT200 192 576 221 $400 ($270)
GeForce 9600 GT February 2008 G94 64 1625 208
Radeon R5 230 April 2014 Caicos 160 625 200
Radeon HD 2600 XT June 2007 RV630 120 800 192 $149
Radeon HD 3650 (DDR3) January 2008 RV635 120 725 174
Radeon HD 3650 (DDR2) January 2008 RV635 120 725 174
GeForce GT 520 April 2011 GF119 48 810 156 $59
Radeon HD 2600 Pro June 2007 RV630 120 600 144 $99
GeForce GT 220 (DDR3) October 2009 GT216 48 1360 131 OEM
GeForce GT 220 (DDR2) October 2009 GT216 48 1335 128 OEM
Radeon HD 5450 February 2010 Cedar 80 650 104 $50
Radeon HD 4550 September 2008 RV710 80 600 96
Radeon HD 4350 September 2008 RV710 80 600 96
GeForce 8600 GTS April 2007 G84 32 1450 93
GeForce 9500 GT (GDDR3) July 2008 G96 32 1400 90
GeForce 9500 GT (DDR2) July 2008 G96 32 1400 90
GeForce 8600 GT (GDDR3) April 2007 G84 32 1188 76
GeForce 8600 GT (DDR2) April 2007 G84 32 1188 76
GeForce GT 420 September 2010 GF108 48 700 67 OEM
Radeon HD 2400 XT June 2007 RV610 40 650 52 $55
GeForce 9400 GT August 2008 G96 16 1400 45
Radeon HD 2400 Pro June 2007 RV610 40 525 42
Radeon HD 2300 June 2007 RV610 40 525 42
GeForce 8600 GS April 2007 G84 16 1180 38
Radeon X1950 XTX * October 2006 R580+ 48 650 31.2 * $449
Radeon X1900 XTX * January 2006 R580 48 650 31.2 * $649
Radeon X1950 XT * October 2006 R580+ 48 625 30.0 *
Radeon X1900 XT * January 2006 R580 48 625 30.0 * $549
GeForce 8500 GT April 2007 G86 16 900 29
GeForce 8400 GS June 2007 G86 16 900 29
GeForce 7950 GX2 * June 2006 2x G71 48 500 24.0 *
GeForce 9300 GS June 2008 G98 8 1400 22
GeForce 9300 GE June 2008 G98 8 1300 21
Radeon X1950 Pro * October 2006 RV570 36 575 20.7 *
Radeon X1900 GT * May 2006 R580 36 575 20.7 *
Radeon X1950 GT * January 2007 RV570 36 500 18.0 *
GeForce 7900 GTX * March 2006 G71 24 650 15.6 *
GeForce 7900 GTO * October 2006 G71 24 650 15.6 *
GeForce 8300 GS July 2007 G86 8 900 14
GeForce 7950 GT * September 2006 G71 24 550 13.2 *
GeForce 7800 GTX (512MB) * November 2005 G70 24 550 13.2 *
Radeon X1650 XT * October 2006 RV560 24 525 12.6 *
GeForce 7900 GT * March 2006 G71 24 450 10.8 *
GeForce 7800 GTX (256MB) * June 2005 G70 24 430 10.3 *
Radeon X1800 XT * October 2005 R520 16 625 10.0 * $549
Radeon X1650 GT * May 2007 RV560 24 400 9.6 *
GeForce 7900 GS * May 2006 G71 20 450 9.0 *
Radeon X850 XT Platinum * December 2004 R480 16 540 8.6 *
Radeon X850 XT * December 2004 R480 16 520 8.3 *
Radeon X800 XT Platinum * May 2004 R423 16 520 8.3 *
Radeon X800 XT * December 2004 R423 16 500 8.0 *
Radeon X1800 XL * October 2005 R520 16 500 8.0 *
GeForce 7800 GT * August 2005 G70 20 400 8.0 *
Radeon X1650 Pro * August 2006 RV535 12 600 7.2 *
Radeon X1600 XT * October 2005 RV530 12 590 7.1 *
GeForce 7600 GT * March 2006 G73 12 560 6.7 *
Radeon X800 XL * December 2004 R430 16 400 6.4 *
GeForce 6800 Ultra * June 2004 NV45 16 400 6.4 *
Radeon X850 Pro * December 2004 R480 12 507 6.1 *
Radeon X1800 GTO * March 2006 R520 12 500 6.0 * $249
Radeon X1600 Pro * October 2005 RV530 12 500 6.0 *
Radeon X1300 XT * August 2006 RV530 12 500 6.0 *
GeForce 7800 GS * February 2006 G70 16 375 6.0 *
Radeon X800 Pro * May 2004 R423 12 475 5.7 *
GeForce 6800 GT * June 2004 NV45 16 350 5.6 *
GeForce 6800 GS (PCIe) * November 2005 NV42 12 425 5.1 *
Radeon X800 GTO (256MB) * September 2005 R423/R480 12 400 4.8 *
Radeon X800 GTO (128MB) * September 2005 R423/R480 12 400 4.8 *
GeForce 7600 GS * March 2006 G73 12 400 4.8 *
Radeon X800 * December 2004 R430 12 392 4.7 *
GeForce 6800 GS (AGP) * December 2005 NV40 12 350 4.2 *
GeForce 6600 GT * November 2004 NV43 8 500 4.0 *
GeForce 6800 * November 2004 NV41/NV42 12 325 3.9 *
Radeon X800 GT * December 2005 R423/R480 8 475 3.8 *
Radeon X800 SE * October 2004 R420 8 425 3.4 *
Radeon X700 Pro * December 2004 RV410 8 425 3.4 *
Radeon 9800 XT * September 2003 R360 8 412 3.3 *
Radeon X700 * September 2005 RV410 8 400 3.2 *
Radeon 9800 Pro * March 2003 R350 8 380 3.0 *
GeForce 7300 GT (GDDR3) * May 2006 G73 8 350 2.8 *
GeForce 7300 GT (DDR2) * May 2006 G73 8 350 2.8 *
Radeon 9800 SE (128-bit) * March 2003 R350 8 325 2.6 *
Radeon 9800 * March 2003 R350 8 325 2.6 *
Radeon 9700 Pro * July 2002 R300 8 325 2.6 *
GeForce 6800 XT * September 2005 NV42 8 325 2.6 *
GeForce 6800 LE * January 2005 NV41/NV42 8 325 2.6 *
Radeon X1300 Pro * October 2005 RV515 4 600 2.4 *
GeForce 6600 (128-bit) * August 2004 NV43 8 300 2.4 *
Radeon 9700 * October 2002 R300 8 275 2.2 *
Radeon 9500 Pro * October 2002 R300 8 275 2.2 *
GeForce 7300 GS * January 2006 G72 4 550 2.2 *
Radeon X600 XT * September 2004 RV380 4 500 2.0 *
Radeon X1550 * January 2007 RV516 4 500 2.0 *
Radeon 9600 XT * September 2003 RV360 4 500 2.0 *
GeForce FX 5800 Ultra * January 2003 NV30 4 500 2.0 *
GeForce FX 5950 Ultra * October 2003 NV38 4 475 1.9 *
GeForce FX 5700 Ultra * October 2003 NV36 4 475 1.9 *
GeForce FX 5900 Ultra * May 2003 NV35 4 450 1.8 *
GeForce FX 5700 * October 2003 NV36 4 425 1.7 *
Radeon X600 Pro * September 2004 RV370 4 400 1.6 *
Radeon X600 Pro * September 2004 RV380 4 400 1.6 *
Radeon X600 * September 2004 RV370 4 400 1.6 *
Radeon 9600 Pro * March 2003 RV350 4 400 1.6 *
GeForce FX 5900 XT * December 2003 NV35 4 390 1.6 *
GeForce FX 5900 * May 2003 NV35 4 400 1.6 *
GeForce FX 5800 * January 2003 NV30 4 400 1.6 *
GeForce FX 5600 Ultra * March 2003 NV31 4 400 1.6 *
Radeon 9800 SE (256-bit) * March 2003 R350 4 380 1.5 *
GeForce 7300 LE * March 2006 G72 4 350 1.4 *
GeForce 6200 TurboCache * December 2004 NV44 4 350 1.4 *
Radeon 9600 SE * September 2003 RV350 4 325 1.3 *
Radeon 9600 * September 2003 RV350 4 325 1.3 *
GeForce FX 5600 * March 2003 NV31 4 325 1.3 *
GeForce FX 5200 Ultra * March 2003 NV34 4 325 1.3 *
GeForce 6600 LE * June 2005 NV43 4 325 1.3 *
Radeon X300 SE * September 2004 RV370 4 300 1.2 *
GeForce 6200 * October 2004 NV43 4 300 1.2 *
GeForce 4 Ti4800 * January 2003 NV28 4 300 1.2 *
GeForce 4 Ti4600 * February 2002 NV25 4 300 1.2 *
Radeon 9500 * October 2002 R300 4 275 1.1 *
Radeon 8500 * August 2001 R200 4 275 1.1 *
GeForce FX 5500 * March 2004 NV34B 4 270 1.1 *
GeForce 4 Ti4800 SE * January 2003 NV28 4 275 1.1 *
GeForce 4 Ti4400 * February 2002 NV25 4 275 1.1 *
Radeon X1050 (128-bit) * December 2006 RV350 4 250 1.0 *
Radeon 9550 * January 2004 RV350 4 250 1.0 *
Radeon 9250 * March 2004 RV280 4 240 1.0 *
Radeon 9200 * April 2003 RV280 4 250 1.0 *
Radeon 9100 * April 2003 R200 4 250 1.0 *
Radeon 9000 * August 2002 RV250 4 250 1.0 *
GeForce FX 5700 LE * March 2004 NV36 4 250 1.0 *
GeForce FX 5200 (64-bit) * March 2003 NV34 4 250 1.0 *
GeForce FX 5200 (128-bit) * March 2003 NV34 4 250 1.0 *
GeForce 4 Ti4200 * April 2002 NV25 4 250 1.0 *
GeForce 3 Ti500 * October 2001 NV20 4 240 1.0 *
GeForce 2 Ultra * August 2000 NV16 4 250 1.0 *
GeForce 2 Ti * October 2001 NV15 4 250 1.0 *
GeForce 7200 GS * January 2006 G72 2 450 0.9 *
Radeon X300 * September 2004 RV370 4 200 0.8 *
Radeon 9200 SE * March 2003 RV280 4 200 0.8 *
GeForce 3 * February 2001 NV20 4 200 0.8 *
GeForce 2 GTS * April 2000 NV15 4 200 0.8 *
GeForce 3 Ti200 * October 2001 NV20 4 175 0.7 *
Radeon 7500 * August 2001 RV200 2 290 0.6 *
GeForce 4 MX460 * February 2002 NV17 2 300 0.6 *
GeForce 4 MX440 * February 2002 NV17 2 275 0.6 *
Rage Fury MAXX * October 1999 2x ATI Rage 4 125 0.5 *
GeForce 4 MX420 * February 2002 NV17 2 250 0.5 *
GeForce 256 SDR * October 1999 NV10 4 120 0.5 *
GeForce 256 DDR * December 1999 NV10 4 120 0.5 *
GeForce 2 MX400 * March 2001 NV11 2 200 0.4 *
GeForce 2 MX200 * March 2001 NV11 2 175 0.4 *
Rage 128 Ultra * August 1999 ATI Rage 2 130 0.3 *
Rage 128 Pro * August 1999 ATI Rage 2 125 0.3 *
Radeon SDR * June 2000 R100 2 166 0.3 *
Radeon LE * May 2001 R100 2 150 0.3 *
Radeon DDR * April 2000 R100 2 166 0.3 *
Radeon 7200 SDR * June 2000 R100 2 166 0.3 *
Radeon 7200 DDR * April 2000 R100 2 166 0.3 *
Nvidia Riva TNT2 Ultra * March 1999 NV5 2 150 0.3 *
Nvidia Riva TNT2 Pro * October 1999 NV5 2 143 0.3 *
Nvidia Riva TNT2 * March 1999 NV5 2 125 0.3 *
Rage 128 GL * August 1998 ATI Rage 2 103 0.2 *
Radeon 7000 * February 2001 RV100 1 183 0.2 *
Nvidia Riva TNT * June 1998 NV4 2 90 0.2 *
Nvidia Riva 128 * August 1997 NV3 1 100 0.1 *

* — Denotes performance measured in gigaoperations per second, as opposed to GFLOPS. Older GPU architectures without unified shader support aren’t directly comparable with newer architectures.

Finding Discounts on the Best Graphics Cards

With all the GPU shortages these days, you’re unlikely to see huge sales on a graphics card, but you may find some savings by checking out the latest Newegg promo codes, Best Buy promo codes and Micro Center coupon codes.

For even more information, check out our Graphics Card Buyer’s Guide.

MORE: Best Graphics Cards for Gaming

MORE: Graphics Card Power Consumption Tested

MORE: How to Stress-Test Graphics Cards (Like We Do)

MORE: CPU Benchmarks

Want to comment on this story? Let us know what you think in the Tom’s Hardware Forums.

Jarred Walton is a senior editor at Tom’s Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge ‘3D decelerators’ to today’s GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

How to Buy the Right Graphics Card: A GPU Guide for 2022


(Image credit: Tom’s Hardware)

Getting the best graphics card is key if you’re looking to buy the best gaming PC or looking to build a PC on your own. The graphics card is even more important than the CPU. Unfortunately, the process of figuring out how to buy a GPU can be intimidating. There’s so much to consider, from the type of monitor you’re using (for recommendations, see our Best Gaming Monitors page) to the size of your PC case to the game settings you plan to play at.

Below are the things you need to keep in mind when shopping for your next GPU. For specific recommendations, see our best graphics cards list of the current options, as well as the GPU Benchmarks Hierarchy to see how today’s cards compare to older cards that you might be looking to upgrade and replace.

Thankfully, the supply and GPU prices on Nvidia’s RTX 30-series cards as well as AMD’s RX 6000 cards continue to improve. After 18 months of extreme prices, most cards can now be found online for only 20–30% over MSRP, sometimes less. However, next-generation GPUs like the Nvidia ‘Ada’ RTX 40-series and AMD’s RDNA 3 are around the corner, so keep that in mind.

Quick tips

  • Save some money for the CPU. If you spend all your money on graphics and don’t opt for one of the best CPUs, your system might score well on synthetic benchmarks but won’t do as well in real game play (due to lower minimum frame rates).
  • Match your monitor resolution. Many mainstream cards are sufficient for gaming at 1080p at 30-60 fps, but you’ll need a high-end card for resolutions at or near 4K with high in-game settings on the most demanding titles. So be sure to pair your GPU with the best gaming monitor for your needs.
  • Consider your refresh rate. If your monitor has triple-digit refresh rates, you’ll need a powerful card and processor to reach its full potential. Alternatively, if your monitor tops out at 60Hz and 1080p, there’s no point in paying extra for a powerful card that pushes pixels faster than your display can keep up with.
  • Do you have enough power and space? Make sure your PC case has enough room for the card you’re considering, and that your power supply has enough watts to spare, along with the correct type of power connectors (up to three 8-pin PCIe, depending on the card).
  • Check the MSRP before buying. A good way to tell if you’re getting a deal is to check the launch price or MSRP of the card you’re considering before buying. Tools like CamelCamelCamel can help separate the real deals from the fake mark-up-then-discount offerings.
  • Don’t get dual cards—they’re not worth it. Game support for Multi-card SLI or CrossFire setups has basically died. Get the best single card you can afford. Adding a second card is usually more trouble than it’s worth.
  • Don’t count on overclocking for serious performance boosts. If you need better performance, buy a more-powerful card. Graphics cards don’t typically have large amounts of overclocking headroom, usually only 5-10%.

AMD or Nvidia?

Nvidia and AMD GPUs (Image credit: Tom’s Hardware)

There are hundreds of graphics cards from dozens of manufacturers, but only two companies actually make the GPUs that power these components: Nvidia and AMD — though Intel’s Xe Graphics has started to ship for laptops and should also come to desktops in the next few months. With its RX 6000 cards, AMD is more competitive in general performance than it has been in years against Nvidia and its current-gen Ampere cards, like the GeForce RTX 3080.

That said, the realistically lit elephant in the room that we’ve been ignoring thus far is real-time ray tracing. Introduced as a major new feature with Nvidia’s now previous-generation RTX 20-series cards, "Team Green" is now on its second generation of RTX hardware with the 30-series GPUs. AMD ("Team Red") stepped into this game in a big way in 2020 with its RX 6000 cards, but it’s still on its first go-round with real-time ray tracing, and so lags behind Nvidia on this front.

Still, the rollout of games that make use of (and specifically make good use of) ray tracing has been slow. There’s no doubt that more games are adding RT support — and many more will in the future, as ray tracing is also supported by the Sony PlayStation 5 and Microsoft Xbox Series X|S consoles. At present, the list of games with what we would categorize as impressive use of ray tracing remains relatively limited.

(Image credit: Tom’s Hardware)

Our Ray Tracing GPU Benchmarks Hierarchy breaks things down using six demanding RT games. Games that only use a single RT effect, like reflections, tend to be less demanding and less impressive overall. So weigh the importance of ray tracing performance with how interested you are in these games, how important the best possible visuals are to your enjoyment, and how much future-proofing you want baked into your GPU.

Also, don’t forget DLSS, Nvidia’s AI-assisted resolution upscaling. It renders at a lower resolution and upscales, delivering close to native image quality with much less of a frame rate hit than rendering at your monitor’s full resolution the traditional way. Support for this feature is limited to a subset of games, admittedly a growing one — many of the games with ray tracing also offer DLSS. AMD has its own open source alternative to DLSS, called FidelityFX Super Resolution (AMD FSR), and FSR 2.0 should further improve things, but DLSS is more widely supported in games that really need upscaling.

For more on these subjects as well as screen-smoothing variable refresh technologies, see our AMD vs Nvidia: Who Makes the Best GPUs? and FreeSync vs. G-Sync: Which Variable Refresh Tech Is Best Today? features.

How Much Can You Spend?

The price of video cards varies greatly, with super low-end cards starting under $100 and high-end models going for $2,000 or more in the case of the GeForce RTX 3090 Ti. As is often the case, top-end cards aren’t worth the money unless for some reason you absolutely have to have the best performance possible, or if you do professional work where 10% more performance will pay for itself over time.

Dropping a tier or two down will greatly improve the bang for the buck. Currently, for example, an RTX 3080 12GB can be had for about $1,000. That’s half as much as the RTX 3090 Ti, for about 15% less performance on average. The same goes for the AMD side. The RX 6900 XT costs about $1,050 while the RX 6700 XT can be had for half that much. There’s no question about the 6900 being faster, but is it worth paying double the price? Only you can decide.
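
If you want to put numbers on the bang-for-the-buck argument, a back-of-the-envelope calculation like the one below does the trick. The prices and the roughly 15% performance gap are the ballpark figures quoted above, not exact measurements.

```python
# Approximate street prices and relative performance (RTX 3090 Ti = 100%).
cards = {
    "RTX 3090 Ti":   {"price": 2000, "relative_perf": 100},
    "RTX 3080 12GB": {"price": 1000, "relative_perf": 85},  # "about 15% less performance"
}

for name, card in cards.items():
    print(f"{name}: {card['relative_perf'] / card['price']:.3f} performance points per dollar")

# In this example the 3080 12GB delivers roughly 70% more performance per dollar.
```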

Here’s the short list of current generation cards and the best prices we’re tracking right now:

  • EVGA RTX 3090 Ti for $1,999.99 at EVGA
  • MSI RTX 3090 for $1,679.99 at Newegg
  • MSI RTX 3080 Ti for $1,269.99 at Newegg
  • EVGA RTX 3080 12GB for $1,107.94 at Amazon
  • EVGA RTX 3080 10GB for $919.99 at EVGA
  • EVGA RTX 3070 Ti for $759.99 at EVGA
  • Gigabyte RTX 3070 for $729.99 at Newegg
  • MSI RTX 3060 Ti for $579.99 at Newegg
  • PNY RTX 3060 for $488.99 at Amazon
  • EVGA RTX 3050 for $249.99 at EVGA
  • MSI RX 6900 XT for $1,019.99 at Newegg
  • Sapphire RX 6800 XT for $859.00 at Newegg
  • ASRock RX 6800 for $799.99 at Newegg
  • ASRock RX 6700 XT for $528.99 at Newegg
  • XFX RX 6600 XT for $429.99 at Newegg
  • ASRock RX 6600 for $335.99 at Newegg
  • PowerColor RX 6500 XT for $199.99 at Amazon
  • XFX RX 6400 for $179.99 at Newegg

Which GPUs are budget, mid-range and high-end?

Here’s a breakdown of the major current GPUs and where they stand, grouped roughly by price and performance. (For example, note that the GTX 1070 now sits in the ‘mid-range’ tier, since it’s about as fast as a 1660 Super.) Remember that not all cards with a given GPU will perform exactly the same. For more detail, check out the GPU Benchmarks page.

Each class below lists its GPUs in rough performance order:

  • Super cheap: Nvidia GeForce GT 1030; AMD Radeon RX 550. Only buy these if you don’t game (or you don’t game much) and your CPU doesn’t have integrated graphics.
  • Budget cards: Nvidia GeForce GTX 1650 Super, GTX 1650; AMD Radeon RX 6500 XT, RX 6400, RX 5500 XT 4GB/8GB. Older: Nvidia GTX 1060, GTX 1050 Ti, GTX 1050; AMD RX 590, RX 580, RX 570, RX 560. Decent for playing games at 1080p or lower resolutions at medium-to-low settings.
  • Mid-range cards: Nvidia GeForce RTX 3050, RTX 2060, GTX 1660 Ti, GTX 1660 Super, GTX 1660; AMD Radeon RX 6600 XT, RX 6600, RX 5700, RX 5600 XT. Older: Nvidia GTX 1070 Ti, GTX 1070; AMD RX Vega 56. Good for 1080p gaming and compatible with VR headsets.
  • High-end cards: Nvidia GeForce RTX 3070 Ti, RTX 3070, RTX 3060 Ti, RTX 3060, RTX 2080 Ti, RTX 2080 Super, RTX 2070 Super, RTX 2070, RTX 2060 Super; AMD Radeon RX 6800, RX 5700 XT. Older: Nvidia GTX 1080 Ti, GTX 1080; AMD Radeon VII, RX Vega 64. Good for VR headsets and for gaming at 1440p or on high-refresh 1080p monitors.
  • Premium / Extreme: Nvidia GeForce RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, Titan RTX; AMD Radeon RX 6900 XT, RX 6800 XT. Older: Nvidia Titan V, Titan Xp. These are best for 4K, and the RTX cards support ray tracing and AI tech.

How to buy a GPU: Which specs matter and which don’t?

  • Graphics card memory amount: Critical. Get a card with at least 6GB, and preferably 8GB or more for gaming at 1080p. You’ll need more memory if you play with all the settings turned up or you install high-resolution texture packs. And if you’re gaming at very high resolutions such as 4K, more than 8GB is ideal.
  • Form factor: Very important. You need to make sure you have room in your case for your card. Look at the length, height, and thickness. Graphics cards can come in half-height (slim), single-slot, dual-slot, and even triple-slot flavors (or more). Most gaming-focused cards will be full-height and occupy two or more expansion slots, with current-gen cards being thicker and larger than many previous-gen models. Even if a card technically only takes up two slots in your case, if it has a big heatsink and fan shroud, it can block an adjacent slot. If you have a tiny Mini-ITX motherboard, look for a ‘mini’ card, which is generally 8 inches (205mm) long or less. However, some cards that carry this moniker are longer, so check the specs.
  • TDP: Important. Thermal Design Power or TDP is a measurement of heat dissipation, but it also gives you an estimate of how many watts you’ll need to run your card at stock settings. (AMD and Nvidia both seem to be shifting to TBP, Typical Board Power, which means the power of the entire card. That’s what most of us expect when we’re talking about graphics power anyway.) If you’re running a 400-watt power supply unit (PSU) with an overclocked 95-watt CPU and you want to add a card with a 250-watt TDP, you’re almost certainly going to need a PSU upgrade; a rough way to size the PSU is sketched after this list. Generally speaking, a 600W PSU was fine for many previous-generation cards, but if you’re opting for an RTX 3080/RX 6800 XT or above, you’re better off choosing a higher-wattage PSU, especially if overclocking is on the table. With cards like the RTX 3090 Ti, and rumors of next-gen 600W GPUs on the horizon, extreme users will probably want a 1200-1600W PSU. Yikes!

Power connector (Image credit: Tom’s Hardware)

  • Power Connectors: Important. All serious gaming cards draw more than the standard maximum of 75W that the x16 PCIe slot provides. These cards require connecting supplemental PCIe power connectors that come in 6- and 8-pin varieties. Nvidia’s own RTX 30-series cards come with 12-pin connectors, but the cards also include 8-pin to 12-pin adapters. Some cards have one of these connectors, some two or even three, and 6- and 8-pin ports can exist on the same card. If your power supply doesn’t have the supplemental connectors you need, you’ll want to upgrade—adapters that draw power from a couple of SATA or Molex connectors are not recommended as long-term solutions.

Ports (Image credit: Tom’s Hardware)

  • Ports: Critical. Some monitors have HDMI, others use DisplayPort, and some older units only have DVI. A few monitors also support USB Type-C routing DisplayPort signals, but these are relatively rare for the time being. Make sure the card you plan to buy has the connectors you need for your monitor(s), so you don’t have to buy an adapter—or potentially a new display (unless you want to). Have a choice and not sure which port you want to use? See our HDMI vs. DisplayPort story for more details.
  • Clock speed: Somewhat important. Among cards with the same GPU (ex: an RTX 3060 Ti), some will be manufacturer overclocked to a slightly higher speed, which can make a modest 3–5% difference in frame rates. Clock speed isn’t everything, however, as memory speed, core counts and architecture need to be factored in. Better cooling often trumps clock speed as well, on cards with the same GPU.
  • CUDA Cores / Stream Processors: Somewhat important, like clock speed, as it only gives you part of what you need to know when trying to determine the approximate performance level of a GPU. Comparing core counts within the same architecture is more meaningful than comparing different architectures. So looking at Nvidia Turing vs. Ampere CUDA cores (or Streaming Multiprocessors) isn’t as useful as looking at just Ampere. The same goes for AMD, where comparing Navi and Vega or Polaris Stream Processors (or Compute Units) isn’t particularly helpful. Comparing AMD and Nvidia architectures based purely on core counts is even less useful.
  • TFLOPS / GFLOPS: Important. TFLOPS, or trillions of floating-point operations per second, is an indication of the maximum theoretical performance of a GPU. (It may also be expressed as GFLOPS, or billions of FLOPS.) Core count multiplied by the clock speed in GHz, multiplied by two (for FMA, or Fused Multiply Add instructions), gives you the theoretical throughput of a GPU; see the worked example after this list. Comparing within the same architecture, TFLOPS generally tells you how much faster one chip is compared to another. Comparing across architectures (e.g., AMD Navi 10 vs. Nvidia Turing TU106, or AMD Navi 10 vs. AMD Vega 10) is less useful.
  • Memory speed / bandwidth: Somewhat important. Like higher clock speed, faster memory can make one card faster than another. The GTX 1650 GDDR6 for example is about 15% faster than the GTX 1650 GDDR5, all thanks to the increased memory bandwidth. Note that features like AMD’s Infinity Cache on RDNA 2 help reduce the number of memory accesses, so bandwidth alone isn’t the only factor to consider.
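The TDP guidance above boils down to simple arithmetic: add the board power of the GPU and CPU plus a buffer for the rest of the system, then leave headroom so the PSU isn’t running near its limit. Here’s a rough sketch of that estimate using the 250W-card example from the TDP bullet; the 100W rest-of-system figure and the 60% target load are rule-of-thumb assumptions, not an official guideline.

```python
def recommended_psu_watts(gpu_tbp, cpu_tdp, rest_of_system=100, target_load=0.6):
    """Rule-of-thumb PSU estimate: total draw divided by a target load factor.

    rest_of_system and target_load are assumptions used for illustration only.
    """
    total_draw = gpu_tbp + cpu_tdp + rest_of_system
    return total_draw / target_load

# The example from the TDP bullet: a 250W card plus an overclocked 95W CPU
print(round(recommended_psu_watts(250, 95)))  # ~742W, so a 400W PSU clearly won't cut it
```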
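And as promised in the TFLOPS bullet, here’s the formula worked out against numbers from the spec tables in this article (reference boost clocks and core counts; expect small rounding differences).

```python
def fp32_tflops(cores, boost_mhz):
    # cores x clock (GHz) x 2 FLOPs per clock (FMA), divided by 1000 to go from GFLOPS to TFLOPS
    return cores * (boost_mhz / 1000) * 2 / 1000

print(fp32_tflops(10752, 1860))  # GeForce RTX 3090 Ti (reference): ~40.0 TFLOPS
print(fp32_tflops(5120, 2310))   # Radeon RX 6950 XT (reference):   ~23.7 TFLOPS
```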

Can it support VR?

If you want to use your GPU with a PC VR HMD, you need at least a mid-range card, with optimal performance coming from a card like the Nvidia RTX 2060 Super/AMD RX 5700 or higher. The lowest-end cards you can use with these headsets are the AMD Radeon RX 570 and Nvidia GTX 1060, and the card requirements of course increase with newer, higher-resolution headsets. Obviously, this isn’t a critical factor if you have no interest in VR.

What about ray tracing and AI?

We discussed this above, but to briefly recap, Nvidia’s latest RTX 30-series GPUs are the best solution for ray tracing and DLSS. AMD’s RX 6000-series GPUs have similar ray tracing performance to Nvidia’s RTX 20-series, but they lack support for DLSS and AMD’s FidelityFX Super Resolution isn’t quite the same thing. Intel for its part will support RT in hardware and has a competing XeSS upscaling solution that uses Xe Matrix cores, basically the same idea as Nvidia’s Tensor cores. From what we know, Intel’s RT performance will be very low, given even the fastest Arc A770 only has 32 ray tracing units — though we don’t yet know how fast the RTUs are in comparison to Nvidia’s RT cores.

Game support for DXR (DirectX Raytracing) and DLSS/FSR continues to improve, but there are tons of games where it’s simply not an important consideration. If you like to turn on all the bells and whistles, placebo effect increases in image quality be damned, that’s fine. We expect RT performance to become increasingly important in the coming years, but it could be two or three more GPU architectures before it’s a make or break deal.

(Image credit: Tom’s Hardware)

Even after you decide what GPU you’re after (say, for example, an RTX 3060 Ti), you’ll usually be faced with plenty of options in terms of cooler design and brand or manufacturer. Nvidia makes and sells its own cards under the Founders Edition moniker for higher-end models, while AMD licenses its reference design to other manufacturers. Both companies’ GPUs appear in third-party cards from several different vendors.

More expensive third-party cards come with elaborate coolers, extra fans, lots of RGB lighting, and often higher factory clock speeds, but all of that adds to the price compared to the reference card. The factory overclocks usually amount to just a few extra FPS, so don’t feel bad if you’re not running a blinged-out card. That said, beefier cooling often translates to cooler, quieter operation, which can be important given that high-end graphics cards are usually the noisiest, most heat-generating parts in a PC build.

We’ve also noticed that Nvidia’s RTX 3080 and 3090 Founders Edition cards (along with several custom models) can get particularly hot on their GDDR6X, so it pays to do some research. For much more on this discussion, see our Graphics Card Face-Off: Founders Edition or Reference GPUs vs 3rd-Party Design feature.

Once you’ve considered all the above and are ready to narrow down your choices, you can head to our GPU Benchmarks and our Best Graphics Cards to help finalize your buying decision. Here we include a condensed version of our current favorite cards for common resolutions and gaming scenarios below. Keep in mind that there are third-party options for all of these cards, so you may want to use these picks as a jumping-off point to finding, say, the best AMD Radeon RX 6800 XT model for your particular gaming build.

Best Budget Pick

Nvidia GeForce RTX 3050

The Nvidia GeForce RTX 3050 almost looked too good to be true, promising full RT and DLSS support with a starting price of $250. At launch it immediately sold out and we saw prices of over $400. Three months later, you can actually find the cards in stock for just $250. Some might argue that’s not really a "budget" price, but dropping down $50 to the RX 6500 XT results in 35% less performance and effectively useless DXR support. If you want to go lower than $250, we suggest looking at previous generation cards and perhaps even a used graphics card. That’s a big can of worms to open, but when the cheapest GTX 1650 Super cards cost well over $300, there’s no point in even considering them.

Best Mainstream Pick

AMD Radeon RX 6600

The AMD Radeon RX 6600 nominally costs the same $329 as the RTX 3060 below, and performance outside of DXR/DLSS games is basically tied. However, AMD’s GPU can actually be found for close to MSRP, while Nvidia’s card costs nearly 50% more. Winner: AMD

Best Mainstream Nvidia Pick

Nvidia GeForce RTX 3060

The Nvidia GeForce RTX 3060  upgrades the memory and GPU quite a bit compared to the budget 3050, delivering 35% more performance on average. 12GB of VRAM also means you won’t need to worry about running out of memory any time soon. Nominally priced at $329, the RTX 3060 still tends to cost more than we’d like, but keep an eye out for future price drops.

Best Card for 1440p

AMD Radeon RX 6700 XT

Just a bit more money than the RTX 3060 will get you an AMD Radeon RX 6700 XT , and a honking 37% boost to performance in most games. It’s about a tie in DXR performance, making AMD’s card the easy pick this time. You’ll also get great 1440p gaming performance, with over 60 fps in most games even at ultra settings, and 12GB of VRAM should be plenty for the next several years at least.

Best High-End Card

Nvidia GeForce RTX 3080

If you’re looking for the champion of graphics cards, right now it’s the GeForce RTX 3080. Technically there are slightly faster cards, but they all cost more (i.e., the RTX 3090 Ti, RTX 3090, and RTX 3080 Ti) or have lackluster ray tracing performance (RX 6900 XT, RX 6800 XT). The RTX 3080 can max out all the graphics settings at 4K in most games, and DLSS can do wonders for ray tracing performance. Just beware that Nvidia’s next-generation Ada GPUs are slated to arrive around the September timeframe.

MORE: Best Graphics Cards

MORE: GPU Benchmarks

MORE: How to Stress-Test Graphics Cards (Like We Do)

MORE: All Graphics Content

MORE: How to Sell Your Used PC Components

After a rough start with the Mattel Aquarius as a child, Matt built his first PC in the late 1990s and ventured into mild PC modding in the early 2000s. He’s spent the last 15 years covering emerging technology for Smithsonian, Popular Science, and Consumer Reports, while testing components and PCs for Computer Shopper, PCMag and Digital Trends.


AMD Radeon RX 6950 XT Review: The Emperor’s New GDDR6


(Image: © Tom’s Hardware)

Tom’s Hardware Verdict

The Sapphire RX 6950 XT Nitro+ Pure boasts some of the fastest non-ray tracing results we’ve ever seen. Technically the RTX 3090 Ti still wins at 4K ultra, but 1080p and 1440p go to AMD, for a bit more than half the cost of Nvidia’s latest tour de force. The main drawback is that AMD’s RDNA 3 architecture should arrive before the end of the year, providing even better performance and features.


Pros

  • Great non-ray tracing performance
  • Affordable compared to Nvidia cards
  • Reasonably efficient

Cons

  • AMD RDNA 3 and Nvidia Ada are coming
  • Not a major upgrade from an RX 6900 XT
  • Sapphire card needs lots of room

After numerous leaks and rumors, the AMD Radeon RX 6950 XT has arrived, accompanied by the Radeon RX 6750 XT and RX 6650 XT. With faster memory, higher GPU clocks, and a modest increase in power consumption, AMD’s fastest can go toe-to-toe with Nvidia’s fastest and come out with some key wins. 4K and ray tracing still favor Nvidia, but at nearly twice the cost, it’s difficult to justify moving up to the RTX 3090 Ti. Nevertheless, for anyone who wants maximum AMD performance, the 6950 XT ranks as the best graphics card from Team Red and even takes over the top position in our GPU benchmarks hierarchy, at least for the traditional ranking.

AMD didn’t provide samples of its new cards, unfortunately, and despite our best efforts, we were only able to procure the RX 6950 XT prior to launch — thanks to Sapphire. We’ll have RX 6750 XT and RX 6650 XT reviews in the near future, and we already have an Asus RX 6750 XT in hand (it arrived yesterday, but testing isn’t complete yet). As with Nvidia’s RTX 3090 Ti launch, that means we’ll be using factory overclocked cards, which is probably fine as that’s what most people will be buying anyway. Here’s the breakdown of the specs for the top AMD and Nvidia GPUs, plus the other two new AMD offerings. 

GPU Specifications
| Graphics Card | Sapphire RX 6950 XT | RX 6950 XT (reference) | RTX 3090 Ti | RX 6750 XT | RX 6650 XT |
|---|---|---|---|---|---|
| Architecture | Navi 21 | Navi 21 | GA102 | Navi 22 | Navi 23 |
| Process Technology | TSMC N7 | TSMC N7 | Samsung 8N | TSMC N7 | TSMC N7 |
| Transistors (Billion) | 26.8 | 26.8 | 28.3 | 17.2 | 11.1 |
| Die Size (mm²) | 519 | 519 | 628.4 | 336 | 237 |
| SMs / CUs | 80 | 80 | 84 | 40 | 32 |
| GPU Cores | 5120 | 5120 | 10752 | 2560 | 2048 |
| Tensor Cores | N/A | N/A | 336 | N/A | N/A |
| Ray Tracing Cores | 80 | 80 | 84 | 40 | 32 |
| Boost Clock (MHz) | 2435 | 2310 | 1860 | 2600 | 2635 |
| VRAM Speed (Gbps) | 18 | 18 | 21 | 18 | 18 |
| VRAM (GB) | 16 | 16 | 24 | 12 | 8 |
| VRAM Bus Width | 256 | 256 | 384 | 192 | 128 |
| ROPs | 128 | 128 | 112 | 64 | 64 |
| TMUs | 320 | 320 | 336 | 160 | 128 |
| TFLOPS FP32 (Boost) | 24.9 | 23.7 | 40 | 13.3 | 10.8 |
| TFLOPS FP16 (Tensor) | N/A | N/A | 160 (320) | N/A | N/A |
| Bandwidth (GBps) | 576 | 576 | 1008 | 432 | 288 |
| TDP (watts) | ~370? | 335 | 450 | 250 | 180 |
| Launch Date | May 2022 | May 2022 | Mar 2022 | May 2022 | May 2022 |
| Launch Price | $1,249 | $1,099 | $1,999 | $549 | $399 |

AMD’s three new models take the existing Navi 21, 22, and 23 GPUs in their maximum configurations and add in 18 Gbps GDDR6 memory, slightly higher power levels, and increased prices. The RX 6900 XT and RX 6700 XT will apparently continue to be sold alongside the newcomers, while the RX 6600 XT will be phased out and the RX 6650 XT will be the sole replacement. AMD hasn’t announced any official price changes to the 6900 or 6700 cards yet, though we’ve seen the RX 6900 XT selling for as little as $900 (after a $50 rebate) in the past week.

It will be interesting to see how retail pricing changes over the coming months, as many graphics card manufacturers still seem to be caught in 2021 thinking. AMD initially gave a target price of $1299 for the 6950 XT but dropped that to $1,099 a few days later, probably due to feedback from the press it briefed. Nvidia’s stratospheric pricing on the 3090 Ti might make AMD’s prices look at least somewhat reasonable, but practically speaking, the 6950 would have likely been a $700–$800 card in the pre-pandemic days.

Sapphire’s 6950 XT Nitro+ Pure tacks on another $150 to the reference pricing but also increases the boost clock by about 5% and adds in plenty of bling for good measure. On paper, the reference 6950 XT has 12.5% more memory bandwidth than the reference 6900 XT, while the Game Clock is only 85MHz higher (2100MHz vs. 2015MHz). That’s a small 4.2% improvement in theoretical clock speeds, but as we’ll see later, the factory overclocked Sapphire card tends to clock much higher than the listed Game Clock and even the Boost Clock.

The higher power limit also comes into play. Sapphire didn’t specify an official TBP (Typical Board Power) and only gave a TGP (Total Graphics Power, which covers just the main GPU chip), but all told we’d expect performance to be about 10% higher than the 6900 XT. As we’ll see in a moment, that’s often the case, though there are occasions where the 6950 XT delivers much larger gains.
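The on-paper gains quoted above fall straight out of the spec sheets. A quick sanity check of those percentages, using the reference-card bandwidth from the table and the Game Clocks mentioned earlier:

```python
# Reference RX 6950 XT vs. reference RX 6900 XT, on paper
bandwidth_gain = (576 - 512) / 512 * 100      # 18 Gbps vs 16 Gbps GDDR6 on a 256-bit bus
game_clock_gain = (2100 - 2015) / 2015 * 100  # Game Clock, MHz

print(f"Memory bandwidth: +{bandwidth_gain:.1f}%")  # +12.5%
print(f"Game Clock:       +{game_clock_gain:.1f}%") # +4.2%
```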

  • MORE: Best Graphics Cards
  • MORE: GPU Benchmarks and Hierarchy
  • MORE: All Graphics Content


Jarred Walton is a senior editor at Tom’s Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge ‘3D decelerators’ to today’s GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.


Video card artifacts: what they are, what they look like, and how to fix them (an action plan)


The video card is one of the most important components of any computer or laptop. Like any other component, it can fail over time, and artifacts are usually the first sign of trouble.

In most cases the card can be revived, and below I will explain how to do it.


    Artifacts mostly appear when the graphics core is under heavy load, for example when you play games or run resource-intensive graphics editors and applications (Photoshop, 3ds Max, After Effects, and others).

    However, artifacts may also be present all the time. This indicates a malfunction and the imminent failure of the graphics card. In that case you can try to revive it yourself or take it to a service center (although most often the service will simply reflow the GPU chip, which produces results, but not lasting ones).

    What artifacts look like

    It is very important to be able to distinguish between artifacts and dead pixels on a monitor screen. Dead pixels are white, black or colored dots on the screen. They are immobile and are always present while the monitor is on. Checking their presence is easy, just connect the monitor to another computer or laptop.

    Dead pixels (photo)

    Artifacts are constantly changing their position, they are never in one place.

    What artifacts look like (photo)

    Dividing artifacts into types

    In fact, there are many reasons for this malfunction, but for convenience, I propose to divide them into just a few groups:

    1. Software. These include incorrect card settings, an old or unstable driver version, and sometimes the problem lies in the game or application itself.
    2. Hardware. The problem is on the device side (a detached, desoldered video chip, swollen capacitors, and so on).

    Let’s take a closer look at each group and try to repair the video card on our own.

    Software

    Such artifacts occur only under load (in games or resource-intensive applications that mainly use 3D graphics).

    Such programs create a heavy load on the video card, and it can no longer correctly process the transmitted information and display a complex and rapidly updated image.

    Causes and remedies
    1. If you have an old video card, in order to remove artifacts, it is enough to lower the graphics quality in the settings.
    2. A glitch in the game itself, where the image is distorted by the game rather than the card. As a rule this affects almost every user. To check, install the same game on another computer or laptop, or read reviews, forums, or blogs; a fix is usually described there.
    3. Update the board drivers to the latest version, install the latest game updates, update DirectX, Visual C++ libraries and .Net Framework. Sometimes it helps to install an old driver released at the same time as the board. Updating the video card software must be done correctly, first remove the old one and only then install the new one.

      Link to the NVIDIA website.
      Link to the AMD Radeon website.

    4. Very often artifacts appear after installing fresh drivers. But this already speaks of a hardware failure. Read about it below.
    5. Lower the operating frequency and voltage of the graphics card. Some cards ship with a factory overclock, usually indicated by an "OC" suffix in the name; in such cases artifacts can appear from the very first boot and never go away. The same thing can happen after manual overclocking, when the frequency and voltage are pushed too high. Read our guide on how to overclock a graphics card and apply the same steps in reverse.

      Frequency and power reduction

    6. The problem can also be seen with old cards that have a small amount of memory and only support Shader Model 2.0. True, this is extremely rare.

    Hardware

    Only a few hardware-related causes of artifacts can realistically be fixed (full list below). In most cases you are dealing with a detached ("fallen off") GPU chip, which is only treated by reflowing it, and the result is unpredictable.

    Causes and remedies
    1. Overheating. Check the functionality of the cooler. It often happens that the cooling system becomes clogged and starts to work poorly. Therefore, it must be cleaned periodically. Also remove the cooler from the board and check if the thermal paste underneath has dried out. Replace if necessary.
    2. Physical damage. Some problems are easy to diagnose with a visual inspection of the board: look for swollen capacitors, darkened PCB laminate, scratches or chips.

      Burst capacitors (photo)

      Error code 43 in Device Manager can also point to a power problem. Carefully check whether the card has an auxiliary power connector and make sure the appropriate cable is plugged into it.

      Auxiliary power connection

    3. Insufficient or faulty power supply. If your power supply unit isn’t powerful enough, the card simply may not get the power it needs. A failing PSU can cause the same symptoms and should be repaired or replaced.
    4. If artifacts occur on an integrated video card, the cause lies in the motherboard or processor. If the defects persist even when different known-good cards are installed in the PCI-Express slot, the motherboard is to blame (the slot itself, the capacitors, or the northbridge).

    Checking a video card for artifacts

    Sometimes, after purchasing a new or used video card, it becomes necessary to check its performance. In this respect, several programs have been developed that artificially create a load on the video card, which allows you to identify any existing malfunctions.

    Test at Furmark

    Take advantage of these and run several tests lasting 30-60 minutes. This is the time needed for an objective assessment.

    If the video card starts to artifact during the test, you should think about returning it under warranty (if it is new) or giving it back to the previous owner. Of course, you can also try to fix it yourself using the steps described above.

    If none of the recommendations helped, then most likely the problem is a faulty GPU chip or memory modules. In that case the right move is to contact a service center or buy a new card.


    Types of malfunctions, features of their manifestation and detection



    The reasons for the failure or incorrect operation of video cards can be divided into two types:

    1. Software — associated with the software part of the computer. For example, an incompatible driver version is installed, the OS or software does not support the video card architecture. Also, the operation of the video card is affected by its setting, which is carried out through special software provided by video card manufacturers.

    2. Hardware — connected directly with the equipment. This may be incompatibility of devices, improper installation of equipment in slots, failure of board components, faulty conductors, etc.

    The main reason video cards fail is the physical destruction of one of their elements. Destruction can occur as a result of natural aging, overheating, contamination, physical impact, or unstable power delivery. An unstable video card always affects the operation of the computer.

    Common signs of damage to video cards:

    1. When you turn on the computer, the speaker starts beeping. This indicates that an attempt to initialize the video adapter fails.

    2. The computer does not respond to the power button, or the fans briefly spin up and then stop. This indicates that the video card has a short circuit in its primary power circuits.

    3. The computer is working, you can hear the sounds of the OS loading, but there is no image on the monitor.

    4. The computer is working, but the colors are distorted or there are graphical artifacts on the screen.

    5. Computer restarts intermittently.

    6. When processing graphic information in large volumes and high complexity, the image on the screen gradually begins to be distorted.

    These symptoms can be caused not only by a faulty video card, but also by other system components. Therefore, you should definitely make sure that the problem is really in the video card. This can be done by replacing the video card with a known working one or by connecting to the built-in video card (if any), having previously removed the external one.

    Main faults of video cards:

    1. Video chip

    When you start a computer with a video card whose chip has burnt out, the cooler will most likely spin, but no image will be displayed. A similar symptom can occur when the video chip has come unsoldered from the board.

    Chip desoldering is a breakdown of the connections between the GPU or memory and the board. In this case the repair consists of restoring the BGA mounting, i.e. soldering the GPU or memory back to the board. In some cases the solder balls on which the video chip sits have to be replaced (reballing).

    2. Power supply

    Quite often the failed parts are elements of the switching voltage converter, whose job is to step the standard 12 volts down to the voltages needed by the GPU and memory. The MOSFETs ("keys"), the drivers that control them, and the PWM controller fail frequently. These elements are checked with an oscilloscope; in its absence, the transistors can be checked with a multimeter. In some designs there are no discrete transistors in the usual sense, and BGA-mounted transistor assemblies (the turquoise elements) are used instead, which can significantly complicate the repair. SMD elements also fail often and can usually be identified visually.

    3. Capacitors

    A video card with swollen, exploded capacitors may either prevent the computer from starting at all, or may even lead to reboots or freezes of the computer under load. Sometimes, the video card continues to work stably even with 1 — 2 exploded capacitors, but as a rule this is not for long, and sooner or later it will still fail.

    Swollen or exploded capacitors are replaced with new ones. It is allowed to install capacitors of a larger capacity or voltage, but in no case less.

    4. BIOS

    The BIOS itself rarely fails, the main BIOS problems are usually the result of user actions. The BIOS can be restored by flashing with a programmer, or by replacing the chip.

    5. Chipped GPU die

    A chipped (cracked) GPU die usually results from careless mounting or removal of the cooling system and almost always leads to GPU failure. In some cases the video card continues to work with a chipped die. Such a card is repaired by replacing the chip.

    6. Chipped SMD elements.

    Not infrequently, when mounting/dismantling components, assemblers rip off SMD elements. In such cases, it is necessary to carefully inspect the entire board for chipped elements and, if any, replace them with new ones.

    7. Artifacts.

    Due to a BGA soldering failure or a GPU or memory failure, so-called “artifacts” may appear on the screen. Artifacts are unwanted features of an image generated by a graphics card. Outwardly, they can look like colored dots, stripes, triangles, moiré, color distortions, uneven lines, inconsistent movement of image parts, “gaps” between polygons, and so on.

    Also, artifacts can occur due to overheating or incorrect user actions, such as overclocking, changing timings, flashing BIOS. In rare cases, the appearance of artifacts is caused by problems with GPU or memory power.

    Often the nature of the artifacts points to their cause. Colored stripes or dots are usually the result of a failed BGA solder joint. If the artifacts are dynamic, i.e. the stripes and dots are in constant motion, then the video chip or one of the memory chips is most likely faulty.

    Artifacts also appear due to a defective interface cable. You can check this by moving the cable. If the artifacts disappear, then the problem is in it.

    8. Memory

    If the memory chip fails, dynamic artifacts appear on the screen, or the video card stops working altogether.

    9. One of the RGB colors dropped out.

    Most modern video cards have two video outputs, so the first thing to do is connect the monitor to the other connector. If everything is fine on the second output (that is, the video chip is not at fault), check the resistance on the RGB pins (it should be 75 ohms) and inspect the circuitry around the video output.

    If the surrounding circuitry is fine, look at the switch responsible for selecting the connector. It usually looks like a 16-pin IC; test it with a multimeter for a short to ground.

    10. Hang, BSOD, black screen

    BSODs usually occur due to a faulty or partially desoldered GPU, overheating, or overclocking. It often happens that the card works stably without drivers, but as soon as you try to install them you get a BSOD or the OS stops loading.

    11. Dirty, insufficient cooling

    A dirty cooling system is the most common cause of video card failure. Overheating can lead both to temporary malfunctions (artifacts, BSODs, reboots) that disappear after cleaning, and to irreversible damage to the video card.



    Much faster? Adobe Premiere Pro has learned how to use the GPU of video cards in a new way

    Among other improvements and new features, starting with Premiere Pro version 14.2 it is finally possible to use the graphics card's GPU to encode video when exporting to the final file.

    Why finally?

    The fact is that many other common video editors have long been able to use the GPU for encoding, and it was very strange that the powerful super popular Premiere Pro application was deprived of this function.

    Theoretically, this allows you to significantly, sometimes by several times, reduce the processing time, which, of course, is very important.

    Couldn’t Adobe Premiere Pro use the graphics card’s GPU before?

    Adobe Premiere Pro has been able to use the computing power of GPU video cards for video editing for many years now.

    What was before…

    Activating the long-available Mercury Playback Engine (GPU Accelerated) setting lets you offload to the GPU the processing of various effects: transitions between frames, scaling, cropping, stabilization, contrast adjustment, color correction, noise reduction, sharpening, blur, and many others.

    Effects that can be processed by the GPU are even marked with a special icon in the app.

    Effects that support the operation of the graphics processor of the video card are specially marked

    Also Mercury Playback Engine (GPU Accelerated) improves the smoothness and speed of display in the preview window, which significantly adds comfort in the process of working with the application.

    What now…

    In addition to the functions above, the video card can now also handle the encoding step when exporting video to the final file (File > Export > Media).

    Here we make some clarification. In fact, GPU export (encoding) was available in Premiere Pro before. However, only integrated graphics adapters in Intel processors (current models) that support Intel Quick Sync were supported.

    And in update 14.2, the function became available for NVIDIA and AMD video adapters.

    As a result, the GPU, on the one hand, is involved in rendering frames, applying various video effects, and then also helping to encode it into the final file.

    Enable GPU support in Adobe Premiere Pro

    First, make sure Mercury Playback Engine (GPU Accelerated) is selected in the project settings (File > Project Settings > General).

    Now the graphics card can help with rendering and applying various video effects.

    Let’s move on to the new features introduced in Adobe Premiere Pro 14.2.

    Enable hardware-accelerated encoding and decoding (Edit > Preferences > Media).

    Once your editing is done, in the export settings (File > Export > Media) select the H.264 or HEVC (H.265) codec format.

    After that, you will be able to select Hardware encoding for hardware video encoding using the graphics card.

    An exact list of compatible GPU models for video cards could not be found at the time of preparation of the article.

    But, presumably, in the case of NVIDIA, export encoding support will be available on video cards from GTX 1050 and higher (must contain the appropriate NVENC encoding block).

    With AMD, it looks like the feature will work on modern professional video adapters, and as for the AMD Radeon RX 500 and RX 5000 gaming series, it’s difficult to answer here.

    Oh, and one more thing… Mercury Playback Engine (GPU Accelerated) settings and export hardware encoding settings are not related.

    Let’s test

    What speed gain does using the GPU give when encoding in practice?

    Configuration of the test platform:

    • Processor: AMD Ryzen 7 2700 (locked at 3700 MHz on all cores)
    • Motherboard: ASUS ROG STRIX X470-F GAMING
    • Video card: GTX 1070 AERO 8G OC

    • RAM: 2×8 GB DDR4 (Kingston HyperX FURY DDR4 RGB HX432C16FB3AK2/16)
    • Operating system: Windows 10 Pro 64 bit

    On our system, when encoding a 4K 60 fps video without any effects applied, exporting the file with GPU encoding enabled is almost three times faster (2.9 times, to be more precise).

    Approximately the same results were obtained with our settings both when using the H.264 codec and when using HEVC (H.265).

    Yes, the result is really great. The difference is very significant.

    We decided to experiment a little and see how the situation would change if, for example, we changed the «balance of power» between the processor and the video card.

    To mimic the installation of a weaker CPU on our 8-core (8 cores, 16 threads) AMD Ryzen 7 2700, we disabled half the cores and turned it into a 4-core (4 cores, 8 threads) chip.

    By the way, in both configurations, the frequency for all CPU cores was fixed at 3700 MHz.

    Using the weaker CPU increased processing time and widened the gap between the results with and without hardware (GPU) encoding to as much as 3.5 times.

    What if we bring the project a little closer to reality and "complicate" it by adding various effects such as color correction, sharpening, and so on?

    Encoding a project with a number of additional effects

    There is still a gain from using the GPU, but not as big as before. In even more complex projects, the difference can be even smaller.

    Also note the difference in CPU and GPU loads.

    The processor (CPU) is almost 100% loaded, while the GPU shows a relatively low 14% load (it would be about zero if the Mercury Playback Engine (GPU Accelerated) was disabled).

    As you might guess, hardware encoding is not used by the video card.

    After activating Hardware coding, the GPU load rises to 36%, and the CPU drops from almost maximum to 54%.

    To recap

    Adobe Premiere Pro finally has a feature that was available in many other popular video editors.

    Deeper use of the computing power of the video card can really improve work efficiency and save time.

    How much time you can get from using the GPU when encoding in Premiere Pro is highly dependent on the system configuration and features of the project being edited.

    evo
    Test Lab Engineer


    2021 has become a truly memorable year for the crypto industry: entire power grids buckling under cryptocurrency mining, a real bull run for all coins including BTC, and miners snapping up the PC component market. All this happened in a little over six months, and the rest of 2021 promises to be even hotter. The network is abuzz with talk that the most interesting part is still ahead and that BTC is headed to the moon, $100,000 apiece! Whether that is true or not is not yet clear. But one thing is clear: cryptocurrency seriously affects the usual workings of the modern economy. And it is not even a matter of which means of payment are used, where, and how. Even when producing quite familiar goods, large players are forced to adapt to the trends dictated by the crypto industry.

    Of course, we will first of all talk about the video card market and how the whales are struggling with the shortage that has arisen. Everyone is excited about the sharp and excessive increase in the price of PC components. But it was powerful GPU processors that were targeted by fans of decentralized money. Thanks to the sharp rise in prices for virtual assets, the ease of mining new coins and the understanding that crypto is the future, there are no video cards left on store shelves.

    According to Nvidia, dividing its products into "specialized for miners" and gaming hardware will help fight this problem. Since the beginning of the year there have been attempts to cool miners' enthusiasm with technical restrictions on GeForce RTX 3060 video cards, but at first they were unsuccessful: the protection was bypassed in two ways. Now the computer giant has rolled out a more serious hardware limitation called Lite Hash Rate, or simply LHR. Such video cards cannot reach the expected MH/s when mining Ethereum and other Ethash coins, and those are the main assets mined on GPUs. But the difficulty is not even in this.

    The trickiest question arises when you start choosing a video card. It seems like a good plan: during the day you work or play on such powerful hardware, at night you mine. You just need to find a card that is not from the LHR series, and that's it. But once you get to the store, you realize the task is not easy and requires some knowledge. The thing is that manufacturers are not obliged to mark LHR batches in any way, and with that gesture they seem to be saying that their products were created for other purposes.

    But that’s not a problem at all. Now we will learn how to distinguish video cards we need from not quite suitable ones and determine the most accurate and rational way to determine the absence of LHR on a component.

    1st method: by name

    To begin with, it is worth remembering one important circumstance: if you decide to get a brand new RTX 3090, then the following will not concern you. They do not have any restrictions and are ideal for both mining and other purposes.

    As for LHR on Nvidia's RTX 30-series, it is important to understand that even cards with this protection are suitable for cryptocurrency mining. Read our other article on the site about how to reach the desired MH/s with LHR; the method is very simple.

    As mentioned earlier, the manufacturer is not obliged to indicate this feature on the product. But often such an indication is made. You need to know, for example, that for 3060 no markings are made at all, although it has, in fact, two types of mining blocking. Both software and hardware. Apparently, this is why Nvidia decided not to bother, the video card was originally not for miners!

    If we consider the Nvidia RTX 3070 Ti and RTX 3080 Ti as a purchase option, then they will all come with LHR without exception.

    Let’s take the RTX 3060Ti, RTX 3070 and RTX 3080 series. These video cards can be either with LHR or do not belong to the new revision, when there was no mining limiter yet. In this series, the manufacturer tries to indicate the features of the product in the name itself, so that there are no angry reviews on the Internet later.

    Based on Nvidia's current policy, the only thing that can be stated unequivocally is that if "LHR" is written on the box, the hash rate will be limited. If there are no such marks, you need another way to assess what the hardware can do.

    2nd method: check by GPU ID

    The essence of the method, I think, is clear from the name itself. It remains only to decide how this same GPU ID can be found out?

    This is quite easy to do: install the card in a PC and run CPU-Z. On the Graphics tab, the Code Name line shows the GPU ID, and from it you can tell whether the card has the hardware mining limiter or not.

    This method will not work when buying a video card via the Internet. After all, for its implementation, you need the ability to install it on a computer. This method is ideal for the secondary market or when buying in retail chains, if you manage to negotiate with the seller. Another big plus is that this verification method is the most accurate. It will be impossible to make a mistake here.

    | Video card  | GPU ID    | LHR                   |
    |-------------|-----------|-----------------------|
    | RTX 3060    | GA106-300 | Software limiter only |
    | RTX 3060    | GA106-302 | LHR                   |
    | RTX 3060 Ti | GA104-200 | No                    |
    | RTX 3060 Ti | GA104-202 | LHR                   |
    | RTX 3070    | GA104-300 | No                    |
    | RTX 3070    | GA104-302 | LHR                   |
    | RTX 3070 Ti | GA104-400 | LHR                   |
    | RTX 3080    | GA102-200 | No                    |
    | RTX 3080    | GA102-202 | LHR                   |
    | RTX 3080 Ti | GA102-225 | LHR                   |
    | RTX 3090    | GA102-300 | No                    |
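    If you check cards regularly, the table above is easy to turn into a small lookup. A minimal sketch with the table data as shown; any ID that isn't listed is simply treated as unknown:

```python
# GPU ID -> mining-limiter status, per the table above
LHR_STATUS = {
    "GA106-300": "software limiter only (RTX 3060)",
    "GA106-302": "LHR (RTX 3060)",
    "GA104-200": "no LHR (RTX 3060 Ti)",
    "GA104-202": "LHR (RTX 3060 Ti)",
    "GA104-300": "no LHR (RTX 3070)",
    "GA104-302": "LHR (RTX 3070)",
    "GA104-400": "LHR (RTX 3070 Ti)",
    "GA102-200": "no LHR (RTX 3080)",
    "GA102-202": "LHR (RTX 3080)",
    "GA102-225": "LHR (RTX 3080 Ti)",
    "GA102-300": "no LHR (RTX 3090)",
}

def lhr_status(gpu_id: str) -> str:
    """Return the limiter status for a GPU ID (as shown in CPU-Z's Code Name field)."""
    return LHR_STATUS.get(gpu_id.strip().upper(), "unknown, check the manufacturer's spec sheet")

print(lhr_status("GA104-302"))  # LHR (RTX 3070)
```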

    3rd method: by manufacturer designation

    When such an innovation as LHR came to the video card market, manufacturers could not help but make adjustments to the designations of models produced from their assembly line. This method has something in common with the first, so it has similar disadvantages.

    So, if we talk about ASUS, then the company marks video cards “not for mining” using the designation V2;

    Their Gigabyte competitors designate a Rev. 2.0;

    There are those who do not force you to solve puzzles. Manufacturers MSI and Zotac simply write LHR on the boxes;

    Palit puts V1 on LHR video cards;

    EVGA has the designation “KL”, and KFA2 writes KCK in the model code.

    With the companies that do not write LHR outright, there is a chance of getting an unrestricted card even when those markings are present, but you should not count on it.

    4th method: serial number

    LHR only appeared in May 2021, so if the video card's serial number points to an earlier production date, it is guaranteed not to have the hardware mining limiter. Depending on the manufacturer, the serial number can sometimes reveal the date of manufacture.

    For example, when buying a Gigabyte product, you will see a number in this format: SN 21261101001010, where 21 is the year and 26 is the week of the specified year. Therefore, such a video card will most likely be with a mining limitation.

    As another example, consider video cards from Palit. Their serial numbers look like BN 20 015444: here the letter N denotes the month (November) and the number 20 the year, so this card is from November 2020 and has no mining limiter. Months from January to September are marked with the digits 1 through 9 instead of letters.
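    As a rough illustration of the Gigabyte scheme described above (year and week encoded in the first four digits after "SN"), here is a small decoder. The May 2021 cutoff comes from the paragraph above, and the format assumption only covers the example shown, so treat this as a sketch rather than a universal parser.

```python
from datetime import date

LHR_CUTOFF = date(2021, 5, 1)  # hardware LHR first appeared in May 2021

def gigabyte_serial_date(serial: str) -> date:
    """Decode a Gigabyte-style serial ('SN' + YY + WW + ...), per the example above."""
    digits = "".join(ch for ch in serial if ch.isdigit())
    year, week = 2000 + int(digits[:2]), int(digits[2:4])
    return date.fromisocalendar(year, week, 1)  # Monday of that production week

def may_have_lhr(manufactured: date) -> bool:
    return manufactured >= LHR_CUTOFF

made = gigabyte_serial_date("SN 21261101001010")
print(made, may_have_lhr(made))  # 2021-06-28 True: week 26 of 2021, so likely an LHR card
```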

    ***

    Other manufacturers will also transfer product information in the serial number of the video card. On the Internet you can find information on any of them. The features of serial code generation will tell you whether to take a GPU processor for mining or not.

    Combining at least 2 of the 4 named methods, you can determine with a high degree of probability (and often with one hundred percent) whether the machine is suitable for mining virtual money. But LHR is not as scary now as it was a month ago, and in another month there is a possibility that Ethereum miners will not care at all on LHR.

    You can read more about unlocking LHR in our previous article

    Evgeny Serov

    December 30, 2018

    This is the first article in a series dedicated to gaming broadcasts — a topic that our readers have repeatedly asked to analyze.

    In subsequent articles, we will try to find the correct answer to the question “Which hardware is better?” and “What are the best quality settings to use?”. This article will focus on settings — we will learn which encoding settings offer the best performance/quality ratio, and how the various popular modes differ from each other.

    First key topic: which encoding method is better — CPU software encoding or video card hardware accelerated encoding…

    To begin with, before we go directly to testing, let’s talk about the test platform.

    The first question that concerns us is which encoding method is better: software encoding of the processor or encoding with hardware acceleration of the video card. This is really important, because if the encoding of the video card is better, then the processor will not be so important for gaming broadcasts, and if it is the other way around, then the processor becomes the most important element for obtaining a high level of quality, not only in the matter of broadcasts, but also in the final gaming performance.

    In the past few months, graphics card encoding has been taken to the next level with Nvidia updating the hardware encoding engine on graphics cards with the new Turing architecture.


    Photo from Techspot.com

    On the new graphics cards, a lot of attention has been paid to improving performance and improving HEVC compatibility, which is not very important for streaming. The new Turing architecture engine offers a 15% improvement in H.264 video quality compared to the previous generation of Pascal graphics cards (GTX 10xx series). We will definitely pay attention to this, and at the same time see how Turing works with x264 software encoding. So, in the tests, we will use the RTX 2080 graphics card to see how the Turing encoding works, Titan X Pascal for Pascal graphics card tests, and Vega 64 to see how things go for AMD.

    In the second part of the study, we will look at software encoding with x264 at various settings. We will leave the comparison of software encoding on different processors for another article — in this one it is more interesting for us to understand how each of the settings affects performance and quality.


    Photo from Techspot.com

    All tests were performed on a Core i7-8700K overclocked to 4.9 GHz with 16 GB of DDR4-3000 RAM. This is the platform we recommend for gaming at maximum settings. In the future we also plan to find out how the 9900K fares against AMD Ryzen processors.

    We are using the latest version of OBS for capturing, set to record in 1080p at 60 fps at a constant bit rate of 6000 kbps. These are the highest quality settings recommended by Twitch. If you are going to record gameplay for other purposes, then we recommend that you increase the bitrate, but to stream on Twitch, you need to have 6 Mbps or less if your channel is not connected to an affiliate program.
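    For context on why 6 Mbps is such a tight budget for 1080p at 60 fps, one rough yardstick (our own illustration, not part of the original test methodology) is bits per pixel: the bitrate spread over every pixel the encoder has to produce each second. A common rule of thumb puts comfortable H.264 quality somewhere around 0.1 bits per pixel, and a 6 Mbps 1080p60 stream sits well below that, which is why the choice of encoder and settings matters so much here.

```python
def bits_per_pixel(bitrate_kbps, width, height, fps):
    # Average number of bits the encoder can spend on each pixel it outputs per second
    return bitrate_kbps * 1000 / (width * height * fps)

# Twitch-style 1080p60 stream at a constant 6000 kbps
print(f"{bits_per_pixel(6000, 1920, 1080, 60):.3f} bits per pixel")  # ~0.048
```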

    For testing, we use Assassin’s Creed Odyssey, which is very demanding on the processor and video card, and therefore, it has certain problems with software encoding through the processor. The second game will be Forza Horizon 4, a slightly less CPU intensive but fairly fast game that can be problematic at low bitrates. Both games are not the best choice for gaming broadcasts, but each is interesting in its own way for our tests.

    Let’s start with encoding with video cards, because for many years there were huge problems with it. What we’re most interested in is whether Turing was able to fix the bugs of its predecessors, which made it almost impossible to use coding.

    On Nvidia graphics cards, we used NVENC in OBS and selected “High Quality” at 6 Mbps bitrate. Of course, there are other add-ons, but «High Quality» produces, as you might guess, the highest quality. On AMD’s Vega 64 graphics cards, we tried many different settings (both overall quality and bitrate), but without much success, as you will soon see for yourself.


    Photo from Techspot.com

    Comparing the NVENC settings on Turing and Pascal video cards, we can say that there is almost no difference at a bit rate of 6 Mbps. In both cases, there is a problem with macroblocks remaining in the picture, and in general the quality leaves much to be desired. Speaking specifically about Forza Horizon 4, macro blocking is most noticeable on the road and looks terrible. In Turing, of course, the picture is a little clearer, and macroblocks get out less often, but in general one thing can be said — both options are disgusting. If you’re going to be in the business of streaming games, this is not the level of quality that you can impress viewers with.


    Photo from Techspot.com

    For AMD, the situation is even worse — when the GPU load approaches 100%, then encoding simply breaks down completely and produces no more than 1 frame per second, which did not happen with Nvidia video cards. We were able to run the encoder with a framerate limit, which reduced the GPU load to about 60% in Forza Horizon 4, but even with the “Quality” add-ons, Vega 64 produced a picture worse than Nvidia Pascal cards.


    Photo from Techspot.com

    With the fact that AMD’s encoder «fell off» at the very beginning, let’s look at the confrontation between Nvidia’s NVENC and x264 processor software coding. In Assassin’s Creed Odyssey’s slower performance test, NVENC even at «High Quality» is noticeably worse than x264 with «Veryfast» add-ons, especially when comparing fine details, although both use a 6 Mbps bitrate. The Veryfast x264 isn’t perfect, but against the backdrop of NVENC Turing cards with lots of macro blocking and fuzzy details, it looks like a clear winner.


    Photo from Techspot.com

    In Forza Horizon 4's faster-paced test, NVENC on Turing cards beats the x264 veryfast add-on in places. Nvidia's variant still suffers from macroblocking, but veryfast has huge issues with the quality of detail in motion. In a game with this much movement, NVENC is about equal in quality to "faster" x264. However, the "fast" x264 add-on handles moving objects much better than NVENC, and beats it outright in scenes where there is minimal or no movement on screen at all.


    Photo from Techspot.com

    These results were quite surprising, especially considering the fact that Nvidia has promised that the new NVENC engine on Turing cards runs H.264 at x264 fast add-on level, if not better when streaming in 1080p quality at 60fps and bitrate 6 Mbps. But if you look at the test in Assassin’s Creed Odyssey, you can see a completely different thing — software coding is simply better.


    Photo from Techspot.com

    Speaking of x264 software encoding add-ons, there is a pretty noticeable difference between each (veryfast, faster, fast and medium). In the slow Assassin’s Creed Odyssey (if you leave out the performance problems with each add-on for now) — veryfast and faster do not give the best picture: a lot of blurry frames, macro blocking in some areas and poor processing of details in motion.

    These two add-ons are best left for those cases where quality is not particularly important, since at 6 Mbps the image is very mediocre.


    Photo from Techspot.com

    The fast add-on is the bare minimum that you should use if you really want to provide your viewers with a quality picture. The difference in quality between faster and fast is quite noticeable, because previously blurred details look quite clear.

    Medium is another step forward, but the difference in quality between fast and medium is smaller than between faster and fast. As you’ll see later, medium is a fairly performance-heavy add-on, so running it on the same system the game is running is clearly not worth it. In addition, we checked the slow add-on, but it is even worse there — such a strong performance hit is clearly not worth it.


    Photo from Techspot.com

    For fast movement in Forza Horizon 4, again, you should immediately forget about veryfast, because in the case of similar games it is even worse than NVENC. Unfortunately, due to the 6 Mbps bitrate, any add-on will be far from the original material, but medium will visually be the closest to it, and it looks much better than with fast.

    With faster, as mentioned above, everything is terrible, so there is no point in using anything below fast for this type of game. I would like to note that medium would work best at a higher bitrate, but Twitch has limitations, so 6 Mbps is our bitrate ceiling.

    Performance

    Image quality is only the first half of our equation. The second half will be performance. When you’re streaming a game from the same computer you’re playing on, it’s important that the performance of both the game and the stream is adequate.

    Let’s start with graphs of the impact of encoding with a video card on performance…

    By enabling NVENC on Pascal or Turing cards, you will lose approximately 10-20% fps, depending on the game. In other words, between streaming with NVENC and streaming off, there will be a 10-20% performance difference. However, the more the game depends on the graphics card, the more NVENC will hit performance. That’s why Forza Horizon 4 drops more frames than CPU-dependent Assassin’s Creed Odyssey.

    But there is good news too! Even if you play at a slightly lower frame rate when using NVENC, the broadcast itself will have a perfect picture with no dropped frames, even if the game loads the video card to 100%. The encoding engine of AMD cards does not affect game performance as much, but under heavy GPU load the broadcast's frame rate drops by about 90%, which, as we mentioned earlier, makes it useless.

    Performance in program encoding mode varies by game. In the case of the CPU- and GPU-intensive Assassin’s Creed Odyssey, using CPU software encoding to stream can have a negative impact on frame rates, and high-quality add-ons may not be able to keep up.

    On a system with a Core i7-8700K and an RTX 2080, we ran Odyssey with its own specific graphics settings, but the game ran stable (no drop in framerate on stream) only on the x264 veryfast encoding add-on. X264 veryfast also hit frame rates by about 17%, which is even more than NVENC. However, veryfast still looks better than NVENC for such a game, so the small performance hit is worth it.

    Meanwhile, already on the faster add-on, you can notice a deterioration in the quality of the broadcast. Although it was only 8.5%, the broadcast was difficult to watch with the resulting picture, it was jerky. In addition, the game’s frame rate has dropped from an average of 90 to 63, and the very minimum has dropped to almost 30. Here you can clearly see that the add-on is overloading the system. With fast and medium, the situation is even worse — they have a decrease in the number of frames by 62% and 82%, respectively. The most interesting thing is that the frame rate in the game on such add-ons is higher than on faster, but perhaps this is due to the fact that the encoder is overloaded, as a result of which a little more processor power is allocated to the game.

    One way to improve performance is to cap the game's frame rate. A cap of 60 fps makes sense, since the broadcast is limited to 60 frames per second anyway. Even with the cap, though, things do not improve much: the fast preset still drops about 9% of stream frames, while faster no longer drops any, although the game did dip briefly to around 40 fps. The only way to use fast reliably here would be to lower the graphics settings and try again, but this article is not about optimizing Assassin's Creed for streaming on our hardware.
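    The reason a frame cap helps is simple: the render loop spends the unused part of each 1/60 s frame budget idle, and that CPU time becomes available to the encoder. The toy sketch below illustrates the mechanism only; it is not game or OBS code, and render_frame is a hypothetical stand-in for the game's per-frame work.

```python
# Conceptual 60 fps cap: sleep away whatever is left of each ~16.7 ms frame
# budget, leaving that CPU time free for x264.
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS

def run_capped(render_frame, duration_s=5.0):
    end = time.perf_counter() + duration_s
    while time.perf_counter() < end:
        start = time.perf_counter()
        render_frame()                          # the game's work for one frame
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # headroom handed back to the system
```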

    In the second part of this study it will be interesting to see how other processors fare. For now, the 8700K, a popular high-end gaming CPU, gives a good picture of what it takes to broadcast a game that is extremely demanding on both the processor and the graphics card. Weaker processors, especially Intel's lower-core-count parts, will mostly manage fine on the veryfast preset.

    The less CPU-dependent Forza Horizon 4 is an interesting case, because CPU software encoding actually delivers better performance than GPU hardware encoding. This happens because the processor has spare headroom that can be spent on encoding without eating into the video card's performance.

    The x264 veryfast preset reduced performance by only 6% (measured by minimum frames per second), and the gap between veryfast and fast was just 5%, even though encoding on the fast preset requires noticeably more CPU power.

    On the stream itself we saw no dropped frames with the veryfast and faster presets, but with fast the stream's frame rate fell by about 12%, so it stuttered periodically. Since the game was running at 120 frames per second, it is easy to cap it at 60 and reduce the load on the processor. With that cap in place, the fast preset works with no dropped broadcast frames. The cap also lets us try medium, but even with our 8700K the stream still dropped about 2% of its frames, which is not good. If we planned to keep working with the medium preset, we would have to dig into the graphics settings to reduce the CPU load further.

    Preliminary results

    Several interesting conclusions can be drawn from these tests. We learned that Turing's H.264 encoding engine is not much better than Pascal's (despite claims to the contrary), and that encoding on the video card is still not a real option for streaming.

    The only case where I would recommend NVENC is fast-paced games on systems that cannot handle CPU encoding at the x264 faster preset or better. For slower games, use x264 veryfast instead of NVENC. Most PCs built with streaming in mind can handle veryfast.

    AMD's encoding engine, to put it mildly, needs serious work before it can be considered at all. It cannot cope with a loaded video card, and when it does manage to work, the quality is simply terrible.


    Photo from Techspot.com

    With processors, the situation is a bit more complicated, because which presets you can handle depends on both the CPU and the specific game. On our 8700K system the spread looked like this: in the CPU-heavy game we had to settle for veryfast, while in the less demanding game we could use fast or even medium and still get a stable 60 frames per second in good quality.

    Streamers should use at least the fast preset, as it is the lightest preset that still produces good quality at a 6 Mbps bitrate. It is not ideal for fast scenes, but it looks far better than faster and veryfast while remaining within reach of mid-range systems. If you have very powerful hardware you can try medium, but it is best not to touch the slower presets.

    While playing and streaming on a single PC is convenient, this study is aimed more at beginner streamers or those with a flexible schedule. Anyone broadcasting professionally or as a job should have a second, dedicated encoding computer with a good capture card and processor. That takes the entire load off the main machine and makes it possible to use medium, slow and even slower presets, i.e. to get the best quality without sacrificing performance.

    We have looked at the best presets in terms of quality; in the next article we will try to work out which processors can actually encode on them. Stay tuned!

    Intel specialists on the Arc Alchemist gaming video card, the analogue of DLSS, and plans for the future (Hardware on DTF)

    On August 16, 2021, Intel announced its Arc brand of discrete graphics cards, with the first model due in early 2022. Later, at its Architecture Day event, the company shared details of the Xe-HPG architecture behind the Arc Alchemist gaming GPU, including hardware ray tracing, its proprietary Xe Super Sampling (XeSS) upscaling technology, and XMX matrix engines.

    On August 24, Digital Foundry spoke with Intel vice president Lisa Pearce and engineer Tom Petersen about what to expect from Alchemist, how open the XeSS technology will be, and how the company sees the future of the graphics card market.

    The Arc Alchemist video card

    • Intel representatives did not share any details about the adapters' performance; the company will disclose that information gradually in the run-up to launch. However, Petersen stressed that Alchemist is a competitive adapter that supports all the technologies modern games need.
    • Alchemist uses TSMC's N6 process, an enhanced version of its 7nm node that packs transistors about 15% denser than standard 7nm. Being the first to use such chips in graphics cards should give Intel some edge over the competition.
    • Intel believes that focusing on dedicated machine learning and ray tracing hardware engines is the natural way forward. Dedicated engines allow much more efficient ray tracing than compute shaders, Petersen says.
    • The company will optimize and ensure the compatibility of its adapters not only with the latest games and APIs, but also with legacy ones, including those using DirectX 9 and 11 graphics APIs.
    • The Alchemist graphics card will have unique software features, but the company will reveal details about them closer to launch.

    XeSS technology, Intel's analogue of DLSS

    • Xe Super Sampling (XeSS) is an intelligent upscaling technology that reconstructs sub-pixel image detail from adjacent pixels and from previous, motion-compensated frames. Intel says it can roughly double performance at higher resolutions without a noticeable loss in image quality.
    • The XeSS model is trained not on a single game but on several. According to Petersen, this is one of the best approaches, since the algorithm does not have to be retrained from scratch for every game.
    • XeSS can run on the XMX matrix engines arriving with the Xe-HPG graphics architecture, or via the DP4a instruction set on other architectures. The XMX path gives the better upscaling quality and the bigger performance gain.
    • Image reconstruction takes into account the motion vectors of the current frame's pixels and information from previous frames, giving Intel's solution more data to build each new frame (a toy sketch of this general idea follows the list below).
    • XeSS does not use Microsoft's DirectML library to accelerate its AI workload. Intel uses its own solution for the high-performance path, since traditional shaders are not optimized for an architecture with XMX engines.
    • Intel Supersampling will support various configurations, including quality and performance modes.
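    XeSS itself is a trained neural network running on XMX or DP4a hardware, and Intel has not published its internals. Purely to illustrate the "motion vectors plus previous frames" idea mentioned above, here is a toy NumPy sketch of naive motion-compensated temporal upsampling; every name and the blend weight are illustrative assumptions, not Intel's algorithm.

```python
# Toy temporal upsampling: warp the previous high-res frame along per-pixel
# motion vectors, then blend it with a crude upscale of the current low-res frame.
import numpy as np

def reproject(prev_hi, motion):
    # motion: (H, W, 2) pixel offsets mapping current-frame pixels back to the previous frame
    h, w = prev_hi.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + motion[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + motion[..., 0]).round().astype(int), 0, w - 1)
    return prev_hi[src_y, src_x]

def temporal_upsample(curr_lo, prev_hi, motion, alpha=0.1):
    scale = prev_hi.shape[0] // curr_lo.shape[0]
    # Nearest-neighbour upscale stands in for XeSS's learned reconstruction.
    curr_hi = np.repeat(np.repeat(curr_lo, scale, axis=0), scale, axis=1)
    history = reproject(prev_hi, motion)
    # Blend fresh detail with reprojected history; XeSS uses a neural network here.
    return alpha * curr_hi + (1.0 - alpha) * history
```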

    Intel Strategy

    • The company sees plenty of new opportunities in the discrete graphics market. The Xe-HPG microarchitecture is scalable, so it has good prospects not only in consumer products but also in data centers.
    • The Intel Iris Xe (DG1) graphics accelerator laid the foundation for the company's discrete cards: Intel used it to run a series of tests and to get its software stack ready. Alchemist, in turn, marks the launch of a line of high-performance adapters. The first model will be followed by others whose names the company has already announced: Battlemage, Celestial and Druid.