
Nvidia GeForce RTX 4090 | TechRadar

TechRadar Verdict

The Nvidia GeForce RTX 4090 is an absolute unit of a graphics card that features an astounding gen-on-gen performance jump without a proportional jump in price, making it the best graphics card on the enthusiast scene, hands down.

  • + Jaw-dropping performance

  • + DLSS 3 is game changing

  • + Creatives will absolutely love it

  • + Outstanding value for a luxury card

  • - Still very expensive

  • - 16-pin connector will test your cable management skills

  • - Gamers probably better off with RTX 4080

Why you can trust TechRadar
Our expert reviewers spend hours testing and comparing products and services so you can choose the best for you. Find out more about how we test.

Nvidia GeForce RTX 4090: two minute review

Well, the Nvidia GeForce RTX 4090 is finally here, and there’s no question that it delivers on many of the lofty promises Nvidia made ahead of its launch, with a stunning gen-on-gen performance improvement that is more akin to a revolution than an advance.

That said, you won’t find four-times performance increases here, and only in some instances will you see a 2x increase in performance over the Nvidia GeForce RTX 3090, much less the Nvidia GeForce RTX 3090 Ti. A 50% to 70% increase in synthetic and gaming performance should be expected across the board, though, with very rare exceptions where the GPU runs too far ahead of the CPU.

On the creative side of things, this card was made to render, completely lapping the RTX 3090 in Blender Cycles performance, which makes this the best graphics card for creatives on the market, period.

On the gaming side, this is the first graphics card to deliver fully native 4K ray-traced gaming performance at a very playable framerate, without the need for DLSS, showing the maturity of Nvidia’s third-generation ray tracing cores.

Even more incredible, Nvidia’s new DLSS 3 shows even greater promise, delivering substantially faster framerates than the already revolutionary DLSS 2.0. And while we did not test DLSS 3 as extensively as we did the RTX 4090’s native hardware (for reasons we’ll explain in a bit), from what we’ve seen, Nvidia’s new tech is probably an even more important advance than anything having to do with the hardware.

On the downside, the card does require even more power than its predecessor, and when paired with something like the Intel Core i9-12900K, you’re going to be pulling close to 700W of power between these two components alone. Worse still, this additional power draw requires some very strategic cable management to practically use, and for a lot of builders, this is going to be a hard card to show off in a case with a bundle of PCIe cables in the way.

The price has also increased over its predecessor, though given its incredible performance and the price of the previous graphics card champ, the RTX 3090 Ti, the RTX 4090 offers far more performance for the price than any other card on the market other than the Nvidia GeForce RTX 3080 and Nvidia GeForce RTX 3080 Ti. So even though the Nvidia RTX 4090 is a very expensive card, what you are getting for the price makes it a very compelling value proposition if you can afford it.

In the end, the Nvidia GeForce RTX 4090 is definitely an enthusiast graphics card in terms of price and performance, since the level of power on offer here is really overkill for the vast majority of people who will even consider buying it. That said, if you are that enthusiast – or if you are a creative or a researcher who can actually demonstrate a need for this much power – there isn’t much else to say but to buy this card.

It is more powerful than many of us ever thought it could be, and while I’d definitely argue that the forthcoming Nvidia GeForce RTX 4080 is likely to be the much better purchase for gamers and even the majority of creatives out there, the RTX 4090 was always going to be a card for the early adopters out there, and it will pretty much give you everything you could want in an enthusiast graphics card.


Nvidia GeForce RTX 4090: Price & availability

(Image credit: Future)

  • How much is it? MSRP listed at $1,599 (about £1,359, AU$2,300)
  • When is it out? It is available October 12, 2022.
  • Where can you get it? Available in the US, UK, and Australia.

The Nvidia GeForce RTX 4090 goes on sale worldwide on October 12, 2022, with an MSRP of $1,599 in the US (about £1,359/AU$2,300).

This is $100 more than the MSRP of the RTX 3090 when it was released in September 2020, but is also about $400 less than the MSRP of the RTX 3090 Ti, though the latter has come down considerably in price since the RTX 4090 was announced.
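The pricing comparisons above are easy to sanity-check in a few lines of Python; the RTX 3090's $1,499 launch MSRP is implied by the "$100 more" figure:

```python
# Launch MSRPs in USD, as quoted in this review.
rtx_4090 = 1_599
rtx_3090 = 1_499      # September 2020 launch price
rtx_3090_ti = 1_999

print(rtx_4090 - rtx_3090)     # $100 more than the RTX 3090
print(rtx_3090_ti - rtx_4090)  # $400 less than the RTX 3090 Ti
```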

And while this is unquestionably expensive, this card is meant more as a creative professional’s graphics card than it is for the average consumer, occupying the prosumer gray area between the best gaming PC and a Pixar workstation.

Of course, third-party versions of the RTX 4090 are going to cost even more, and demand for this card is likely to drive up the price quite a bit at launch, but with the crash of the cryptobubble, we don’t think we’ll see quite the run-up in prices that we saw with the last generation of graphics cards.

Finally, one thing to note is that while this is an expensive graphics card, its performance is so far out ahead of similarly priced cards, that it offers a much better price to performance value than just about any other card out there, and it is far ahead of its immediate predecessors in this regard. Honestly, we don’t really see this kind of price-to-performance ratio outside of the best cheap graphics cards, so this was definitely one of the biggest surprises coming out of our testing.


  • Value: 4 / 5

Nvidia GeForce RTX 4090: features & chipset

(Image credit: Future)

  • 4nm GPU packs in nearly three times the transistors
  • Substantial increase in Tensor Cores
  • Third generation RT Cores

Nvidia GeForce RTX 4090 key specs

GPU: AD102
CUDA cores: 16,384
Tensor cores: 512
Ray tracing cores: 128
Power draw (TGP): 450W
Base clock: 2,235 MHz
Boost clock: 2,520 MHz
Bandwidth: 1,018 GB/s
Bus interface: PCIe 4.0 x16
Outputs: 1 x HDMI 2.1, 3 x DisplayPort 1.4a
Power connector: 1 x 16-pin

The Nvidia GeForce RTX 4090 features some major generational improvements on the hardware front, courtesy of the new Nvidia Lovelace architecture. For one, the AD102 GPU uses TSMC’s 4nm node rather than the Samsung 8nm node used by the Nvidia Ampere GeForce cards.

The die size is 608mm², so a little bit smaller than the 628mm² die in the GA102 GPU in the RTX 3090, and thanks to the TSMC node, Nvidia was able to cram 76.3 billion transistors onto the AD102 die, a 169% increase in transistor count over the GA102’s 28.3 billion.
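A quick back-of-the-envelope check of those gen-on-gen figures, using only the numbers quoted above:

```python
# Transistor counts and die sizes quoted in this review.
ad102_transistors = 76.3e9  # RTX 4090, TSMC 4nm-class node
ga102_transistors = 28.3e9  # RTX 3090, Samsung 8nm

increase = ad102_transistors / ga102_transistors - 1
print(f"{increase:.1%}")  # 169.6%, i.e. the ~169% quoted above

# Transistor density (per mm²) improved even more than raw count suggests,
# because the AD102 die is slightly smaller.
density_gain = (76.3 / 608) / (28.3 / 628)
print(f"{density_gain:.1f}x denser")  # 2.8x denser
```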

The clock speeds have also seen a substantial jump, with the RTX 4090’s base clock running at a speedy 2,235 MHz, compared to the RTX 3090’s 1,395 MHz. Its boost clock also gets a commensurate jump, up to 2,520 MHz from 1,695 MHz.

Its memory clock is also slightly faster at 1,325 MHz, up from 1,219 MHz, giving the RTX 4090 a faster effective memory speed of 21.2 Gbps versus the RTX 3090’s 19.5 Gbps. This lets the RTX 4090 get more out of the same 24GB of GDDR6X VRAM as the RTX 3090.
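Those effective speeds follow from GDDR6X's 16x data-rate multiplier on the memory clock; assuming the RTX 4090's 384-bit memory bus (a known spec, but not stated in the spec box above), the quoted bandwidth drops out too:

```python
# GDDR6X quotes an effective data rate of 16x the memory clock.
memory_clock_mhz = 1_325
effective_gbps = memory_clock_mhz * 16 / 1000

# Bandwidth = effective rate x bus width (384-bit assumed) / 8 bits per byte.
bus_width_bits = 384
bandwidth_gbs = effective_gbps * bus_width_bits / 8

print(f"{effective_gbps:.1f} Gbps, {bandwidth_gbs:.1f} GB/s")  # 21.2 Gbps, 1017.6 GB/s (~1,018)
```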

When it comes to the number of cores, the RTX 4090 packs in 56% more streaming multiprocessors than the RTX 3090, 128 to 82, which translates into nearly 6,000 more CUDA cores than the RTX 3090 (16,384 to 10,496). That also means the RTX 4090 packs in 46 additional ray tracing cores and 184 additional Tensor cores, and next-gen cores at that, so they are even better at ray tracing and vectorized computations than their predecessors.
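All of those core-count deltas fall out of the SM counts. The per-SM layout used here (128 CUDA cores, 1 RT core, and 4 Tensor cores per SM) is our inference from the spec figures above, not something Nvidia states in this review:

```python
# Streaming multiprocessor counts quoted above.
sms_4090, sms_3090 = 128, 82

print(f"{sms_4090 / sms_3090 - 1:.0%} more SMs")  # 56% more SMs
print(sms_4090 * 128 - sms_3090 * 128)            # CUDA core delta: 5888 ("nearly 6,000")
print(sms_4090 - sms_3090)                        # RT core delta: 46
print(sms_4090 * 4 - sms_3090 * 4)                # Tensor core delta: 184
```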

This is immediately apparent when cranking up ray tracing to the max on games like Cyberpunk 2077, and especially when running DLSS 3, which makes the jump to full-frame rendering rather than just the pixel rendering done by earlier iterations of DLSS. 

  • Features & Chipset: 5 / 5

Nvidia GeForce RTX 4090: design

(Image credit: Future)

  • Yeah, that 16-pin connector is a pain to work with
  • A little bit thicker, but a little shorter, than the RTX 3090

The Nvidia GeForce RTX 4090 Founders Edition looks very much like its predecessor, though there are some subtle and not-so-subtle differences. First off, this is definitely a heavier card, so don’t be surprised that we now need to start adding support brackets to our PC builds. A bracket might have been optional in the last generation, but it is absolutely a necessity with the Nvidia RTX 4090.

The Founders Edition does not come with one, but third-party cards will likely include them and manufacturers are already starting to sell them separately so we would definitely suggest you pick one up.

Otherwise, the dimensions of the RTX 4090 aren’t that much different from the RTX 3090’s. It’s a bit thicker than the RTX 3090, but it’s a bit shorter as well, so if your case can fit an RTX 3090 FE, it will most likely fit an RTX 4090 FE.

The fans on either side of the card help pull air through the heatsink to cool off the GPU and these work reasonably well, considering the additional power being pulled into the GPU.

Speaking of power, the RTX 4090 introduces us to a new 16-pin connector that requires four 8-pin connectors plugged into an adapter to power the card. Considering the card’s 450W TGP, this shouldn’t be surprising, but actually trying to work with this kind of adapter in your case is probably going to be a nightmare. We definitely suggest that you look into the new PSUs coming onto the market that support this new connector without needing to resort to an adapter. If you’re spending this much money on a new graphics card, you might as well go whole hog and make your life – and cable management – a bit easier.

  • Design: 4 / 5

Nvidia GeForce RTX 4090: performance

(Image credit: Future)

  • Unassisted native 4K ray-traced gaming is finally here
  • Creatives will love this card

So here we are, the section that really matters in this review. In the lead up to the Nvidia GeForce RTX 4090 announcement, we heard rumors of 2x performance increases, and those rumors were either not too far off or were actually on the mark, depending on the workload in question.

(Image credit: Future / InfoGram)

Across our synthetic benchmark tests, the Nvidia RTX 4090 produced eyebrow-raising results from the jump, especially on more modern and advanced benchmarks like 3DMark Port Royal and Time Spy Extreme, occasionally fully lapping the RTX 3090 and running well ahead of the RTX 3090 Ti pretty much across the board.

(Image credit: Future / InfoGram)

This trend continues on to the GPU-heavy creative benchmarks, with the Nvidia RTX 4090’s Blender performance being especially notable for more than doubling the RTX 3090 Ti’s performance on two out of three tests, and blowing out any other competing 4K graphics card in Cycles rendering.

On Premiere Pro, the RTX 4090 scores noticeably higher than the RTX 3090 Ti, but the difference isn’t nearly as dramatic, since PugetBench for Premiere Pro measures full system performance rather than isolating the GPU. Adobe Photoshop, meanwhile, is a heavily rasterized workload, an area where AMD has held an advantage over the past couple of generations, and that shows pretty clearly in our tests.

(Image credit: Future / InfoGram)

Gaming is obviously going to see some of the biggest jumps in performance with the RTX 4090, and our tests bear that out. Most gaming benchmarks show roughly 90% to 100% improved framerates with the RTX 4090 over the RTX 3090, and roughly 55% to 75% better performance than the Nvidia RTX 3090 Ti.

(Image credit: Future / InfoGram)

These numbers are likely to jump even higher when you factor in DLSS 3. DLSS 3 isn’t available in any commercially available games yet, but we were able to test it on a couple of special builds of games that will be available shortly after the release of the RTX 4090. A few of these games had in-game benchmarks that we could use to test the performance of DLSS 3 using Nvidia’s FrameView tool, and the results showed two to three times better performance on some games than we got using current builds on Steam with DLSS 2.0.

Since we were using special builds and Nvidia-provided tools, we can’t necessarily declare these results representative until we are able to test them out on independent benchmarks, but just eyeballing the benchmark demos themselves we see an obvious improvement to the framerates of DLSS 3 over DLSS 2.0. 

Whether the two to three times better performance will hold up after its official release remains to be seen, but as much as DLSS 2.0 revolutionized the performance of the best PC games, DLSS 3 looks to be just as game-changing once it gets picked up by developers across the PC gaming scene. Needless to say, AMD needs to step up its upscaling game if it ever hopes to compete on the high-end 4K scene.

Now, there is a real question about whether most gamers will ever need anything coming close to this kind of performance, and there is such a thing as diminishing returns. Some might find that the native 4K ray tracing is neat, but kind of redundant since DLSS can get you roughly the same experience with an RTX 3090 or even an RTX 3080 Ti, but that’s a judgment that individual consumers are going to have to make. 

Personally, I think this card is at least approaching the point of overkill, but there’s no doubt that it overkills those frame rates like no other.

  • Performance: 5 / 5

Should you buy an Nvidia GeForce RTX 4090?

(Image credit: Future)

Buy the Nvidia GeForce RTX 4090 if…

You want the best graphics card on the market
There really is no competition here. This is the best there is, plain and simple.

You want native 4K ray-traced gaming
DLSS and other upscaling tech is fantastic, but if you want native 4K ray-traced gaming, this is literally the only card that can consistently do it.

You are a 3D graphics professional
If you work with major 3D rendering tools like Maya, Blender, and the like, then this graphics card will dramatically speed up your workflows.

Don’t buy the Nvidia GeForce RTX 4090 if…

You’re not looking to do native, max-4K gaming
Unless you’re looking to game on the bleeding edge of graphical performance, you probably don’t need this card.

You’re on a budget
This is a very premium graphics card by any measure.

Also consider

Nvidia GeForce RTX 3090
The RTX 3090 isn’t nearly as powerful as the RTX 4090, but it is still an amazing gaming and creative professional’s graphics card and is likely to be very cheap right now.

Read the full Nvidia GeForce RTX 3090 review

Nvidia GeForce RTX 3080
The RTX 3080 is a far cry from the RTX 4090, no doubt, but the RTX 3080 currently has the best price to performance proposition of any 4K card on the market. If you’re looking for the best value, the RTX 3080 is the clear winner here.

Read the full Nvidia GeForce RTX 3080 review

AMD Radeon RX 6950 XT
In another universe, AMD would have led the Big Navi launch with the RX 6950 XT. It is a compelling gaming graphics card, offering excellent 4K gaming performance on par with the RTX 3090 and generally coming in at the same price as the RTX 3080 Ti.

Read the full AMD Radeon RX 6950 XT review

First reviewed in October 2022

Nvidia GeForce RTX 4090 Report Card


Value While incredibly expensive, the performance gains make this luxury graphics card a downright bargain in comparison to the cards it is replacing. 4 / 5
Features & chipset The next-gen Lovelace architecture introduces fully matured tensor and ray-tracing cores into the AD102 GPU. Faster clock speeds certainly don’t hurt either. 5 / 5
Design The Founders Edition card isn’t that much bigger than the RTX 3090, but it is heavier, so you’ll likely need to invest in a GPU support bracket. Also, the 16-pin connector is going to challenge your cable management skills like nothing before. 4 / 5
Performance You’re looking at double the performance of the RTX 3090 and up to 75% better performance than the RTX 3090 Ti, and that’s not even factoring in what DLSS 3 will bring to the table. 5 / 5
Total Without a doubt, the Nvidia GeForce RTX 4090 is the best graphics card on the market, and it’s hard to see what beats this. It runs laps around the RTX 3090 and does it for slightly more than its predecessor’s MSRP. It is definitely for the enthusiasts out there, but it’s unquestionably worth that enthusiasm. 4.5 / 5

John (He/Him) is the US Computing Editor here at TechRadar and he is also a programmer, gamer, activist, and Brooklyn College alum currently living in Brooklyn, NY. 

Named by the CTA as a CES 2020 Media Trailblazer for his science and technology reporting, John specializes in all areas of computer science, including industry news, hardware reviews, PC gaming, as well as general science writing and the social impact of the tech industry.

You can find him online on Twitter at @thisdotjohn

Currently playing: The Last Stand: Aftermath, Cartel Tycoon

Nvidia RTX 4090 review | PC Gamer

Our Verdict

The RTX 4090 may not be subtle but the finesse of DLSS 3 and Frame Generation, and the raw graphical grunt of a 2.7GHz GPU combine to make one hell of a gaming card.

  • + Excellent gen-on-gen performance
  • + DLSS Frame Generation is magic
  • + Super-high clock speeds
  • - Massive
  • - Ultra-enthusiast pricing
  • - Non-4K performance is constrained
  • - High power demands

Why you can trust PC Gamer
Our expert reviewers spend hours testing and comparing products and services so you can choose the best for you. Find out more about how we test.

There’s nothing subtle about Nvidia’s GeForce RTX 4090 graphics card. It’s a hulking great lump of a pixel pusher, and while there are some extra curves added to what could otherwise look like a respin of the RTX 3090 shroud, it still has that novelty graphics card aesthetic. 

It looks like some semi-satirical plastic model made up to skewer GPU makers for the ever-increasing size of their cards. But it’s no model, and it’s no moon, this is the vanguard for the entire RTX 40-series GPU generation and our first taste of the new Ada Lovelace architecture. 

On the one hand, it’s a hell of an introduction to the sort of extreme performance Ada can deliver when given a long leash, and on the other, a slightly tone-deaf release in light of a global economic crisis that makes launching a graphics card for a tight minority of gamers feel a bit off.

This is a vast GPU that packs in 170% more transistors than even the impossibly chonk GA102 chip that powered the RTX 3090 Ti. And, for the most part, it makes the previous flagship card of the Ampere generation look well off the pace. That’s even before you get into the equal mix of majesty and black magic that lies behind the new DLSS 3.0 revision designed purely for Ada.

But with the next suite of RTX 40-series cards not following up until sometime in November, and even then being ultra-enthusiast priced GPUs themselves, the marker being laid for this generation points to extreme performance, but at a higher cost. You could argue the RTX 4090, at $1,599 (£1,699), is only a little more than the RTX 3090—in dollar terms at least—and some $400 cheaper than the RTX 3090 Ti.

Though it must be said, the RTX 3090 Ti released at a different time, and its pandemic pricing matched the then scarcity of PC silicon and reflected a world where GPU mining was still a thing. Mercifully, as 2022 draws to a close, ethereum has finally moved to proof-of-stake and the days of algorithmically pounding graphics cards to power its blockchain are over.

Now, we’re back to gamers and content creators picking up GPUs for their rigs, so what is the RTX 4090 going to offer them?

Nvidia RTX 4090 architecture and specs

(Image credit: Nvidia)

What’s inside the RTX 4090?

The RTX 4090 comes with the first Ada Lovelace GPU of this generation: the AD102. But it’s worth noting the chip used in this flagship card is not the full core, despite its already monstrous specs sheet. 

Still, at its heart are 16,384 CUDA cores arrayed across 128 streaming multiprocessors (SMs). That represents a 52% increase over the RTX 3090 Ti’s GA102 GPU, which was itself the full Ampere core. 

The full AD102 chip comprises 18,432 CUDA Cores and 144 SMs. That also means you’re looking at 144 third gen RT Cores and 576 fourth gen Tensor Cores. Which I guess means there’s plenty of room for an RTX 4090 Ti or even a Titan should Nvidia wish.

Memory hasn’t changed much, again with 24GB of GDDR6X running at 21Gbps, which delivers 1,008GB/sec of memory bandwidth.

(Image credit: Nvidia)


Spec GeForce RTX 4090 GeForce RTX 3090 Ti
Lithography TSMC 4N Samsung 8N
CUDA cores 16,384 10,752
SMs 128 84
RT Cores 128 84
Tensor Cores 512 336
ROPs 176 112
Boost clock 2,520MHz 1,860MHz
Memory 24GB GDDR6X 24GB GDDR6X
Memory speed 21Gbps 21Gbps
Memory bandwidth 1,008GB/s 1,008GB/s
L1 | L2 cache 16,384KB | 73,728KB 10,752KB | 6,144KB
Transistors 76.3 billion 28.3 billion
Die Size 608.5mm² 628.4mm²
TGP 450W 450W
Price $1,599 | £1,699 $1,999 | £1,999


On the raw shader side of the equation, things haven’t really moved that far along from the Ampere architecture either. Each SM is still using the same 64 dedicated FP32 units, but with a secondary stream of 64 units that can be split between floating point and integer calculations as necessary, the same as was introduced with Ampere. 

You can see how similar the two architectures are from a rasterisation perspective when looking at the relative performance difference between an RTX 3090 and RTX 4090. 

If you ignore ray tracing and upscaling, there is a corresponding performance boost that’s only a little higher than you might expect from the extra number of CUDA Cores dropped into the AD102 GPU. That slightly higher-than-commensurate increase, though, does show there are some differences at that level.

Part of that is down to the new 4N production process Nvidia is using for its Ada Lovelace GPUs. Compared with the 8N Samsung process of Ampere, the TSMC-built 4N process is said to offer either twice the performance at the same power, or half the power with the same performance.

This has meant Nvidia can be super-aggressive in terms of clock speeds, with the RTX 4090 listed with a boost clock of 2,520MHz. We’ve actually seen our Founders Edition card averaging 2,716MHz in our testing, which puts it almost a full 1GHz faster than the RTX 3090 of the previous generation.
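That "almost a full 1GHz" comes straight from the clocks quoted in this review (the RTX 3090's 1,695MHz boost clock is cited in the TechRadar section above):

```python
# Observed average boost clock in testing vs. the RTX 3090's rated boost.
rtx_4090_observed_mhz = 2_716
rtx_3090_boost_mhz = 1_695

print(rtx_4090_observed_mhz - rtx_3090_boost_mhz)  # 1021 MHz delta
```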

And, because of that process shrink, Nvidia’s engineers working with TSMC have crammed an astonishing 76.3 billion transistors into the AD102 core. Considering the 608.5mm² Ada GPU contains so many more transistors than the 28.3 billion of the GA102 silicon, it’s maybe surprising that it’s actually smaller than the 628.4mm² Ampere chip.

(Image credit: Future)

The fact Nvidia can keep on jamming this ever-increasing number of transistors into a monolithic chip, and still keep shrinking its actual die size, is testament to the power of advanced process nodes in this sphere. For reference, the RTX 2080 Ti’s TU102 chip was 754mm² and held just 18.6 billion 12nm transistors.

That doesn’t mean the monolithic GPU can continue forever, unchecked. GPU rival, AMD, is promising to shift to graphics compute chiplets for its new RDNA 3 chips launching in November. Given the AD102 GPU’s complexity is second only to the 80 billion transistors of the advanced 814mm² Nvidia Hopper silicon, it’s sure to be an expensive chip to produce. The smaller compute chiplets, however, ought to bring costs down, and drive yields up.

For now, though, the brute force monolithic approach is still paying off for Nvidia.

What else do you do when you want more speed and you’ve already packed in as many advanced transistors as you can? You stick some more cache memory into the package. This is something AMD has done to great effect with its Infinity Cache and, while Nvidia isn’t necessarily going with some fancy new branded approach, it is dropping a huge chunk more L2 cache into the Ada core.

The previous generation, GA102, contained 6,144KB of shared L2 cache, which sat in the middle of its SMs, and Ada is increasing that by 16 times to create a pool of 98,304KB of L2 for the AD102 SMs to play with. For the RTX 4090 version of the chip that drops to 73,728KB, but that’s still a lot of cache. The amount of L1 hasn’t changed per SM, but because there are now so many more SMs inside the chip in total, that also means there is a greater amount of L1 cache compared with Ampere, too.
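The cache arithmetic in that paragraph checks out neatly; the 128KB-per-SM L1 figure is our inference from the spec table (16,384KB across 128 SMs), not a number stated in the text:

```python
# L2 cache: Ada multiplies Ampere's shared pool by 16.
ga102_l2_kb = 6_144
ad102_l2_kb = ga102_l2_kb * 16
print(ad102_l2_kb)  # 98304 KB on the full AD102

rtx_4090_l2_kb = 73_728  # the cut-down RTX 4090 variant
print(rtx_4090_l2_kb / ad102_l2_kb)  # 0.75 - the 4090 keeps 3/4 of it

# L1 scales with SM count (128KB per SM, inferred from the spec table).
print(128 * 128)  # 16384 KB total L1 across the RTX 4090's 128 SMs
```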

But rasterisation isn’t everything for a GPU these days. However you might have felt about it when Turing first introduced real time ray tracing in games, it has now become almost a standard part of PC gaming. The same can be said of upscaling, too, so how an architecture approaches these two further pillars of PC gaming is vital to understanding the design as a whole.  

And now all three graphics card manufacturers (how weird it feels to be talking about a triumvirate now…) are focusing on ray tracing performance as well as the intricacies of upscaling technologies, it’s become a whole new theatre of the war between them.

(Image credit: Future)

This is actually where the real changes have occurred in the Ada streaming multiprocessor. The rasterised components may be very similar, but the third-generation RT Core has seen a big change. The previous two generations of RT Core contained a pair of dedicated units—the Box Intersection Engine and Triangle Intersection Engine—which pulled a lot of the RT workload from the rest of the SM when calculating the bounding volume hierarchy (BVH) algorithm at the heart of ray tracing.

Ada introduces another two discrete units to offload even more work from the SM: the Opacity Micromap Engine and Displaced Micro-Mesh Engine. The first drastically accelerates calculations when dealing with transparencies in a scene, and the second is designed to break geometrically complex objects down to reduce the time it takes to go through the whole BVH calculation.

Added to this is something Nvidia is calling "as big an innovation for GPUs as out-of-order execution was for CPUs back in the 1990s." Shader Execution Reordering (SER) has been created to switch up shading workloads, allowing the Ada chips to greatly improve the efficiency of the graphics pipeline when it comes to ray tracing by rescheduling tasks on the fly.

Intel has been working on a similar feature for its Alchemist GPUs, the Thread Sorting Unit, to help with diverging rays in ray traced scenes. And its setup reportedly doesn’t require developer input. For now, Nvidia requires a specific API to integrate SER into a developer’s game code, but Nvidia says it’s working with Microsoft, and others, to introduce the feature into standard graphics APIs such as DirectX 12 and Vulkan.

(Image credit: Nvidia)


Lastly, we come to DLSS 3, with its ace in the hole: Frame Generation. Yes, DLSS 3 is now not just going to be upscaling, it’s going to be creating entire game frames all by itself. Not necessarily from scratch, but by using the power of AI and deep learning to take a best guess at what the next frame would look like if you were actually going to render it. It then injects this AI generated frame in before the next genuinely rendered frame.

It’s voodoo, it’s black magic, it’s the dark arts, and it’s rather magnificent. It uses enhanced hardware units inside the fourth-gen Tensor Cores, called Optical Flow Units, to make all those in-flight calculations. It then takes advantage of a neural network to pull all the data from the previous frames, the motion vectors in a scene, and the Optical Flow Unit, together to help create a whole new frame, one that is also able to include ray tracing and post processing effects.

(Image credit: Nvidia)

Working in conjunction with DLSS upscaling (now just called DLSS Super Resolution), Nvidia states that in certain circumstances AI will be generating three-fourths of an initial frame through upscaling, and then the entirety of a second frame using Frame Generation. In total then it estimates the AI is creating seven-eighths of all the displayed pixels.
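That seven-eighths estimate follows directly from the description above: the GPU renders one quarter of the first frame's pixels (Super Resolution upscales the other three-fourths), and Frame Generation then produces the entire second frame with no traditional rendering at all:

```python
from fractions import Fraction

# Across a pair of output frames: 1/4 of frame one is rendered,
# none of frame two is (Frame Generation creates it wholesale).
rendered = (Fraction(1, 4) + 0) / 2
ai_generated = 1 - rendered

print(ai_generated)  # 7/8 of all displayed pixels come from AI
```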

And that just blows my little gamer mind.

Nvidia RTX 4090 performance

(Image credit: Future)

How does the RTX 4090 perform?

Look, it’s quick, okay. With everything turned on, with DLSS 3 and Frame Generation working its magic, the RTX 4090 is monumentally faster than the RTX 3090 that came before it. The straight 3DMark Time Spy Extreme score is twice that of the big Ampere core, and before ray tracing or DLSS come into it, the raw silicon offers twice the 4K frame rate in Cyberpunk 2077, too.

But if you’re not rocking a 4K monitor then you really ought to think twice before dropping $1,600 on a new GPU without upgrading your screen at the same time. That’s because the RTX 4090 is so speedy when it comes to pure rasterising that we’re back to the old days of being CPU bound in a huge number of games.

Synthetic performance

(Image credit: Future)


Therefore, the performance boost over the previous generation is often significantly lower when you look at the relative 1080p or even 1440p gaming performance. In Far Cry 6 at those lower resolutions the RTX 4090 is only 3% faster than the RTX 3090, and across 1080p and 4K, there is only a seven frames per second delta.

In fact, at 1080p and 1440p the RX 6950 XT is actually the faster gaming card.

That’s a bit of an outlier in terms of just how constrained Far Cry 6 is, but it is still representative of a wider trend in comparative gaming performance at the lower resolutions. Basically, if you had your heart set on nailing 360 fps in your favourite games on your 360Hz 1080p gaming monitor then you’re barking up the wrong idiom. 

1440p gaming performance

(Image credit: Future)

At 4K the performance uplift, generation-on-generation, is pretty spectacular. Ignoring Far Cry 6’s limited gaming performance, you’re looking at a minimum 61% higher performance over the RTX 3090. That chimes well with the increase in dedicated rasterising hardware, the increased clock speed, and more cache. Throw in some benchmarks with ray tracing enabled and you can be looking at 91% higher frame rates at 4K.

4K gaming performance

Image 1 of 8

(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)

But rasterisation is only part of modern games; upscaling is now an absolutely integral part of a GPU’s performance. We use our comparative testing to highlight raw architectural differences between graphics card silicon, and so run without upscaling enabled. It’s almost impossible otherwise to grab apples vs. apples performance comparisons.

It’s important to see just what upscaling can deliver, however, especially with something so potentially game-changing as DLSS 3 with Frame Generation. And with a graphically intensive game such as Cyberpunk 2077 able to be played at 4K RT Ultra settings at a frame rate of 147 fps, it’s easy to see the potential it offers.

You’re looking at a performance uplift over the RTX 3090 Ti, when that card’s running in Cyberpunk 2077’s DLSS 4K Performance mode itself, of around 145%. When just looking at the RTX 4090 on its own, compared with its sans-DLSS performance, we’re seeing a 250% performance boost. That’s all lower for F1 22, where there is definite CPU limiting even with our Core i9 12900K, but you will still see a performance increase of up to 51% over the RTX 3090 Ti with DLSS enabled.
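Those uplift percentages are simple ratios. A minimal sketch of the arithmetic, using the review's 147 fps Cyberpunk 2077 figure; the ~60 fps and ~42 fps baselines are back-calculated assumptions for illustration, not published numbers:

```python
def pct_uplift(new_fps: float, old_fps: float) -> float:
    """Percentage frame-rate uplift of new_fps over old_fps."""
    return (new_fps / old_fps - 1) * 100

# 147 fps (RTX 4090, DLSS 3 + Frame Generation) vs. an assumed ~60 fps
# RTX 3090 Ti DLSS Performance baseline, and an assumed ~42 fps native
# RTX 4090 baseline. Both baselines are hypothetical illustrations.
print(round(pct_uplift(147, 60)))  # 145
print(round(pct_uplift(147, 42)))  # 250
```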

DLSS performance

Image 1 of 5

(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)(Image credit: Future)

Again, compare the RTX 4090 running at 4K without upscaling to the same card with DLSS enabled and you’re looking at a 150% increase in frame rates.

MS Flight Sim, which we’ve also tested with an early access build supporting DLSS 3, is incredibly CPU bound, and it responds unbelievably well to Frame Generation. In fact, because it’s so CPU limited there is no actual difference between running with or without DLSS enabled if you don’t have Frame Generation running. But when you do run with those faux frames in place you will see an easy doubling of stock performance, to the tune of 113% higher in our testing.

The other interesting thing to note is that Nvidia has decoupled Frame Generation from DLSS Super Resolution (essentially standard DLSS), and that’s because interpolating frames does increase latency. 

Just enabling DLSS on its own, however, will still radically decrease latency, and so with the minimal increase that enabling Frame Generation adds on you’re still better off than when running at native resolution. That’s also because Nvidia Reflex has been made an integral part of DLSS 3, but still, if you’re looking for the lowest latency for competitive gaming, then not having Frame Generation on will be the ideal.

Native 4K | DLSS Performance | DLSS Performance with Frame Generation (Image credit: CDPR)

But for singleplayer gaming, Frame Generation is stunning. For me, it looks better than a more blurry, low frame rate scene, and cleans up the sometimes-aliased look of a straight DLSS Super Resolution image.

Personally, I’m going to be turning Frame Generation on whenever I can. 

Which admittedly won’t be that often to begin with. It will take time for developers to jump on board the new upscaling magic, however easy Nvidia says it is to implement. It is also restricted to Ada Lovelace GPUs, which means the $1,600 RTX 4090 at launch, and then the $1,199 RTX 4080 16GB and $899 RTX 4080 12GB following in November.

In other words, it’s not going to be available to the vast majority of gamers until Nvidia decides it wants to launch some actually affordable Ada GPUs, the cards which might arguably benefit most from such performance enhancements.

Power and thermals

Image 1 of 3

(Image credit: Future)(Image credit: Future)(Image credit: Future)

PCG test rig

CPU: Intel Core i9 12900K
Motherboard: Asus ROG Z690 Maximus Hero
Cooler: Corsair H200i RGB
RAM: 32GB G.Skill Trident Z5 RGB DDR5-5600
Storage: 1TB WD Black SN850, 4TB Sabrent Rocket 4Q
PSU: Seasonic Prime TX 1600W
OS: Windows 11 22H2
Chassis: DimasTech Mini V2
Monitor: Dough Spectrum ES07D03

And what of power? Well, it almost hit 500W running at stock speeds. But then that’s the way modern GPUs have been going—just look at how much power the otherwise efficient RDNA 2 architecture demands when used in the RX 6950 XT —and it’s definitely worth noting that’s around the same level as the previous RTX 3090 Ti, too. Considering the performance increase, the negligible increase in power draw speaks to the efficient 4N process.

The increase in frame rates does also mean that in terms of performance per watt the RTX 4090 is the most efficient modern GPU on the market. Which seems a weird thing to say about a card that was originally rumoured to be more like a 600W TGP option.
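That performance-per-watt claim can be sketched as frames per watt. The ~460 W and ~450 W figures echo the power draws discussed above; the frame rates here are invented purely for illustration:

```python
def fps_per_watt(avg_fps: float, board_power_w: float) -> float:
    """Crude efficiency metric: average frame rate per watt of board power."""
    return avg_fps / board_power_w

# Hypothetical 4K averages: RTX 4090 at 120 fps / ~460 W vs.
# RTX 3090 Ti at 75 fps / ~450 W.
ada = fps_per_watt(120, 460)     # ~0.261 fps/W
ampere = fps_per_watt(75, 450)   # ~0.167 fps/W
print(ada > ampere)  # True: higher frame rates at similar power = better efficiency
```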

Nvidia RTX 4090 analysis

(Image credit: Future)

How does the RTX 4090 stack up?

There really is no denying that the RTX 4090, the vanguard of the coming fleet of Ada Lovelace GPUs, is a fantastically powerful graphics card. It’s ludicrously sized, for sure, which helps both the power delivery and thermal metrics. But it’s also a prohibitively priced card for our first introduction to the next generation of GeForce GPUs.

Sure, it’s only $100 more than the RTX 3090 was at launch, and $400 less than the RTX 3090 Ti, which could even make it the best value RTX 40-series GPU considering the amount of graphics silicon on offer. But previous generations have given the rest of us PC gamers an ‘in’ to the newest range of cards, even if they later introduced ultra-enthusiast GPUs for the special people, too.

At a time of global economic hardship, it’s not a good look to only be making your new architecture available to the PC gaming ‘elite’.

But the entire announced RTX 40-series is made of ultra-enthusiast cards, with the cheapest being a third-tier GPU—a distinctly separate AD104 GPU, and not just a cut-down AD102 or AD103—coming in with a nominal $899 price tag. Though as the 12GB version of the RTX 4080 doesn’t get a Founders Edition I’ll be surprised if we don’t see $1,000+ versions from AIBs at launch.

That all means no matter how impressed I am with the technical achievements baked into the Ada Lovelace architecture—from the magic of DLSS 3 and Frame Generation, to the incredible clock speed uptick the TSMC 4N process node delivers, and the sheer weight of all those extra transistors—it’s entirely limited to those with unfeasibly large bank accounts.

And, at a time of global economic hardship, it’s not a good look to only be making your new architecture available to the PC gaming ‘elite’.

(Image credit: Future)

Nvidia will argue there is silicon enough in the current suite of RTX 30-series cards to cater to the lower classes, and that eventually, more affordable Ada GPUs will arrive to fill out the RTX 40-series stack in the new year. But that doesn’t change the optics of this launch today, tomorrow, or in a couple of months’ time.

Which makes me question exactly why it’s happening. Why has Nvidia decided that now is a great time to change how it’s traditionally launched a new GPU generation, and stuck PC gamers with nothing other than an out-of-reach card from its inception?

Do the Ada tricks not seem as potent further down the stack? Or is it purely because there are just so many RTX 3080-and-below cards still in circulation?

I have a so-far-unfounded fear that the RTX 4080, in either guise, won’t have the same impact as the RTX 4090 does, which is why Nvidia chose not to lead with those cards. Looking at the Ada whitepaper (PDF warning), particularly the comparisons between the 16GB and 12GB RTX 4080 cards and their RTX 3080 Ti and RTX 3080 12GB forebears, it reads like the performance improvement in the vast majority of today’s PC games could be rather unspectacular.

Both Ada GPUs have fewer CUDA cores, and far lower memory bandwidth numbers, compared with the previous gen cards. It looks like they’re relying almost entirely on a huge clock speed bump to lift the TFLOPS count, and the magic of DLSS 3 to put some extra gloss on their non-RT benchmarks.

In a way it might seem churlish to talk about fears over the surrounding cards, their GPU makeup, and release order in a review of the RTX 4090. I ought to be talking about the silicon in front of me rather than where I’d want it to exist in an ideal world, because this is still a mighty impressive card, both from a gen-on-gen point of view and for what the Ada architecture enables.

(Image credit: Future)

I said at the top of the review there’s nothing subtle about the RTX 4090, but there is a level of finesse here to be applauded. The RT Core improvements take us ever further along the road to a diminished hit when enabling the shiny lighting tech, and the Tensor Cores give us the power to create whole game frames without rendering. 

Seriously, DLSS with Frame Generation is stunning. 

I’m sure there will be weird implementations as developers get to grips (or fail to get to grips) with enabling the mystical generative techniques in their games, where strange visual artefacts ruin the look of it and create memes of their own, but from what I’ve experienced so far it looks great. Better than 4K native, in fact.

And while the $1,600 price tag might well be high, it is worth noting that spending big on a new generation of graphics cards is probably best done early in its lifespan. I mean, spare a thought for the people who bought an RTX 3090 Ti in the past seven months. For $2,000. They’re going to be feeling more than a little sick right now, looking at their overpriced, power-hungry GPU that is barely able to post half the gaming performance of this cheaper, newer card.

It feels somewhat akin to the suffering of Radeon VII owners once the RX 5700 XT came out. Only more costly.

Nvidia RTX 4090 verdict

(Image credit: Future)

Should you buy an RTX 4090?

The RTX 4090 is everything you would want from an ultra high-end graphics card. It makes the previous top card of the last generation look limp by comparison, brings unseen new technology to gamers, and almost justifies its cost by the sheer weight of silicon and brushed aluminium used in its construction.

There’s no other GPU that can come near it right now.

It’s the very epitome of that Titan class of GPU; all raw power and top-end pricing.

Which would be fine if it had launched on the back of a far more affordable introduction to the new Ada Lovelace architecture. But that’s something we’re not likely to see until the dawn of 2023 at the earliest. The 2022 Ada lineup starts at $899, and that too is prohibitively expensive for the majority of PC gamers.

There’s no denying it is an ultra-niche ultra-enthusiast card, and that almost makes the RTX 4090 little more than a reference point for most of us PC gamers. We’re then left counting the days until Ada descends to the pricing realm of us mere mortals.

In itself, however, the RTX 4090 is an excellent graphics card and will satisfy the performance cravings of every person who could ever countenance spending $1,600 on a new GPU. That’s whether they’re inconceivably well-heeled gamers, or content creators not willing to go all-in on a Quadro card. And it will deservedly sell, because there’s no other GPU that can come near it right now.

Read our review policy

Nvidia RTX 4090

The RTX 4090 may not be subtle but the finesse of DLSS 3 and Frame Generation, and the raw graphical grunt of a 2.7GHz GPU combine to make one hell of a gaming card.

Dave has been gaming since the days of Zaxxon and Lady Bug on the Colecovision, and code books for the Commodore Vic 20 (Death Race 2000!). He built his first gaming PC at the tender age of 16, and finally finished bug-fixing the Cyrix-based system around a year later. When he dropped it out of the window. He first started writing for Official PlayStation Magazine and Xbox World many decades ago, then moved onto PC Format full-time, then PC Gamer, TechRadar, and T3 among others. Now he’s back, writing about the nightmarish graphics card market, CPUs with more cores than sense, gaming laptops hotter than the sun, and SSDs more capacious than a Cybertruck.

Palit GeForce RTX 3080 Ti GameRock OC

The NVIDIA GeForce RTX 3080 Ti launched in early June, and partners have already brought an impressive number of their own modifications to market. Manufacturers take different approaches to make their models stand out: some aim for the highest possible factory overclock, some invest in powerful yet quiet cooling systems, others focus on casing materials, while still others try to attract buyers with original, stylish designs complemented by non-standard lighting.

Today we are looking at just such a card, the Palit GeForce RTX 3080 Ti GameRock OC, for which a plain horizontal mounting position is almost a crime. Its main highlight is the design of the cooling shroud, which begs to be admired through a transparent case side panel. First, though, let's go over the detailed specifications.





Latest drivers can be downloaded from the Palit website or the GPU manufacturer’s website


Palit GeForce RTX 3080 Ti GameRock OC

GPU: NVIDIA GA102-225
Process node: 8 nm
CUDA cores: 10,240
Base / boost clock: 1365 / 1725 MHz
Memory: 12 GB GDDR6X
Effective memory frequency: 19,000 MHz
Memory bus width: 384-bit
Memory bandwidth: 912 GB/s
Internal interface: PCI Express 4.0 x16
External interfaces: 1 x HDMI 2.1, 3 x DisplayPort 1.4a
Recommended power supply: 850 W
Additional PCIe power connectors: 3 x 8-pin
Dimensions (measured in our test lab): 304 x 136 x 60 mm (official: 300 x 132 x 59 mm)


Packaging and Contents

The Palit GeForce RTX 3080 Ti GameRock OC video card comes in a large cardboard box with excellent information content and a pleasant, brightly colored design. But there are no system requirements on the package, although in this case they are very important: the graphics card requires three 8-pin PCIe cables and a power supply with a recommended power of 850W.

In the kit we found standard documentation, a software CD, a dual 6-pin to 8-pin power adapter, as well as a transparent plastic support bracket and a lighting synchronization cable.


Taking the video card out of the box, the design of the cooling shroud immediately catches your eye; this is one of the most original approaches to graphics card design. The central part of the shroud is made of translucent plastic shaped like a scattering of crystals. Some may not like this style, but fans of cases with a transparent side panel and vertical video card mounting will definitely appreciate it.

First of all, translucent plastic is needed for greater effect when the backlight is on. Its operation can be synchronized with other components using the supplied cable.

The reverse side of the graphics card is protected by a metal support plate. It increases the rigidity of the structure and is involved in heat dissipation.

There is also a small switch between two BIOS firmwares, «Silent» and «Performance»: the first prioritizes lower noise, the second higher performance.

The video card is powered by a PCI Express 4.0 x16 slot and three 8-pin PCIe connectors. Thanks to their good location, the cooler does not make it difficult to disconnect the power cables. Next to them is a connector for the backlight sync cable.

The following set of interfaces is available for image output:

  • 1 x HDMI 2.1;
  • 3 x DisplayPort 1.4a.

The maximum resolution is 7680 x 4320.

The video accelerator is based on the 8nm NVIDIA GA102-225 GPU. Its base frequency is 1365 MHz, and the dynamic rises to 1725 MHz, which is 60 MHz or 4% higher than the reference.

The video memory is made up of Micron GDDR6X chips with a total capacity of 12 GB. They operate at a reference effective frequency of 19,000 MHz. Data exchange between the graphics core and memory is carried out over a 384-bit bus, which can move 912 GB of data per second.
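That 912 GB/s figure follows directly from the bus width and the effective data rate; a minimal sketch of the arithmetic:

```python
def mem_bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    """Peak memory bandwidth in GB/s = bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8   # 384-bit bus -> 48 bytes
    transfers_per_sec = effective_mhz * 1e6   # effective (data) rate in Hz
    return bytes_per_transfer * transfers_per_sec / 1e9

# 384-bit bus at an effective 19,000 MHz
print(mem_bandwidth_gbs(384, 19_000))  # 912.0
```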

Cooling system

Palit GeForce RTX 3080 Ti GameRock OC graphics card with installed cooling system occupies 2.7 expansion slots and has a total length of 304 mm.

The cooler consists of a large two-section aluminum radiator and six nickel-plated copper heat pipes with a diameter of 6 mm. A nice bonus is that the main radiator also cools the video memory chips. In turn, the power subsystem components contact the black heat-spreader plate through thermal pads.

The active part consists of three fans with a blade diameter of 86 mm, based on durable double ball bearings. They also support a passive mode of operation at low load, for example when watching a video.

With automatic fan speed adjustment, under maximum load the graphics core heated up to 72°C against a critical value of 93°C. The fans were running at just over 2000 rpm. Subjectively, the noise was at an average level and quite comfortable for everyday use.

For comparison, let’s recall the performance of the recently reviewed ASUS TUF Gaming GeForce RTX 3080 Ti OC Edition. Its 3-fan cooler cooled the GPU to 66°C. And at the same time, its frequency was higher: 1710 versus 1665 MHz.

In the maximum fan speed mode (~3000 rpm), the GPU temperature dropped to 65°C. The noise in this mode exceeded the average level, but remained comfortable, since the fans themselves are quiet and only the movement of air is heard.

The competitor's cooler in the same mode was able to cool the GPU down to 61°C, and its clock speed was again higher: 1710 versus 1695 MHz.

Under a real game load in Assassin's Creed Odyssey at Full HD resolution, the temperature was 65°C with the fans running at about 1700 rpm. The GPU frequency rose to 1935 MHz.

The transition to 4K resolution brought a rise in temperature to 73°C. The GPU frequency stayed above 1900 MHz, and fan speed almost reached 1950 rpm.

In the absence of load, the frequencies of the graphics core and memory were automatically reduced, cutting the power consumption and heat output of the video accelerator as a whole. In this mode, the GPU temperature dropped to 58°C, and the cooler switched to passive mode.

In general, the cooling system proved worthy. It has a good performance margin, and even at maximum fan speed it will not disturb you with excessive noise, because the fans run quietly. We also noticed no coil whine.

RTX 2060 Graphics Cards: Top 6 Models

The RTX 2060 brings Turing's two headline features to a lower price point: real-time ray tracing (RT) and anti-aliasing with deep learning algorithms (DLSS).

DLSS is perhaps the most exciting feature to debut in Nvidia's Turing cards, thanks to its promise of massive framerate increases in exchange for a slight drop in image quality. Whether used to maximize frame rates or to offset the added performance cost of real-time ray tracing, DLSS definitely has potential. The feature works by rendering a lower resolution image, which is then scaled up by an efficient deep learning algorithm to produce a high resolution final image. This allows the RTX hardware to produce an image that looks like a standard full size render using around 50% of the shading power, but it does require support to be added to each game individually. So far, DLSS is of limited use, but we hope it will become a more widespread standard.
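The "around 50% shading power" claim is, roughly, a pixel-count argument. A minimal sketch, assuming shading cost scales with the number of rendered pixels; the 2560x1440 internal resolution for 4K output is an illustrative assumption, not an Nvidia-published figure:

```python
def shading_cost_ratio(render_res: tuple[int, int], output_res: tuple[int, int]) -> float:
    """Approximate shading work relative to native, assuming cost ~ pixel count."""
    rw, rh = render_res
    ow, oh = output_res
    return (rw * rh) / (ow * oh)

# e.g. rendering internally at 2560x1440 and upscaling to 3840x2160
print(round(shading_cost_ratio((2560, 1440), (3840, 2160)), 2))  # 0.44
```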

Since some early implementations of DLSS look worse than the actual resolution drop, Nvidia will have to keep improving this feature to be a real benefit for these cards. DLSS may be the RTX 2060’s secret weapon that allows it to challenge the GTX 1080 and even the GTX 1080 Ti in some games, but DLSS needs to be better implemented and available in more games.

In addition to including RTX and DLSS, Nvidia Turing GPUs include other improvements over their predecessors. Variable rate shading is one of the great inclusions, as this technology reduces the processing of scene elements that don't need much attention. At the most aggressive settings, this can result in a frame rate increase of around 15 percent. All these technologies are potentially interesting for virtual reality as well.

Speaking of VR, some RTX 2060 models also include one USB-C port that supports the new VirtualLink standard. This means you’ll be able to connect future VR headsets with just a single USB-C cable, rather than the usual clunky HDMI and USB 3.0 cables. DisplayPort 1.4a is also included in case you need to connect an 8K 60Hz display with just one cord.

Finally, content creators such as streamers and YouTubers will be able to use the updated NVENC encoder, which allows higher resolutions and supports a wider range of video standards. More importantly, the new version of the encoder delivers better quality with less CPU usage, making 4K streaming possible on regular PCs, which is especially interesting for RTX 2060 owners who are more likely to use low or mid-range PCs.

Finally, it’s worth mentioning that you’ll get G-Sync, including G-Sync Ultimate and G-Sync Compatible, giving you a wide variety of monitors with variable refresh rates. You’ll also have the option to use the GeForce Experience software, including easy options for streaming and recording gameplay via ShadowPlay. Nvidia cards are more popular than their AMD counterparts, so you’ll find that GeForce cards are well supported by aftermarket accessories, online help forums, and more. The RTX card is newer, though, so its ecosystem is understandably smaller.

Given the RTX 2060’s strong raw performance lead over the GTX 1070 (and often the GTX 1070 Ti) and the potential for new technologies like RT and DLSS, the new model seems like a better value. Although the RTX 2060 comes with less VRAM than the GTX 1070, its memory is faster and the card overall is stronger. That’s why we recommend the RTX 2060 as one of the best graphics cards on the market.


Average FPS in RTX 2060 games (Core i7 8700K)

Top 6 RTX 2060 graphics cards in 2020



Graphics settings: maximum

PUBG: 1080p 120 fps, 1440p 85 fps
Battlefield 5: 1080p (DXR off) 80 fps, 1080p (DXR on) 55 fps, 1440p (DXR off) 70 fps, 1440p (DXR on) 37 fps

The first in this list of strong graphics cards is the ZOTAC Gaming GeForce RTX 2060. The card is available in two versions: a standard version with two fans and a factory overclocked AMP Edition. The AMP version will put out a little more power and has a white LED logo on it, unlike the standard version.

If you like overclocking, the AMP Edition is well worth the small price increase. That said, the Standard Edition is a decent card on its own, though in terms of raw numbers, it’s the weakest card on our list. If you want to save some money, the standard version is great.

ZOTAC boasts its own Ice Storm 2.0 cooling hardware that works surprisingly well. The card cools well, and besides, it is compact, which means it fits well in most cases.

It’s not the loudest card, but it’s not the quietest either. However, in most installations, the sound of this card should not be noticeable.

ZOTAC’s standard dual-fan graphics card is the cheapest on this list, but still competitive. If you’re willing to shell out a little for the AMP version, you’ll see a noticeable increase in clock speed because it comes factory-overclocked to 1680MHz, and can be boosted up to 1800MHz.

The design of this card may leave something to be desired if you are not a fan of angular, spiky, silver-on-black styling. But it has its own original look, and at such a low price you will hardly notice it.

Also, if you’re looking for something a little cooler, the ZOTAC RTX 2060 comes in an Extreme version with three fans.

Specifications ZOTAC GeForce RTX 2060 AMP

CUDA cores: 1920


  Cons:
  • Angled design
  • No backlight (white LEDs on AMP version)


You are probably familiar with the ASUS ROG STRIX line of graphics cards — they are longtime players. ROG STRIX is higher class and more expensive than the previous model.

This card cools well thanks to a three-fan setup. It also has the highest overclocking capability of any of the others on our list. However, even without these specs, users will appreciate the attractive design of the ROG STRIX (those with room for it).

Even with three fans, this card is quiet and predictably performs well. Plus, ROG STRIX features fully customizable RGB lighting that can be synchronized with Aura Sync software, meaning you can achieve almost any color combination, even rainbows!

The ROG STRIX fans don’t run when the card is below 55 degrees Celsius and it remains fairly quiet despite its size.

This graphics card makes the most of its technology. You get all the bells and whistles, but you also pay for them accordingly. This is the most expensive card on our list of the best RTX 2060 graphics cards of 2019, although the price reflects the quality and design. And a lot of people fall for the look.

However, there is a major problem. More precisely, an elephant-sized one: before buying this card, make sure there is enough room in your case to fit this giant!

All in all, this card does what it promises; the price is high, but it's not that bad. You can try to catch this card at a discount. The main thing is to be prepared!

Specifications ASUS ROG STRIX GeForce RTX 2060 GAMING

  Pros:
  • Excellent, quiet cooling
  • High overclocking potential

  Cons:
  • Price
  • Large size, takes up two slots, which must be considered when assembling

    EVGA GeForce RTX 2060 SC Ultra Gaming

    If you’re familiar with EVGA, you’ll know that they consistently release well-performing cards at a reasonable price. The SC Ultra Gaming is no exception. This model is standard size with two fans, and incidentally it is EVGA's first use of a hydrodynamic-bearing fan. It sits in the middle of our list in terms of performance and price.

    The junior version of the card, the SC Gaming, is a 2.75 slot model with a single fan, which is a great solution for tight spaces. It uses a specially designed shroud around a thicker heatsink, which provides enough cooling capacity for a single fan to be effective. Of course, the fan in this model can be a bit loud and the card can get a little warm, but that's to be expected from a single-fan card with this much power. It is a compact powerhouse that is ideal for small form factor cases.

    I like that this card comes from a large family. SC Gaming is not the only brother of SC Ultra Gaming. There are also XC models and just Gaming. EVGA GeForce RTX 2060 graphics cards offer a wide range of low-noise fans, heatsink sizes, and cooling technologies, so you can choose a card that’s perfect for any chassis.

    Both models are quite affordable, although the SC Gaming is of course a bit cheaper. This card is also the only one on our list with a DVI port, although that probably won't be a deciding factor for most users.

    This card lacks some bells and whistles in the form of lighting. However, that is not uncommon for a mid-range card, and it's no great loss: the card works well, fits into small cases, and does all this at an affordable price. This is a solid and reliable choice.

    Specifications EVGA GeForce RTX 2060 SC Ultra Gaming

    CUDA cores: 1920
    Core Clock:


    • No bells and whistles (namely LEDs)

    MSI GeForce RTX 2060 GAMING Z

    Once again, MSI has managed to create a high-end, great graphics card at an affordable price. The GAMING Z card comes with all the extras in addition to solid performance.

    This card comes in a surprisingly small form factor in addition to its other benefits, and that doesn't seem to hurt its cooling. It will be an excellent choice for small cases or builds that are already fairly full.

    It could easily rival the three-fan models in price, but for the money you get a compelling package. In general, it's hard to find anything bad to say about this video card. It is simply a work of art.

    The disadvantages include the fact that it does not come with three fans. Although, not every PC builder is looking for three fans. The price of this model is above average, but it is acceptable for the quality you get.

    We should note that the parts of the card that look silver on the packaging are actually more of a steel grey, matching the color of the backplate. The silver accents on the GAMING Z are stylish, and the RGB LEDs are strategically placed. It's clear that a lot of time went into the art design of this card.

    Forget design, because the graphics card puts out all the performance you can get from a dual-fan RTX 2060 card. While it can’t beat triple-fan models, it comes very close.


    • Great performance and design all in one


    • Good price

    GIGABYTE GeForce RTX 2060 Gaming OC Pro

    The GIGABYTE Gaming OC Pro is a great value tri-fan card. Unfortunately, the fans are a little loud when the card is at its full potential, but that rarely happens. Luckily, three long-lasting 76mm axial fans, a three-section aluminum heatsink, four 6mm copper heat pipes, and a separate low-profile VRM heatsink cool the card very well, even when it's working hard.

    The card comes in a black version, which is slightly plain, and a white version. This is the only white graphics card on our list, and white is generally not very common among video cards. Both versions feature RGB LED lighting for the GIGABYTE lettering on the front of the card. While it may seem out of place on the black card, it actually looks good on the white version.

    Functionally and structurally, this is an excellent card, one of the most affordable video cards with full support for the advanced graphics technologies of the Turing microarchitecture.


    Pros:
    • Excellent performance
    • Affordable graphics card with three fans
    • Available in white and black

    Cons:
    • Virtually none

    MSI GeForce RTX 2060 VENTUS OC

    Another card released by MSI, the VENTUS OC, completes our list of RTX 2060 cards. It forgoes some of the extras on account of cost savings.

    An interesting shortcoming of this card actually lies in its programming. Due to the low fan curve out of the box, this card can get warm. However, this can be easily fixed by adjusting the fan curve in the settings. This should not be a problem for those who know how to build a computer!

    MSI VENTUS OC is one of the most affordable high performance RTX 2060 graphics cards.
