
Intel Arc A770 Limited Edition Review: Bringing Back Midrange GPUs


When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.

Intel’s first legitimate discrete GPU in forever

Tom’s Hardware Verdict

The Intel Arc A770 LE delivers a compelling midrange option for anyone willing to take a chance on continued driver support. You get 16GB of VRAM, an attractive and understated design, and performance that easily rivals the RTX 3060.

Pros
  • +

    Great price-to-performance ratio

  • +

    Excellent video encoding hardware

  • +

    16GB VRAM, XeSS, AV1 support

  • +

    Welcome to the GPU party, Intel!

Cons
  • Drivers remain finicky at times

  • Arrived far later than initially expected

  • Uses more power than the direct competition

  • XeSS adoption will be an uphill battle

Why you can trust Tom’s Hardware
Our expert reviewers spend hours testing and comparing products and services so you can choose the best for you. Find out more about how we test.

The Intel Arc A770 graphics card has finally arrived, along with its little brother, the Intel Arc A750. After a rather disappointing Arc A380 review last month, Intel has a lot to prove with the bigger and far more potent A770. And it mostly succeeds! While there are certainly caveats — mostly about drivers, XeSS adoption, and long-term support — Intel clearly wants to prove it can compete with the likes of AMD and Nvidia, perhaps even laying claim to a seat at the table among the best graphics cards.

It’s been a long road to Intel re-entering the dedicated graphics card market. It first attempted to make a dedicated graphics card with the i740 back in the late 90s — that would be «Gen1 Graphics» if you’re wondering — before giving up. Larrabee was another attempt at a potential GPU, but it got mired in internal politics. Now, despite claims of Arc being dead in the water, we have cards in hand and can finally see what Intel’s best-ever GPU brings to the table.

How does Arc A770 stack up against the AMD and Nvidia competition? Is XeSS a true DLSS competitor, and what about all the AV1 video hype? Could Intel fix its drivers to the point where it won’t be one of the first things people warn you about? We’ll look to answer all these questions and more, but let’s start with the high-level overview.

Arc Alchemist Architecture Recap

(Image credit: Tom’s Hardware)

We’ve had plenty of details on Intel’s Arc Alchemist architecture, which we’ve known about since last year. We now have final clocks and specs, as well as pricing, but it’s worth revisiting the road Intel has traveled. The first time we heard about Arc GPUs, we anticipated a late 2021 or early 2022 launch. By that metric, Intel is nearly a year late to the party — blame Covid, supply chain issues, and even the Russian invasion of Ukraine if you’re looking for reasons.

On one level, Arc Alchemist would be the 13th iteration of Intel Graphics. At the same time, we’d also mark this down as generation one. There are enough major overhauls of the fundamental design, features, and functionality that a card like the A770 has very little in common with Intel’s current 12th-Gen integrated Xe Graphics. There are several new additions to the GPU that provide a clear demarcation between the pre-Arc and the post-Arc world of Intel GPUs.

Starting with the main building block, the Xe-Core, Intel is apparently waffling a bit on naming conventions. The «Xe Vector Engines (XVE)» are still referred to as «Execution Units (EUs)» at times — a rose by any other name would be as fast, I suppose. Intel groups 16 XVEs into a single Xe-Core, which also includes other functionality like the Xe Matrix Engines (XMX) blocks, Load/Store unit, and L1 cache. Attached to each Xe-Core (but not directly a part of it) are the Thread Sorting Unit (TSU), BVH Cache (for ray tracing), and the other ray tracing hardware for BVH ray/box intersection and ray/triangle intersection.

(Image credit: Tom’s Hardware)

Each XVE can compute eight FP32 operations per cycle. That gets loosely translated into «GPU cores,» though we prefer to call them «GPU shaders,» and each is roughly (very roughly!) equivalent to an AMD or Nvidia shader. Each Xe-Core thus has 128 shader cores and sort of maps to an (upcoming) AMD RDNA 3 Compute Unit (CU) or an Nvidia Streaming Multiprocessor (SM) — both of which will also have 128 GPU shaders. They’re all SIMD (single instruction multiple data) designs, and Arc Alchemist has enhanced the shaders to meet the full DirectX 12 Ultimate feature set.

The XVEs also have additional functionality, like INT and EM support. There are a lot of memory address calculations in graphics workloads, which is where the INT functionality often helps, though such things can also be used for cryptographic hashing. EM stands for «Extended Math» and provides access to more complex stuff like transcendental functions — log, exponent, sine, cosine, etc. The INT execution port is shared with the EM functionality, which tends to be used less frequently than INT and FP calculations.

Meanwhile, the XMX blocks are comparable to Nvidia’s Tensor cores. Each XMX unit can handle either FP16/BF16 (16-bit floating point/brain floating point), INT8 (8-bit integer), or INT4/INT2 (4-bit/2-bit integer) data. These blocks are useful with deep learning workloads, including Intel’s XeSS (Xe Super Sampling) upscaling algorithm. They can be used in any workload that just needs a lot of lower-precision number crunching, and each XMX block can do either 128 FP16, 256 INT8, or 512 INT4/INT2 operations per clock.

The processor can co-issue instructions to all three execution ports — FP, INT/EM, and XMX — at the same time, and all three execution blocks can be active at the same time. Again, that’s loosely similar to Nvidia’s architectures, though it’s important to note that Nvidia has a shared INT/FP port, so if the INT aspect is active, it cuts the potential FP throughput in half.

(Image credit: Tom’s Hardware)

Stepping up one level, Intel has what it calls a render slice, which is sort of analogous to Nvidia’s Graphics Processing Cluster (GPC). The render slice consists of four Xe-Cores, and then adds texture units («Sampler» in the above image) and Render Outputs (ROPs, the «Pixel Backend» in the image), plus some other hardware.

Intel has two main designs with Arc Alchemist, one with two render slices and up to eight Xe-Cores, and the other is a larger design with up to eight render slices. The Arc A770 represents the fully enabled larger design and thus has the full 32 Xe-Cores, while the Arc A380 uses the fully enabled smaller design — with other variants (including mobile models) using partially enabled chips.

(Image credit: Tom’s Hardware)

Here’s the full block diagram for the larger ACM-G10 die, and the A770. Along with all the render slices and other functionality, there’s a large chunk of L2 cache and the memory fabric linking all of the slices together, up to 16MB in the A770. That might not seem that large, considering Nvidia’s Ada Lovelace and the new AD102 has up to 96MB of L2 cache, but keep in mind the different performance and price levels, plus the fact that Nvidia’s soon-to-be-previous-gen GA102 chip (used in the RTX 3090 Ti and other cards) only had up to 6MB of L2 cache.

The block diagram also shows the Xe Media Engine, which handles various video/image codecs, including AV1, VP9, HEVC, and H.264. There are two full media engines (MFX) in both the larger and smaller Arc GPUs. These can work on separate streams or combine their computational power to double their encoding throughput.

On the left of the block diagram — and it’s important to note that this is not a representation of the actual chip layout — Intel shows the four display engine blocks, PCI Express link, Copy Engine, and Memory Controllers. The GDDR6 controllers are actually positioned around the outside of the GPU, linking up to the external memory, and the ACM-G10 has up to eight 32-bit memory controllers.

It’s not clear if Intel can disable those individually on the larger GPU or if it needs to be done in pairs, though we do know on the smaller ACM-G11 that the three 32-bit controllers can be individually disabled. Meanwhile, the mobile A730M has a 192-bit interface with two of the controllers fused off, while the mobile A550M has a 128-bit interface with four controllers disabled.

(Image credit: Tom’s Hardware)

Intel Arc A770 Specifications

With that overview of the architecture out of the way, here are the full specifications for the Arc A770, A750, (upcoming) A580, and A380. Performance potential here is theoretical teraflops / teraops (trillions of operations per second), but remember that not all teraflops and teraops are created equal. We need real-world testing to see what sort of actual performance the architecture can deliver, but we’ll get to that soon enough.


Intel Arc GPU Specifications
Graphics Card Arc A770 16GB Arc A770 8GB Arc A750 Arc A580 Arc A380
Architecture ACM-G10 ACM-G10 ACM-G10 ACM-G10 ACM-G11
Process Technology TSMC N6 TSMC N6 TSMC N6 TSMC N6 TSMC N6
Transistors (Billion) 21.7 21.7 21.7 21.7 7.2
Die Size (mm^2) 406 406 406 406 157
Xe-Cores 32 32 28 24 8
GPU Shaders 4096 4096 3584 3072 1024
Matrix Cores 512 512 448 384 128
Ray Tracing Units 32 32 28 24 8
Boost Clock (MHz) 2100 2100 2050 1700 2000
VRAM Speed (Gbps) 17.5 16 16 16 15.5
VRAM (GB) 16 8 8 8 6
VRAM Bus Width 256 256 256 256 96
L2 Cache (MB) 16 16 16 16 6
ROPs 128 128 128 128 32
TMUs 256 256 224 192 64
TFLOPS FP32 17.2 17.2 14.7 10.4 4.1
TFLOPS FP16 (INT8) 138 (275) 138 (275) 118 (235) 84 (167) 33 (66)
Bandwidth (GB/s) 560 512 512 512 186
TDP (watts) 225 225 225 175 75
Launch Date October 2022 October 2022 October 2022 ? June 2022
Launch Price $349 $329 $289 ? $139
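For the curious, the table's theoretical numbers can be reproduced with a couple of lines of Python (a back-of-the-envelope check, not an official formula): FP32 TFLOPS is shaders × 2 FMA ops × clock, and bandwidth is bus width × memory speed.

```python
# Reproducing the spec table's theoretical figures from unit counts
# and clocks. These are paper specs, not measured performance.

def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    # Each shader can retire one fused multiply-add (2 ops) per clock
    return shaders * 2 * boost_ghz * 1e9 / 1e12

def bandwidth_gbps(bus_width_bits: int, vram_gbps: float) -> float:
    # Bus width in bits -> bytes, times per-pin transfer rate
    return bus_width_bits / 8 * vram_gbps

# Arc A770 16GB: 4096 shaders @ 2.1 GHz, 256-bit bus @ 17.5 Gbps
print(round(fp32_tflops(4096, 2.1), 1))   # 17.2 TFLOPS
print(round(bandwidth_gbps(256, 17.5)))   # 560 GB/s

# Arc A380: 1024 shaders @ 2.0 GHz, 96-bit bus @ 15.5 Gbps
print(round(fp32_tflops(1024, 2.0), 1))   # 4.1 TFLOPS
print(round(bandwidth_gbps(96, 15.5)))    # 186 GB/s
```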

Intel’s Arc A770 looks quite potent on paper, and clearly, Intel is gunning for its share of the midrange market with aggressive pricing on the A750 and A770. Both end up competing against the Nvidia RTX 3060, with the A770 coming in at a similar theoretical price, while the A750 clearly undercuts the 3060 — though Nvidia’s GPU still tends to sell for closer to $370 at retail. On the AMD side of the fence, things are a bit different. The RX 6700 XT potentially delivers more performance at a higher $400 price point, while the RX 6650 XT still sells for around $300 (give or take).

That’s where GPU prices stand right now, at least — and we expect everything to continue to fall in the coming months, though probably not too much further before parts start getting discontinued and replaced by newer models.

In terms of specs and features, Intel’s Arc GPUs end up looking a lot more like Nvidia’s GPUs than AMD’s offerings, with the extra matrix cores and a much bigger emphasis on ray tracing hardware. Intel is also the only GPU company that currently has AV1 and VP9 hardware accelerated video encoding. AMD and Nvidia will add AV1 support to their upcoming RDNA 3 and Ada architectures, but those may not come to the midrange market any time soon. If you’re interested in AV1 and don’t want to spend more than $350, the Arc A770, A750, and A580 might be your best and only options this side of 2023.

The three midrange Arc cards deliver theoretical compute performance of 10.4 to 17.2 teraflops of FP32. (Note that Intel uses «typical» game clocks, though, in our testing, we’ve seen a lot of cases where the clock speed is far higher than the nominal 2.1 GHz listed above.) By comparison, Nvidia’s RTX 3060 offers 12.7 teraflops of graphics compute, while AMD’s RX 6650 XT sits at 10.8 teraflops. However, we’re definitely getting into the realm of apples and oranges, as — spoiler alert! — the RX 6650 XT tends to lead the RTX 3060 by 10–20 percent in traditional gaming benchmarks. Ray tracing games flip the tables, however, with the 3060 beating the 6650 XT by around 25 percent.

(Image credit: Tom’s Hardware)

Arc’s ray tracing capabilities have been a bit difficult to pin down up to now. The A380 did deliver better RT performance than the RX 6500 XT, but that’s hardly praiseworthy. With four times the cores and hardware, we’re expecting a lot more from the A770 and A750 — and Intel has even shown benchmarks where the A770 clearly beat the RTX 3060 with ray tracing enabled.

That’s a nice change of pace from AMD, which to date hasn’t done much with ray tracing and tends to downplay its importance. And to be fair, AMD has a point: the visual fidelity gains that come from enabling ray tracing are often far outweighed by the loss of performance. Especially on AMD’s GPUs.

Intel’s RTUs have dedicated BVH traversal hardware and a BVH cache that reduces memory bottlenecks, and it says each RTU can do up to 12 ray/box BVH intersections per clock, along with one ray/triangle intersection. By comparison, AMD’s RDNA 2 GPUs only do up to four ray/box intersections per clock, and that’s done using enhanced texture units, which means the hardware gets shared and ends up with some resource contention.

Nvidia’s RT cores are more like a black box that Nvidia doesn’t want to discuss in too much detail. Turing could do one ray/triangle intersection per clock, and an indeterminate number of ray/box intersections. Nvidia apparently determined the BVH side of things was running ahead of the ray/triangle hardware, so in Ampere, it added a second ray/triangle intersection unit per RT core. Looking forward, Ada doubles the ray/triangle throughput and includes other BVH enhancements, and it seems unlikely that any contemporary GPUs will match Ada in ray tracing performance. But Intel does seem to be at least relatively competitive with Ampere in terms of RT capabilities.

The A770 maxes out at 32 RTUs, which is more than the RTX 3060’s 28 RT cores but less than the RTX 3060 Ti’s 38 RT cores. Meanwhile, AMD’s RX 6650 XT has 32 ray accelerators, while the RX 6700 XT has 40 ray accelerators… but again, AMD’s ray tracing hardware appears to be the weakest of the three GPU vendors. Check our ray tracing performance results a few pages on to see where the chips fall in real-world testing.
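As a rough sketch of what those per-clock rates imply, multiplying units by tests per clock by an approximate boost clock gives peak ray/box test rates. Note the 2.6 GHz figure for the RX 6650 XT is our assumption, and real ray tracing performance depends on far more than this one number:

```python
# Peak BVH ray/box test rates implied by the per-clock figures above.
# Clocks are approximate boost clocks; this ignores memory, caches,
# shading work, and everything else that shapes real RT performance.

def ray_box_gtests(units: int, tests_per_clock: int, clock_ghz: float) -> float:
    """Billions of ray/box intersection tests per second."""
    return units * tests_per_clock * clock_ghz

# Arc A770: 32 RTUs, 12 ray/box tests per clock, ~2.1 GHz
print(round(ray_box_gtests(32, 12, 2.1)))  # ~806 G tests/s

# RX 6650 XT: 32 ray accelerators, 4 ray/box tests per clock,
# assumed ~2.6 GHz boost
print(round(ray_box_gtests(32, 4, 2.6)))   # ~333 G tests/s
```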

  • MORE: Best Graphics Cards
  • MORE: GPU Benchmarks and Hierarchy
  • MORE: All Graphics Content

Jarred Walton is a senior editor at Tom’s Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge ‘3D decelerators’ to today’s GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

Tom’s Hardware is part of Future US Inc, an international media group and leading digital publisher. Visit our corporate site .

©
Future US, Inc. Full 7th Floor, 130 West 42nd Street,
New York,
NY 10036.

AMD Radeon RX 6750 XT Review: Cool-Headed Asus ROG Strix



(Image: © Tom’s Hardware)

Tom’s Hardware Verdict

The Asus RX 6750 XT ROG Strix delivers a minor upgrade in performance to AMD’s $500-class offerings. It also comes with a higher MSRP and slightly increased power draw, with a memory speed upgrade from 16Gbps to 18Gbps. Unfortunately, the memory speed boost and other updates don’t seem to help as much as expected, and the Strix price premium is difficult to justify.

Pros
  • +

    Good performance

  • +

    Capable cooling

  • +

    Lots of RGB bling

Cons
  • Not a significant upgrade from RX 6700 XT

  • Higher cost amid dropping GPU prices

  • Next-gen GPUs are coming «soon-ish»



Along with the Radeon RX 6950 XT, AMD recently launched two other new GPUs: the Radeon RX 6750 XT and RX 6650 XT. Like the 6950 XT, both come with faster 18Gbps GDDR6 memory, plus higher GPU clocks and slightly higher power consumption. However, prices are also slightly higher than the 6×00 XT models they replace, making the overall prospects a wash at best. You can see how the newcomers rank in our GPU benchmarks hierarchy, where they’re just slightly ahead of the existing models.

AMD didn’t provide samples of its new cards, so we turned to the AIB (add-in board) partners for review units. Asus sent us its ROG Strix 6750 XT, which looks identical to the ROG Strix 6700 XT other than that little «5» on the sticker. We actually have a Strix 6700 XT in hand as well, so we’ll get to see exactly how much more performance you get from the two factory overclocked variants. Note that most of the other cards we’ve reviewed, including Nvidia’s RTX 3070 Ti and RTX 3070, run reference clocks, so you can add a few percent in performance if you’re after like-for-like comparisons.

Here’s the breakdown of the specs for the AMD Navi 22 GPUs along with Nvidia’s competing 3070 and 3070 Ti.

Graphics Card Asus RX 6750 XT RX 6750 XT RX 6700 XT RTX 3070 Ti RTX 3070
Die Size (mm^2) 336 336 336 392.5 392.5
SMs / CUs 40 40 40 48 46
GPU Shaders 2560 2560 2560 6144 5888
Tensor Cores N/A N/A N/A 192 184
Ray Tracing Units 40 40 40 48 46
Boost Clock (MHz) 2643 2600 2581 1765 1725
VRAM Speed (Gbps) 18 18 16 19 14
VRAM (GB) 12 12 12 8 8
VRAM Bus Width 192 192 192 256 256
ROPs 64 64 64 96 96
TMUs 160 160 160 192 184
TFLOPS FP32 (Boost) 13.5 13.3 13.2 21.7 20.3
Bandwidth (GBps) 432 432 384 608 448
TBP (watts) 250 250 230 290 220
Launch Date May-22 May-22 Mar-21 Jun-21 Oct-20
Official MSRP $649 $549 $479 $599 $499
Street Price $779 $539 $484 $699 $599

Asus bumps the GPU clock up by 43MHz relative to the reference 6750 XT, which, in turn, has a 19MHz «improvement» over the reference RX 6700 XT. Of course, the higher TBP (typical board power) on the new model means it may end up boosting a bit higher in practice, but we’ll get to that later. At least on paper, the main change is the switch to 18Gbps GDDR6. That’s 12.5% more bandwidth in theory, but we don’t know if other aspects of the memory like subtimings may reduce the real-world gains.
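The bandwidth math here is simple enough to show in a few lines of Python (our sketch of the arithmetic, not anything from AMD):

```python
# Bandwidth scales linearly with transfer rate on the same
# 192-bit Navi 22 memory bus.

def bandwidth_gbps(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

old = bandwidth_gbps(192, 16)   # RX 6700 XT: 384 GB/s
new = bandwidth_gbps(192, 18)   # RX 6750 XT: 432 GB/s
print(new / old - 1)            # 0.125, the quoted 12.5% uplift
```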

There’s good news in terms of the general availability of graphics cards. As we’ve noted recently, many GPUs can now be found in stock for prices close to the MSRP. Above, we’ve listed the best street prices we’ve been able to find for the various GPUs. That presents some difficulties for the Asus ROG Strix, a premium card that commands much higher prices. At the time of writing, the least expensive RX 6750 XT we could find costs $240 less than the Asus model, and you can shave off another $55 by opting for an RX 6700 XT.

It feels very much as though AMD and its partners came up with a pricing structure based on how much GPUs were selling for several months ago. In the meantime, cryptocurrency (and stock) prices plummeted, which now means the new products cost too much. Short of changes in supply or demand, we expect prices to continue to decline, and the 6750 XT really shouldn’t cost much more than the 6700 XT, which has us wondering why it even exists.

We know AMD is hard at work on its upcoming RDNA 3 architecture, and Nvidia is likewise working on its Ada architecture. We expect to see the first cards using those to arrive before the end of the year, perhaps as early as July for the RTX 40-series. Doing a relatively minor refresh with higher official prices less than six months before the next-gen cards arrive strikes us as odd. Perhaps the supply chain has finally started catching up with backorders, but there’s a real chance AMD and Nvidia could end up with a glut of «old» GPUs on their hands in the coming months, much like what happened with the RX 570 back in 2018. 




Best graphics card for gaming to be determined at the end of 2019


  • Best Graphics Card for Gaming — Radeon GPU Models
  • GIGABYTE EAGLE GTX 1600 & RTX 2000 (OC) Series
  • GeForce GPU Models

    GIGABYTE has registered a range of upcoming graphics cards with the EEC. The cards cover GPUs from both AMD and NVIDIA and will be branded under the «EAGLE» nameplate. At the moment, GIGABYTE’s high-end gaming products are under the AORUS brand, so where the EAGLE series will fit into the product stack is questionable.

    GIGABYTE EAGLE RX 5500/5600/5700(OC) series

    GIGABYTE’s EAGLE series will cover the entire RX 5000 line, including the recently released RX 5500, the announced RX 5600, and the existing RX 5700 GPU series. According to the listing from the EEC, GIGABYTE will include two variants of the 6GB RX 5600 XT, confirming the rumored 6GB of VRAM for the RX 5600 XT.

    All graphics cards will be offered in base or OC variant, similar to the existing GIGABYTE AORUS lineup, and it is not yet clear which one will take the title of best graphics card for gaming. At the moment, it is not clear if the EAGLE series should replace AORUS or if it should be located somewhere below AORUS.

    According to VideoCardz, the EAGLE line has only recently been talked about, and for board partners, the RX 5600 XT is expected to be a cost-effective solution. The release date for the RX 5600 XT is expected to be within the January period, and a launch close to or during the CES period is not an unreasonable expectation.

    Best Graphics Card for Gaming — Radeon GPU Models

    RX 5500 (XT)

    • GIGABYTE RX 5500 XT 4GB EAGLE (GV-R55XTEAGLE-4GD)
    • GIGABYTE RX 5500 XT 4GB EAGLE OC (GV-R55XTEAGLE OC-4GD)

    RX 5600 XT

    • GIGABYTE RX 5600 XT 6GB EAGLE (GV-R56XTEAGLE-6GD)
    • GIGABYTE RX 5600 XT 6GB EAGLE OC (GV-R56XTEAGLE OC-6GD)

    RX 5700 (XT)

    • GIGABYTE RX 5700 8GB EAGLE (GV-R57EAGLE-8GD)
    • GIGABYTE RX 5700 8GB EAGLE OC (GV-R57EAGLE OC-8GD)
    • GIGABYTE RX 5700 XT 8GB EAGLE (GV-R57XTEAGLE-8GD)
    • GIGABYTE RX 5700 XT 8GB EAGLE OC (GV-R57XTEAGLE OC-8GD)

    GIGABYTE EAGLE GTX 1600 & RTX 2000 Series (OC)

    GIGABYTE has also registered a number of NVIDIA GPUs, ranging from the entry-level GTX 1650 to the RTX 2080 SUPER. Similar to the RX 5000 graphics cards, both a base variant and an OC variant will be offered for each GTX and RTX graphics card.

    GeForce GPU Models

    GTX 1650 (SUPER)

    • GIGABYTE GTX 1650 4GB EAGLE (GV-N1650EAGLE-4GD)
    • GIGABYTE GTX 1650 4GB EAGLE OC (GV-N1650EAGLE OC-4GD)
    • GIGABYTE GTX 1650 SUPER 4GB EAGLE (GV-N165SEAGLE-4GD)
    • GIGABYTE GTX 1650 SUPER 4GB EAGLE OC (GV-N165SEAGLE OC-4GD)

    GTX 1660 (Ti) (SUPER)

    • GIGABYTE GTX 1660 6GB EAGLE (GV-N1660EAGLE-6GD)
    • GIGABYTE GTX 1660 6GB EAGLE OC (GV-N1660EAGLE OC-6GD)
    • GIGABYTE GTX 1660 SUPER 6GB EAGLE (GV-N166SEAGLE-6GD )
    • GIGABYTE GTX 1660 SUPER 6GB EAGLE OC (GV-N166SEAGLE OC-6GD)
    • GIGABYTE GTX 1660 Ti 6GB EAGLE (GV-N166TEAGLE-6GD)
    • GIGABYTE GTX 1660 Ti 6GB EAGLE OC (GV-N166TEAGLE OC-6GD)

    RTX 2060 (SUPER)

    • GIGABYTE RTX 2060 6GB EAGLE (GV-N2060EAGLE-6GD)
    • GIGABYTE RTX 2060 6GB EAGLE OC (GV-N2060EAGLE OC-6GD)
    • GIGABYTE RTX 2060 SUPER 8GB EAGLE (GV-N206SEAGLE-8GD)
    • GIGABYTE RTX 2060 SUPER 8GB EAGLE OC (GV-N206SEAGLE OC-8GD)

    RTX 2070 (SUPER)

    • GIGABYTE RTX 2070 8GB EAGLE (GV-N2070EAGLE-8GD)
    • GIGABYTE RTX 2070 8GB EAGLE OC (GV-N2070EAGLE OC-8GD)
    • GIGABYTE RTX 2070 SUPER 8GB EAGLE (GV-N207SEAGLE-8GD)
    • GIGABYTE RTX 2070 SUPER 8GB EAGLE OC (GV-N207SEAGLE OC-8GD)

    RTX 2080 (SUPER)

    • GIGABYTE RTX 2080 8GB EAGLE (GV-N208SEAGLE-8GD)
    • GIGABYTE RTX 2080 8GB EAGLE OC (GV-N208SEAGLE OC-8GD)
    • GIGABYTE RTX 2080 SUPER 8GB EAGLE (GV-N208SEAGLE-8GC)
    • GIGABYTE RTX 2080 SUPER 8GB EAGLE OC (GV-N208SEAGLE OC-8GC)


    The best video cards in a detailed test and comparison

    Unfortunately, the currently ubiquitous coronavirus (Covid-19) is also causing graphics card supply bottlenecks. Yet especially now, when more and more people are sitting at home in front of their PCs and spending time playing computer games, video cards are in demand. Photographers and video producers also need the right graphics card for smooth work. We test the best graphics cards currently available from Nvidia and AMD and tell you which ones are worthwhile investments for the future.


    Why do I need a graphics card?

    The graphics card is an integral part of the computer. Your screen will remain black without a graphics chip. It controls the display of the image and connects the screens. Previously, more expensive graphics cards were mainly of interest to gamers, since almost all programs benefited only from a faster processor and more RAM. Today everything is different. Many programs use more and more graphics performance. Professional video cards are now becoming more and more interesting for photographers, graphic designers and video producers.

    Which video card manufacturers are there?

    There are currently two well-known video card companies: Nvidia and AMD. For years, the graphics card market has been largely dominated by Nvidia. However, AMD graphics cards have gotten better and better over the past few years and now offer serious competition. But what are the advantages and disadvantages of the respective providers?

    AMD Radeon graphics cards

    AMD graphics cards have gotten better and better over the past few years. They are often slightly cheaper than Nvidia graphics chips. The advantage of AMD Radeon graphics cards is that they come with some nice features. AMD graphics cards can be overclocked relatively easily with options that are already included with the driver. In addition, they are usually a little more economical thanks to the ability to use the so-called «Radeon Chill» to enable adaptive frame limiting. The latter makes them all the more interesting for laptops. AMD graphics cards are also often better suited for image processing and rendering.

    Nvidia Geforce graphics cards

    Nvidia remains the undisputed king of graphics cards and has been ahead in pure performance for years. There is currently no AMD graphics card that can match the performance of the GeForce RTX 2080 Ti and TITAN RTX. Nvidia graphics cards are also the best choice if you want to use a G-Sync display. The new «ray tracing» technology is also of particular interest to gamers. Current games using this technology include Control, Metro Exodus, CoD Modern Warfare, and Battlefield. With ray tracing enabled, Nvidia graphics cards achieve maximum visual quality. Therefore, Nvidia graphics cards are currently still the best choice for all hardcore gamers who want the best of the best.

    The most important criteria when buying a graphics card

    Graphics cards are complex technology and finding the right graphics card can be difficult. However, there are a few criteria to consider when buying a video card to make the decision easier.

    • CPU and GPU compatibility
    • Computer space
    • Power consumption
    • Connections
    • Cooling
    • VRAM
    • Value for money

    CPU and GPU compatibility

    Something many people overlook: you should always consider your processor when upgrading your system or buying a new video card. The two components work in close cooperation, and it is not worth buying a video card for a thousand euros if the processor slows it down. So make sure your graphics card and processor match.

    Computer space

    Another point that is easy to overlook is the space that many video cards require. In particular, high-end graphics cards like the GeForce RTX 2080 are sometimes up to 33 centimeters long. This is more than an A4 page! Make sure you have enough space in your computer case. For example, a good case with sufficient space is the NZXT H510.

    Power consumption

    Modern graphics cards require a lot of power. If you have an old power supply, it may not be enough. Systems built around the best graphics cards in our test sometimes draw more than 500 watts. Therefore, if you want to use one of the best graphics cards, your power supply should deliver at least 600 watts.
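As a rough, hypothetical rule of thumb (our own illustration, not a formula from any vendor), you can estimate a sensible PSU rating by adding up board power figures and leaving headroom for load spikes:

```python
# Hedged PSU sizing sketch: sum the board power of the major
# components, add ~30% headroom for transients and PSU efficiency,
# and round up to a common PSU size. Figures are illustrative.

def recommended_psu_watts(gpu_tbp: int, cpu_tdp: int,
                          other: int = 75, headroom: float = 0.3) -> int:
    """Estimate a PSU rating, rounded up to the nearest 50 W."""
    total = (gpu_tbp + cpu_tdp + other) * (1 + headroom)
    return int(-(-total // 50) * 50)  # ceiling to a 50 W step

# e.g. a 250 W graphics card paired with a 125 W CPU
print(recommended_psu_watts(250, 125))  # 600
```

That example lands on the same 600-watt recommendation given above.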

    Connections

    In addition to sufficient power, video cards also require the right connectors. Cheaper models usually get by with a single 6-pin connector, while mid-range models typically need an 8-pin (or a 6-pin plus an 8-pin) connector. Modern high-end models even require two 8-pin connectors.

    Cooling

    If the graphics card gets too hot, it will slow down. A cooler graphics card provides better performance. So make sure your graphics card has enough cooling. Of course, larger and better models require more cooling.

    VRAM

    Video RAM (VRAM) these days should be at least 6GB if possible. In fact, we even recommend 8 GB and GDDR6 memory.

    Value for money

    Some people don’t care. If you want the best performance, you will of course have to pay accordingly. However, if your budget isn’t limitless, it’s a good idea to make sure the graphics cards perform well for their price. It should be said in advance that AMD often offers a lower price for the same hardware at first glance, but this is not always reflected in performance tests.

    How much does a video card cost

    How much a video card costs in the end depends on what you need. In general, AMD graphics cards are slightly cheaper, and a good budget graphics card can be found for around 150 euros. For maximum performance, however, you will spend more than 1000 euros. As you can see, the price range is relatively large, which makes it all the more important to know in advance exactly what to expect from a good graphics card.

    The best graphics cards in comparison

    If you go purely by benchmark numbers, finding the best graphics card is relatively easy. Unfortunately, that is often not very realistic. Different users expect different things from their graphics card, and budgets also differ from person to person. Therefore, instead of simply ranking the performance of video cards, we decided to present the test winners in the most important categories:

    • Best Performance
    • Best graphics card for VR
    • Best value for money
    • Best graphics card for photo editing
    • Best Budget Graphics Card
    • Best graphics card for Fortnite and Valorant
    • The best graphics card for cryptocurrency mining

    Nvidia Geforce RTX 2080 Super — Best graphics card in performance test

    Nvidia’s current flagship graphics card. The «RT» in RTX stands for real-time ray tracing, a feature that is especially suited to newer games and looks fantastic. The RTX 2080 uses 8GB of VRAM with the latest GDDR6 memory. The RTX 2080 is the successor to the GTX 1080 and delivers up to 50% more performance than the previous model. Unfortunately, performance isn’t the only thing that’s increased over the previous model: many users were very disappointed that Nvidia set the price higher than the GTX 1080 at release.

    In fact, there are two graphics cards that provide even better performance than the RTX 2080, namely the RTX 2080 Ti and TITAN RTX (T-Rex). However, the price of the RTX 2080 Ti is still well over €1,000, with the TITAN RTX even priced up to €2,700. But don’t worry: the RTX 2080 delivers more than enough performance for 99% of all users and is also a solid investment for the next few years.

    Benefits:

    • Real-time ray tracing
    • Outstanding performance
    • Current 4K games possible
    • Latest GDDR6 memory

    Cons:

    • Very expensive

    [amazon box="B07VN6D5JB"]

    Nvidia Geforce RTX 2070 is the best graphics card for VR in the test

    As briefly mentioned above, the TITAN RTX is of course theoretically the «best» graphics card for VR if you focus purely on performance. However, the RTX 2070 delivers enough power for most VR games at a significantly lower price and is therefore our test winner. If you want something a little cheaper, the RTX 2060 is also a good choice for VR.

    The RTX 2070 supports ray tracing at an affordable price and is therefore a graphics card with a future. Current Triple-A games can be played at 1440p and even 4K. What makes this card special is that the latest VR games, such as Half-Life: Alyx and The Walking Dead: Saints & Sinners, run smoothly and look great. If you prefer an AMD graphics card, our price-to-performance winner, the RX 5700 XT, also does well in VR.

    Benefits:

    • Good value for money
    • Ray tracing
    • Latest VR games run smoothly

    Cons:

    • No SLI option
    • Future VR games may not work well

    [amazon box="B07TWX22ZQ"]

    AMD RX 5700 XT is the best graphics card tested for value for money

    As expected, AMD wins our value-for-money test. The RX 5700 XT convinces with good performance at an affordable price. For those who absolutely want an Nvidia graphics card, we recommend the Zotac Gaming GeForce RTX 2060 Super Mini with 8GB of GDDR6 memory, which can also be found for around 400 euros and even supports the latest ray tracing.

    The RX 5700 XT is available in many different versions. We recommend the Sapphire Radeon RX 5700 XT Nitro+. It offers good performance and, unlike other versions of the RX 5700 XT, stays quiet even during demanding games and doesn't get excessively hot. We also really like the RGB lighting on this variant. Modern games run without problems at Full HD, and many also reach 60 frames per second at 1440p. The RX 5700 XT supports FreeSync, which dynamically synchronizes your graphics card and monitor, comparable to Nvidia's G-Sync.

    Benefits:

    • Good value for money
    • GDDR6 memory
    • Current games at 1440p with over 60 FPS

    Cons:

    • High power consumption
    • No 4K games

    [amazon box="B07WP6TYQ3"]

    Nvidia GeForce GTX 1650 — Best Graphics Card for Photo Editing in Our Test

    It's not just gamers who need a graphics card these days. Many image-editing programs, such as Adobe Lightroom and Photoshop, increasingly use the graphics chip for acceleration.

    Most users should still prioritize the processor and memory, but the graphics card should not be neglected.

    An inexpensive Nvidia GeForce GTX 1650, available from around 150 euros, is enough for image processing. The GTX 1650 is definitely an entry-level graphics card and not suitable for demanding games, but its 4 GB of video memory is plenty for photo editing. The card is also very power-efficient and relatively quiet.
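    To put the 4 GB figure in perspective, here is a back-of-the-envelope sketch of how much VRAM an uncompressed photo plus a few editing buffers might occupy. The numbers (4 bytes per pixel, three working copies) are our own illustrative assumptions, not the actual memory model of Lightroom or Photoshop:

```python
# Rough estimate of GPU memory needed to hold an uncompressed photo.
# bytes_per_pixel and working_copies are illustrative assumptions.

def image_vram_mb(width_px, height_px, bytes_per_pixel=4, working_copies=3):
    """Approximate VRAM (in MB) for one image plus editing buffers."""
    base = width_px * height_px * bytes_per_pixel
    return base * working_copies / (1024 ** 2)

# A 24-megapixel photo (6000x4000) in 8-bit RGBA with a few working buffers:
print(round(image_vram_mb(6000, 4000)))  # ~275 MB, far below 4 GB
```

    Even a large 24-megapixel photo with several working copies stays in the hundreds of megabytes, which is why 4 GB is plenty for typical photo editing.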

    Benefits:

    • Energy efficient
    • Quiet
    • Inexpensive

    Cons:

    • Only 4 GB VRAM
    • Not suitable for current Triple A titles

    [amazon box="B07QQZT36M"]

    AMD Radeon RX 570 8GB — Best Budget Card in Test

    Once again, AMD wins on price. The Radeon RX 570 is a few years old, but thanks to its 8GB of VRAM, even current games still run in Full HD at 30+ fps. Second place among budget graphics cards goes to the aforementioned Nvidia GeForce GTX 1650, which is the more convincing choice for image processing. In most games, however, the RX 570 delivers more frames per second than Nvidia's competitor. The RX 570 also impresses in terms of noise: it is only audible at full load. If you're not sure which RX 570 to pick, we recommend the Sapphire RX 570 Nitro+.

    It should also be noted that the RX 570 is a little older and is gradually being replaced by AMD with newer models. This can make finding the RX 570 difficult.

    Benefits:

    • 8GB VRAM
    • Very quiet during normal operation
    • Current games playable in Full HD

    Cons:

    • Sometimes hard to find
    • Future games will most likely not run smoothly
    • 1440p or 4K games are not possible
    • Not fast enough for VR

    [amazon box="B07MJWPSCY"]

    Nvidia GTX 1080 Ti is the best graphics card for Fortnite and Valorant

    Shooters like Fortnite and Valorant are extremely popular these days. The nice thing is that these games are often less graphically demanding than other Triple-A titles. Fortnite, Valorant, and CS:GO can even be played on very cheap graphics cards like the Nvidia GeForce GTX 1650. However, if you want to play at 1440p or 4K and 144 fps, we recommend the Nvidia GTX 1080 TI.

    The 1080 TI was the flagship graphics card of the previous generation. Its performance is still remarkably good, and its price has dropped thanks to newer cards like the RTX 2080. The specs are also impressive: 11GB of VRAM, though sadly GDDR5X rather than GDDR6. In addition to Fortnite and Valorant, current Triple-A games can also be played at 4K resolution.
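    The jump from 1080p to 1440p at 144 fps is easy to quantify with simple pixel arithmetic. This is a rough sketch of raw pixel throughput only; real GPU load also depends on graphics settings and the game engine:

```python
# Compare how many pixels per second the GPU must render at the
# resolution/refresh targets discussed above. Purely arithmetic.

def pixel_rate(width, height, fps):
    """Pixels the GPU must render per second at a given resolution and fps."""
    return width * height * fps

fhd_144 = pixel_rate(1920, 1080, 144)   # 1080p at 144 fps
qhd_144 = pixel_rate(2560, 1440, 144)   # 1440p at 144 fps
uhd_60  = pixel_rate(3840, 2160, 60)    # 4K at 60 fps

print(qhd_144 / fhd_144)  # ~1.78x the pixel throughput of 1080p/144
print(qhd_144 / uhd_60)   # ~1.07x, slightly more than even 4K/60
```

    A 1440p/144Hz target therefore demands almost twice the raw pixel throughput of 1080p/144Hz, and even slightly more than 4K at 60Hz, which is why a card of the 1080 TI's class is our recommendation here.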

    Benefits:

    • 11GB VRAM
    • Very good performance
    • 144 FPS at 1440p in Fortnite and Valorant
    • Current games run smoothly

    Cons:

    • Older GDDR5X instead of GDDR6
    • Still relatively expensive

    [amazon box="B07D184HLJ"]

    AMD Radeon VII is the best graphics card for cryptocurrency mining

    Crypto mining is still a niche topic for many users, but mining from home works with the help of graphics cards, and AMD is still considered the king of mining GPUs. The AMD Radeon VII outperforms the much more expensive Nvidia Titan V, offering a hash rate of around 90MH/s without overheating. The Radeon VII is not a classic gaming card but closer to a workstation graphics card. A few days ago, AMD introduced a professional successor, the Radeon Pro VII, which, however, costs about three times as much.

    These graphics cards can also be used for traditional gaming, but they are especially well optimized for workloads like crypto mining, and for that purpose the AMD Radeon VII sits on top.
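    As a sanity check on the quoted 90MH/s figure, simple arithmetic shows how many hashes that adds up to per day. Mining revenue is deliberately not modeled here, since it depends on network difficulty and coin price:

```python
# Back-of-the-envelope hash count for the quoted Radeon VII mining rate.
HASH_RATE = 90e6              # 90 MH/s, as quoted above
SECONDS_PER_DAY = 24 * 60 * 60

hashes_per_day = HASH_RATE * SECONDS_PER_DAY
print(f"{hashes_per_day:.3e} hashes/day")  # 7.776e+12 hashes/day
```

    Nearly eight trillion hash attempts per day, yet whether that is profitable depends entirely on electricity cost and the coin being mined, which is also why the high power consumption listed below matters.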

    Benefits:

    • Inexpensive
    • Lots of VRAM
    • High hash rate

    Cons:

    • High power consumption

    [amazon box="B07NFGDZWQ"]

    General questions about video cards

    Which video card is good and cheap?

    In our test, the AMD Radeon RX 570 and the Nvidia GeForce GTX 1650 stand out as inexpensive graphics cards with good performance.

    Which graphics card do I need for VR?

    Among current VR systems, the Oculus Rift has a relatively low entry barrier, and even the older Nvidia GTX 1080 TI can handle simple VR games. However, we recommend a graphics card like the Nvidia GeForce RTX 2070 to play current titles such as Half-Life: Alyx smoothly.

    What is the best graphics card?

    It all depends on the category you're looking at:

    • Performance: Nvidia RTX 2080 Super
    • VR: Nvidia RTX 2070
    • Value for money: AMD RX 5700 XT
    • Photo editing: Nvidia GeForce GTX 1650
    • Small budget: AMD Radeon RX 570
    • Fortnite or Valorant: Nvidia GTX 1080 TI
    • Cryptocurrency mining: AMD Radeon VII

    Which is better: Nvidia or AMD?

    Nvidia currently wins our raw performance comparison with the GeForce RTX 2080 Super. In terms of price-to-performance ratio, however, AMD is still ahead.

    Is an Nvidia graphics card compatible with an AMD processor?

    Yes. The GPU and CPU manufacturers do not need to match: an Nvidia graphics card works fine with an AMD processor (and vice versa), as long as your motherboard has a suitable PCIe slot.