NVIDIA GeForce RTX 2080 Ti: Release Date and Specs

RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA’s Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA’s GeForce RTX 20 series, which succeeds the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA’s new Turing GPU architecture and built on TSMC’s 12nm “FFN” process, the new cards embody some lofty goals: NVIDIA is looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA’s most important GPU architecture since 2006’s Tesla architecture (the G80 GPU), and from a features standpoint it’s clear that he’s not overstating matters.

As is traditionally the case, the first cards out of the NVIDIA stable are the high-end cards. But in a rather sizable break from tradition we’re not only going to get the x80 and x70 cards at launch, but the x80 Ti card as well, meaning the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 will all be hitting the streets within a month of each other. NVIDIA’s product stack remains unchanged here: the RTX 2080 Ti is their flagship card, the RTX 2080 their high-end card, and the RTX 2070 the slightly cheaper card meant to entice enthusiasts without breaking the bank.

All three cards will be launching over the next two months. First off will be the RTX 2080 Ti and RTX 2080, which will launch September 20th. The RTX 2080 Ti will start at $999 for partner cards, while the RTX 2080 will start at $699. Meanwhile the RTX 2070 will launch at some point in October, with partner cards starting at $499. On a historical basis, all of these prices are higher than the last generation by anywhere between $120 and $300. Meanwhile NVIDIA’s own reference-quality Founders Edition cards are once again back, and those will carry a $100 to $200 premium over the baseline pricing.

Unfortunately, NVIDIA is already taking pre-orders, so consumers are essentially required to make a “blind buy” if they want to snag a card from the first batch. NVIDIA has offered surprisingly little information on performance, so we’d suggest waiting for trustworthy third-party reviews (i.e. us). However, I have to admit that I don’t imagine there will be much stock left by the time reviews hit the streets.

NVIDIA GeForce Specification Comparison

| | RTX 2080 Ti | RTX 2080 | RTX 2070 | GTX 1080 |
|---|---|---|---|---|
| CUDA Cores | 4352 | 2944 | 2304 | 2560 |
| Core Clock | 1350MHz | 1515MHz | 1410MHz | 1607MHz |
| Boost Clock | 1545MHz | 1710MHz | 1620MHz | 1733MHz |
| Memory Clock | 14Gbps GDDR6 | 14Gbps GDDR6 | 14Gbps GDDR6 | 10Gbps GDDR5X |
| Memory Bus Width | 352-bit | 256-bit | 256-bit | 256-bit |
| Single Precision Perf. | 13.4 TFLOPS | 10.1 TFLOPS | 7.5 TFLOPS | 8.9 TFLOPS |
| Tensor Perf. | 440T OPs | ? | ? | N/A |
| Ray Perf. | 10 GRays/s | 8 GRays/s | 6 GRays/s | N/A |
| “RTX-OPS” | 78T | 60T | 45T | N/A |
| TDP | 250W | 215W | 175W | 180W |
| GPU | Big Turing | Unnamed Turing | Unnamed Turing | GP104 |
| Transistor Count | 18.6B | ? | ? | 7.2B |
| Architecture | Turing | Turing | Turing | Pascal |
| Manufacturing Process | TSMC 12nm “FFN” | TSMC 12nm “FFN” | TSMC 12nm “FFN” | TSMC 16nm |
| Launch Date | 09/20/2018 | 09/20/2018 | 10/2018 | 05/27/2016 |
| Launch Price | MSRP: $999, Founders $1199 | MSRP: $699, Founders $799 | MSRP: $499, Founders $599 | MSRP: $599, Founders $699 |

NVIDIA’s Turing Architecture: RT & Tensor Cores

So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA’s RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is including dedicated ray tracing hardware in Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren’t fully informed on at this time, but which serves as a dedicated ray tracing processor. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.

NVIDIA is stating that the fastest GeForce RTX part can cast 10 Billion (Giga) rays per second, which compared to the unaccelerated Pascal is a 25x improvement in ray tracing performance.

The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA’s other tool in their Turing bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course that’s not the only feature tensor cores are for – NVIDIA’s entire AI/neural networking empire is all but built on them – so while not a primary focus for the Gamescom crowd, this also confirms that NVIDIA’s most powerful neural networking hardware will be coming to a wider range of GPUs.

Looking at hybrid rendering in general, it’s interesting that despite these individual speed-ups, NVIDIA’s overall performance promises aren’t quite as extreme. All told, the company is promising a 6x performance boost versus Pascal, though it doesn’t specify against which parts. Time will tell if even this is a realistic assessment, as even with the RT cores, ray tracing in general is still quite the resource hog.

As for gaming matters in particular, the benefits of hybrid rendering are potentially significant, but they’re going to depend heavily on how developers choose to use it. From a performance standpoint I’m not sure there’s much to say here, and that’s because ray tracing & hybrid rendering are ultimately features to improve rendering quality, not the performance of today’s algorithms. Granted, if you tried to do ray tracing on today’s GPUs it would be extremely slow – and Turing would offer an incredible speedup as a result – but for this very reason no one uses slow path tracing systems on current hardware in the first place. So hybrid rendering is instead about replacing the approximations and hacks of current rasterization technology with more accurate rendering methods. In other words, less “faking it” and more “making it.”

Those quality benefits, in turn, are typically clustered around lighting, shadows, and reflections. All three are inherently based on the properties of light, which in simplistic terms moves as a ray, and which up to now various algorithms have either faked or “pre-baked” into scenes in advance. And while current algorithms are quite good, they still aren’t close to accurate, so there is clear room for improvement.

NVIDIA for their part is particularly touting global illumination, which is one of the harder lighting tasks. However there are other lighting methods that benefit as well, not to mention reflections and shadows of those lit objects. And truthfully, this is where words are a poor tool; it’s difficult to describe how a ray traced shadow looks better than a faked shadow with PCSS, or how real-time lighting beats pre-baked lighting. Which is why NVIDIA, the video card company, is going to be pushing the visual aspects of all of this harder than ever.

Overall then, hybrid rendering is the lynchpin feature of the GeForce RTX 20 series. Going by their Gamescom and SIGGRAPH presentations, it’s clear that NVIDIA has invested heavily into the field, and that they have bet the success of the GeForce brand over the coming years on this technology. RT cores and tensor cores are semi-fixed function hardware; they can’t be used for rasterization, and the transistors allocated to them are transistors that could have been dedicated to more rasterization hardware otherwise. So NVIDIA has made an incredibly significant move here in terms of opportunity cost by going the hybrid rendering route rather than building a bigger Pascal.

As a result, NVIDIA is attempting a paradigm shift in consumer rendering, one that we’ve really only seen before with the introduction of pixel and vertex shaders (DX8 & DX9 era tech) all the way back in 2001 & 2002. Which is why Microsoft’s DirectX Raytracing (DXR) initiative is so important, as are NVIDIA’s other developer and consumer initiatives. NVIDIA needs to sell consumers and developers alike on this vision of mixing rasterization with ray tracing to provide better image quality. And more than that, they need to ease developers into the idea of working with more specialized, fixed function units as Moore’s Law continues to slow down and fixed function hardware becomes a means to achieve greater efficiency.

NVIDIA hasn’t quite bet the farm on hybrid rendering, but they’ve never before attempted to move the market in this fashion. So if it seems like NVIDIA is hyper-focused on hybrid rendering and ray tracing, that’s because they are. It’s their vision of the future, and now they need to get everyone else on board.

Turing SM: Dedicated INT Cores, Unified Cache, Variable Rate Shading

Alongside the dedicated RT and tensor cores, the Turing architecture Streaming Multiprocessor (SM) itself is also learning some new tricks. In particular here, it’s inheriting one of Volta’s more novel changes, which saw the Integer cores separated out into their own blocks, as opposed to being a facet of the Floating Point CUDA cores. The advantage here – at least as much as we saw in Volta – is that it speeds up address generation and Fused Multiply Add (FMA) performance, though as with a lot of aspects of Turing, there’s likely more to it (and what it can be used for) than we’re seeing today.

The Turing SM also includes what NVIDIA is calling a “unified cache architecture.” As I’m still awaiting official SM diagrams from NVIDIA, it’s not clear if this is the same kind of unification we saw with Volta – where the L1 cache was merged with shared memory – or if NVIDIA has gone one step further. At any rate, NVIDIA is saying that it offers twice the bandwidth of the “previous generation,” though it’s unclear whether that means Pascal or Volta (the latter being more likely).

Finally, also tucked away in the SIGGRAPH Turing press release is a mention of support for variable rate shading. This is a relatively young graphics rendering technique about which there’s limited information (especially as to how exactly NVIDIA is implementing it). But at a very high level it sounds like the next generation of NVIDIA’s multi-res shading technology, which allows developers to render different areas of the screen at different effective resolutions, in order to concentrate quality (and rendering time) in the areas where it’s most beneficial.

Feeding the Beast: GDDR6 Support

As the memory used by GPUs is developed by outside companies, there are no big secrets here. JEDEC and its big three members Samsung, SK Hynix, and Micron have all been developing GDDR6 memory as the successor to both GDDR5 and GDDR5X, and NVIDIA has confirmed that Turing will support it. Depending on the manufacturer, first-generation GDDR6 is generally promoted as offering up to 16Gbps per pin of memory bandwidth, which is 2x that of NVIDIA’s late-generation GDDR5 cards, and 40% faster than NVIDIA’s most recent GDDR5X cards.

GPU Memory Math: GDDR6 vs. HBM2 vs. GDDR5X

| | RTX 2080 Ti | RTX 2080 | Titan V | Titan Xp | GTX 1080 Ti | GTX 1080 |
|---|---|---|---|---|---|---|
| Total Capacity | 11 GB | 8 GB | 12 GB | 12 GB | 11 GB | 8 GB |
| B/W Per Pin | 14 Gbps | 14 Gbps | 1.7 Gbps | 11.4 Gbps | 11 Gbps | 11 Gbps |
| Chip Capacity | 1 GB (8 Gb) | 1 GB (8 Gb) | 4 GB (32 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) |
| No. Chips/KGSDs | 11 | 8 | 3 | 12 | 11 | 8 |
| B/W Per Chip/Stack | 56 GB/s | 56 GB/s | 217.6 GB/s | 45.6 GB/s | 44 GB/s | 44 GB/s |
| Bus Width | 352-bit | 256-bit | 3072-bit | 384-bit | 352-bit | 256-bit |
| Total B/W | 616 GB/s | 448 GB/s | 652.8 GB/s | 547.7 GB/s | 484 GB/s | 352 GB/s |
| DRAM Voltage | 1.35 V | 1.35 V | 1.2 V (?) | 1.35 V | 1.35 V | 1.35 V |

Relative to GDDR5X, GDDR6 is not as big a step up as some past memory generations, as many of GDDR6’s innovations were already baked into GDDR5X. Nonetheless, alongside HBM2 for very high-end use cases, it is expected to become the backbone memory of the GPU industry. The principal changes here include lower operating voltages (1.35v), and internally the memory is now divided into two memory channels per chip. For a standard 32-bit wide chip, this means a pair of 16-bit memory channels, for a total of 16 such channels on a 256-bit card. While this means a very large number of channels, GPUs are well-positioned to take advantage of it, since they are massively parallel devices to begin with.
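The bandwidth figures in the tables above all follow from the same arithmetic: per-pin data rate times bus width, divided by eight bits per byte. A quick sketch in Python (the helper function is ours, purely illustrative; the input values come from the spec tables in this article):

```python
# Total memory bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8 bits-per-byte.
# Values taken from the spec tables above; the function name is illustrative.

def memory_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Return total memory bandwidth in GB/s."""
    return per_pin_gbps * bus_width_bits / 8

print(memory_bandwidth_gb_s(14, 352))  # RTX 2080 Ti (14 Gbps GDDR6, 352-bit): 616.0
print(memory_bandwidth_gb_s(14, 256))  # RTX 2080 (14 Gbps GDDR6, 256-bit):    448.0
print(memory_bandwidth_gb_s(11, 352))  # GTX 1080 Ti (11 Gbps GDDR5X, 352-bit): 484.0
```

The same formula reproduces every GDDR entry in the memory-math table, which is a handy sanity check when reading spec sheets.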

NVIDIA for their part has confirmed that the first GeForce RTX cards will run their GDDR6 at 14Gbps, which happens to be the fastest speed grade offered by all of the Big 3 members. We know that NVIDIA is exclusively using Samsung’s GDDR6 for their Quadro RTX cards – presumably because they need the density – however for the GeForce RTX cards the field should be open to all of the memory manufacturers. Though in the long run this leaves two avenues open to higher capacity cards: either moving up to 16Gb density chips, or going clamshell with the 8Gb chips they’re using now.

Odds & Ends: NVLink SLI, VirtualLink, & 8K HEVC

While this wasn’t mentioned in NVIDIA’s Gamescom presentation itself, NVIDIA’s GeForce 20 Series website confirms that SLI will once again be available for some high-end GeForce RTX cards. Specifically, both the RTX 2080 Ti and RTX 2080 will support SLI. Meanwhile the RTX 2070 will not, a departure from the GTX 1070, which did offer it.

However the bigger aspect of that news is that NVIDIA’s proprietary cache-coherent GPU interconnect, NVLink, will be coming to consumer cards. The GeForce RTX cards will implement SLI over NVLink, with 2 NVLink channels running between each card. At a combined 50GB/sec of full-duplex bandwidth – meaning there’s 50GB/sec of bandwidth available in each direction – this is a major upgrade over NVIDIA’s previous HB-SLI link. This is on top of NVLink’s other feature benefits, particularly cache coherence. And all of this comes at an important time, as inter-GPU bandwidth requirements keep rising with each generation.

Now the big question is whether this will reverse the ongoing decline of SLI. At the moment I’m taking a somewhat pessimistic view, but I’m eager to hear more from NVIDIA. 50GB/sec is a big improvement over HB-SLI, but it’s still only a fraction of the 448GB/sec (or more) of local memory bandwidth available to a GPU. So on its own it doesn’t fix the problems that have dogged multi-GPU rendering, either with AFR synchronization or effective workload splitting. In that respect it’s likely telling that NVIDIA doesn’t support NVLink SLI on the RTX 2070.
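To put those numbers side by side, the gap is easy to quantify (Python; the figures are the ones quoted in this article, and the variable names are ours):

```python
# Compare inter-GPU (NVLink SLI) bandwidth with local VRAM bandwidth,
# using the per-direction figures quoted in the text. Illustrative arithmetic only.

nvlink_gb_s = 50.0  # 2 NVLink channels, 50 GB/s in each direction
local_bw_gb_s = {"RTX 2080": 448.0, "RTX 2080 Ti": 616.0}

for card, bw in local_bw_gb_s.items():
    print(f"{card}: NVLink is {nvlink_gb_s / bw:.1%} of local memory bandwidth")
# RTX 2080: NVLink is 11.2% of local memory bandwidth
# RTX 2080 Ti: NVLink is 8.1% of local memory bandwidth
```

Roughly a tenth of local bandwidth, which is why shuttling frame data between GPUs remains the bottleneck it has always been.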

Meanwhile, gamers have something new to look forward to for VR with the addition of VirtualLink support. This USB Type-C alternate mode was announced last month, and carries 15W+ of power, 10Gbps of USB 3.1 Gen 2 data, and 4 lanes of DisplayPort HBR3 video, all over a single cable. In other words, it’s a DisplayPort 1.4 connection with extra data and power, intended to allow a video card to directly drive a VR headset. The standard is backed by NVIDIA, AMD, Oculus, Valve, and Microsoft, so GeForce RTX cards will be the first of what we expect will ultimately be a number of products supporting it.

USB Type-C Alternate Modes

| | VirtualLink | DisplayPort (4 Lanes) | DisplayPort (2 Lanes) | Base USB-C |
|---|---|---|---|---|
| Video Bandwidth (Raw) | 32.4Gbps | 32.4Gbps | 16.2Gbps | N/A |
| USB 3.x Data Bandwidth | 10Gbps | N/A | 10Gbps | 10Gbps + 10Gbps |
| High Speed Lane Pairs | 6 | 4 | 4 | 4 |
| Max Power | Mandatory: 15W, Optional: 27W | Optional: Up To 100W | Optional: Up To 100W | Optional: Up To 100W |

Finally, while NVIDIA only briefly touched upon the subject, we do know that their video encoder block, NVENC, has been updated for Turing. The latest iteration of NVENC specifically adds support for 8K HEVC encoding. Meanwhile NVIDIA has also been able to further tune the quality of their encoder, allowing them to achieve similar quality as before with a 25% lower video bitrate.



Nvidia RTX 2070, RTX 2080, RTX 2080 Ti GPUs revealed: specs, price, release date

Nvidia’s new high-end graphics cards are the GeForce RTX 2070, RTX 2080 and RTX 2080 Ti, the company announced today during a pre-Gamescom 2018 livestream from Cologne, Germany.

These new 20-series cards will succeed Nvidia’s current top-of-the-line GPUs, the GeForce GTX 1070, GTX 1080 and GTX 1080 Ti. While the company usually waits to launch the more powerful Ti version of a GPU, this time around, it’s releasing the RTX 2080 and RTX 2080 Ti at once.

They won’t come cheap. The Nvidia-manufactured Founders Edition versions will cost $599 for the RTX 2070, $799 for the RTX 2080 and $1,199 for the RTX 2080 Ti. The latter two cards are expected to ship “on or around” Sept. 20, while there is no estimated release date for the RTX 2070. Pre-orders are currently available for the RTX 2080 and 2080 Ti.

Nvidia CEO Jensen Huang announced different “starting at” prices during the keynote presentation. Huang’s presentation said the RTX 2070 will start at $499, the RTX 2080 at $699 and the RTX 2080 Ti at $999. Asked for clarification, an Nvidia representative told Polygon that these amounts reflect retail prices for third-party manufacturers’ cards.

You can see the base specifications for the three graphics cards below.

Nvidia RTX 2070

| Spec | RTX 2070 FE | RTX 2070 | GTX 1070 |
|---|---|---|---|
| GPU architecture | Turing | Turing | Pascal |
| Boost clock | 1710 MHz (OC) | 1620 MHz | 1683 MHz |
| Frame buffer | 8 GB GDDR6 | 8 GB GDDR6 | 8 GB GDDR5 |
| Memory speed | 14 Gbps | 14 Gbps | 8 Gbps |

Source: Nvidia

Nvidia RTX 2080

| Spec | RTX 2080 FE | RTX 2080 | GTX 1080 |
|---|---|---|---|
| GPU architecture | Turing | Turing | Pascal |
| Boost clock | 1800 MHz (OC) | 1710 MHz | 1733 MHz |
| Frame buffer | 8 GB GDDR6 | 8 GB GDDR6 | 8 GB GDDR5X |
| Memory speed | 14 Gbps | 14 Gbps | 10 Gbps |

Source: Nvidia

Nvidia RTX 2080 Ti

| Spec | RTX 2080 Ti FE | RTX 2080 Ti | GTX 1080 Ti |
|---|---|---|---|
| GPU architecture | Turing | Turing | Pascal |
| Boost clock | 1635 MHz (OC) | 1545 MHz | 1582 MHz |
| Frame buffer | 11 GB GDDR6 | 11 GB GDDR6 | 11 GB GDDR5X |
| Memory speed | 14 Gbps | 14 Gbps | 11 Gbps |

Source: Nvidia

The RTX 2070, 2080 and 2080 Ti will be the first consumer-level graphics cards based on Nvidia’s next-generation Turing architecture, which the company announced earlier this month at the SIGGRAPH computing conference. At that time, Nvidia also revealed its first Turing-based products: three GPUs in the company’s Quadro line, which is geared toward professional applications.

All three of the new RTX cards will feature built-in support for real-time ray tracing, a rendering and lighting technique for photorealistic graphics that gaming companies are starting to introduce this year. Nvidia announced a real-time ray tracing technology that it refers to as Nvidia RTX — hence the new naming scheme for the company’s upcoming GPUs — during the 2018 Game Developers Conference in March. Ray tracing is the standard for applications such as visual effects in the film industry, but it is extremely computationally intensive, which has meant that — at least until now — it has been impractical for gaming. In addition to real-time ray tracing, Nvidia’s RTX platform incorporates two existing technologies, programmable shaders and artificial intelligence.

Huang’s keynote featured a number of demonstrations and presentations to illustrate the potential of Nvidia RTX. The company itself produced an RTX tech demo, “Project Sol,” consisting of a cinematic scene rendered with real-time ray tracing on a Quadro RTX 6000:

A few game makers also appeared on stage to show off Windows PC games launching in the next six months or so that will support Nvidia RTX. EA DICE showed new footage of Battlefield 5 with RTX-based reflections; 4A Games showed RTX-based lighting in Metro Exodus; and Eidos Montreal showed RTX-based shadows in Shadow of the Tomb Raider.

Update (2:15 p.m. EDT): The prices for the Nvidia GeForce RTX 20-series cards have been updated per Nvidia CEO Jensen Huang’s Gamescom keynote.

Update 2 (3:25 p.m. EDT): We’ve updated the article with demonstrations of Nvidia RTX that the company showcased during its Gamescom event.

Update 3 (3:56 p.m. EDT): We’ve updated the article with a pricing clarification from Nvidia.

Nvidia GeForce RTX 2080 and RTX 2080 Ti release date, price, specs | PROCompy.ru

At an event on Monday in Cologne, Germany, shortly before the start of Gamescom, Nvidia finally announced what we’ve all been looking forward to: a new generation of top-end graphics cards based on the “monster” Turing architecture, with ray tracing and deep learning capabilities. In particular, Nvidia introduced three new cards: the GeForce RTX 2070, the GeForce RTX 2080 and, yes, the GeForce RTX 2080 Ti.

This is the first time Nvidia has announced a “Ti” variant at the same time as a new architecture; as a rule, Ti models come out a few months after Nvidia unveils the main product. It’s hard not to rejoice at that, but the prices, oh, these prices!


Nvidia is taking pre-orders for the GeForce RTX 2080 Ti and GeForce RTX 2080 now, and will ship cards starting September 20th. The GeForce RTX 2070 is the only one of the three graphics cards not yet available for pre-order; Nvidia has stated that it will be available in October.

Specs GeForce RTX 2080 Ti, GeForce RTX 2080, GeForce RTX 2070

What’s interesting is how Nvidia is approaching its Founders Edition cards this time around. The FE cards were essentially reference designs with Pascal, but with Turing, the FE models ship with overclocked boost frequencies. That doesn’t mean we won’t see overclocked models from Nvidia’s hardware partners. However, Nvidia’s own FE variants, which it sells on its website, carry a premium on the order of $100 over reference-priced cards. Nvidia says there are other reasons for the prices, including an improved cooling setup, but we’ll have to wait to test that ourselves.

On the GeForce RTX 2080 Ti, the overclocked boost frequency of the FE model is 1635 MHz, which is 90 MHz higher than the reference. Similarly, Nvidia has bumped up the frequencies on its GeForce RTX 2080 FE and GeForce RTX 2070 FE by 90MHz, to 1800MHz and 1710MHz respectively.

Raw specs aside, Nvidia is heavily emphasizing the ray tracing capabilities from its Turing architecture that drives these new cards. Turing is built for ray tracing, says Nvidia, and offers 10 times better ray tracing performance than Pascal. Nvidia’s RTX cards are capable of producing better lighting effects than previous generation GPUs.

The Turing architecture also includes Tensor cores that can help with deep learning applications, and Nvidia has discussed a new DLSS algorithm that can improve upscaling. Potentially, games can run at 1080p and use DLSS to approach 4K quality without the usual performance cost. With such graphics cards you may also have to choose a new monitor, as well as invest in a better processor to unlock the full potential of next-generation graphics cards.
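The appeal of rendering at 1080p and upscaling comes down to pixel counts: 4K has four times the pixels of 1080p, so shading natively at 4K costs roughly four times the work. A back-of-the-envelope sketch (the 4x ratio is plain arithmetic, not an NVIDIA figure):

```python
# Pixel-count ratio between 4K (3840x2160) and 1080p (1920x1080): a rough
# upper bound on the shading work that DLSS-style upscaling can avoid.

def pixels(width: int, height: int) -> int:
    return width * height

ratio = pixels(3840, 2160) / pixels(1920, 1080)
print(ratio)  # 4.0
```

Actual savings will be smaller in practice, since the upscaling pass itself has a cost and not all rendering work scales with resolution.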

Finally, Nvidia CEO Jensen Huang mentioned Turing’s ability to do 14 TFLOPS/TIPS of simultaneous floating point and integer calculations. Graphics cards have to do a lot of address generation when textures are loaded, and concurrent FP+INT calculation can provide serious performance benefits even in games that don’t use ray tracing. Jensen mentioned that Turing is “1.5 times faster” than Pascal, at one point apparently referring to the parallel FP and INT capabilities, and if this holds true in most games, even an RTX 2070 could match the performance of a GTX 1080 Ti. Again, that’s an “if” for now, but Turing looks very strong.

The event was also an opportunity to try some ray-traced RTX games. Metro Exodus was interesting in that it had an F1 key to switch between the traditional lighting model and the new RTX ray tracing model. In most cases the results were clearly better with RTX, although there were occasional visual glitches. Battlefield V likewise showed impressive real-time reflections from fire, explosions and the like, and the visual difference was very noticeable. RTX’s ray tracing doesn’t come across like some of Nvidia’s previous checkbox features, or like bare DirectX 12 or Vulkan support; there is an immediate and distinct difference in the rendered result with RTX enabled.

This is a good step forward for Nvidia and all graphics cards in general.


GeForce RTX 2080 Ti graphics card [in 28 benchmarks]


NVIDIA started GeForce RTX 2080 Ti sales on August 27, 2018 at a suggested price of $999. This is a top desktop card on the Turing architecture, built on a 12 nm manufacturing process and primarily aimed at gamers. It has 11 GB of GDDR6 memory at 14 Gbps effective, which, coupled with a 352-bit interface, yields a bandwidth of 616.0 GB/s.

In terms of compatibility, this is a two-slot PCIe 3.0 x16 card. The reference version is 267 mm long, requires two 8-pin supplementary power connectors, and has a power consumption of 270 W.

It provides good performance in tests and games relative to the leader, the NVIDIA GeForce RTX 4090.


General Information

Information about the type (desktop or laptop) and architecture of the GeForce RTX 2080 Ti, as well as when sales started and cost at that time.

| Parameter | Value |
|---|---|
| Performance ranking | 23 |
| Value for money | 24.59 |
| Architecture | Turing (2018-2021) |
| GPU | Turing TU102 |
| Type | Desktop |
| Release date | August 27, 2018 (4 years ago) |
| Price at release | $999 |
| Price now | $878 (0.9x), out of $168,889 (A100 PCIe 80 GB) |

Value for money

Performance to price ratio. The higher the better.


GeForce RTX 2080 Ti’s general performance parameters such as number of shaders, GPU core clock, manufacturing process, texturing and calculation speed. They indirectly speak about GeForce RTX 2080 Ti’s performance, but for an accurate assessment you have to consider its benchmark and gaming test results.

| Parameter | Value |
|---|---|
| Number of stream processors | 4352 (of 20480, Data Center GPU Max NEXT) |
| Core clock | 1350 MHz (of 2610, Radeon RX 6500 XT) |
| Boost clock | 1545 MHz (of 3599, Radeon RX 7990 XTX) |
| Number of transistors | 18,600 million (of 14400, GeForce GTX 1070 SLI Mobile) |
| Manufacturing process | 12 nm (of 4, GeForce RTX 4090) |
| Power consumption (TDP) | 270 W (of 2400, Data Center GPU Max Subsystem) |
| Texture fill rate | ? (of 969.9, H200 SXM5 96 GB) |

Compatibility and dimensions

Information on GeForce RTX 2080 Ti compatibility with other computer components. Useful for example when choosing the configuration of a future computer or to upgrade an existing one. For desktop video cards, these are the interface and connection bus (compatibility with the motherboard), the physical dimensions of the video card (compatibility with the motherboard and case), additional power connectors (compatibility with the power supply).

| Parameter | Value |
|---|---|
| Interface | PCIe 3.0 x16 |
| Length | 267 mm |
| Thickness | 2 slots |
| Supplementary power connectors | 2x 8-pin |


Memory

Parameters of memory installed on GeForce RTX 2080 Ti — type, size, bus, frequency and bandwidth. For video cards built into the processor that do not have their own memory, a shared part of the RAM is used.

| Parameter | Value |
|---|---|
| Memory type | GDDR6 |
| Maximum memory | 11 GB (of 128, Radeon Instinct MI250X) |
| Memory bus width | 352-bit (of 8192, Radeon Instinct MI250X) |
| Memory frequency | 14000 MHz (of 22400, GeForce RTX 4080) |
| Memory bandwidth | 616.0 GB/s (of 3276, Aldebaran) |
| Shared memory | - |

Video outputs

Types and number of video connectors present on GeForce RTX 2080 Ti. As a rule, this section is relevant only for desktop reference video cards, since for laptop ones the availability of certain video outputs depends on the laptop model.

| Parameter | Value |
|---|---|
| Video connectors | 1x HDMI, 3x DisplayPort, 1x USB Type-C |
| HDMI | + |
| G-SYNC support | + |


Technologies

Technology solutions and APIs supported by GeForce RTX 2080 Ti are listed here. You will need this information if your video card is required to support specific technologies.

VR Ready +

API Support

APIs supported by GeForce RTX 2080 Ti, including their versions.

| API | Version |
|---|---|
| DirectX | 12 Ultimate (12_1) |
| Shader Model | 6.5 |
| OpenGL | 4.6 |
| OpenCL | 2.0 |
| Vulkan | ? |
| CUDA | 7.5 |

Tests in benchmarks

These are the results of the GeForce RTX 2080 Ti non-gaming benchmarks for rendering performance. The overall score is set from 0 to 100, where 100 corresponds to the fastest video card at the moment.

Overall benchmark performance

This is our overall performance rating. We regularly improve our algorithms, but if you find any inconsistencies, feel free to speak up in the comments section, we usually fix problems quickly.



    This is a very common benchmark included in the Passmark PerformanceTest package. It gives the graphics card a thorough evaluation, running four separate tests for Direct3D versions 9, 10, 11 and 12 (the latter in 4K resolution if possible), plus a few more tests using DirectCompute.

    Benchmark coverage: 25%


    3DMark Vantage Performance

    3DMark Vantage is an outdated DirectX 10 benchmark. It loads the graphics card with two scenes: one of a girl escaping from a military base located in a sea cave, the other of a space fleet attacking a defenseless planet. Support for 3DMark Vantage was discontinued in April 2017, and it is now recommended to use the Time Spy benchmark instead.

    Benchmark coverage: 16%


    3DMark 11 Performance GPU

    3DMark 11 is Futuremark’s legacy DirectX 11 benchmark. It uses four tests based on two scenes: one of several submarines exploring a sunken ship, the other an abandoned temple deep in the jungle. All tests make extensive use of volumetric lighting and tessellation and, despite being run at 1280×720, are relatively heavy. Support for 3DMark 11 ended in January 2020, and it has been replaced by Time Spy.

    Benchmark coverage: 16%


    3DMark Fire Strike Score

    Benchmark coverage: 13%


    3DMark Fire Strike Graphics

    Fire Strike is a DirectX 11 benchmark for gaming PCs. It features two separate tests showing a fight between a humanoid and a fiery creature that appears to be made of lava. Running at a resolution of 1920×1080, Fire Strike shows quite realistic graphics and is quite demanding on hardware.

    Benchmark coverage: 13%


    3DMark Cloud Gate GPU

    Cloud Gate is a legacy DirectX 11 feature level 10 benchmark used to test home PCs and low-end laptops. It displays several scenes of some strange teleportation device launching spaceships into the unknown at a fixed resolution of 1280×720. As with the Ice Storm benchmark, it was deprecated in January 2020 and 3DMark Night Raid is now recommended instead.

    Benchmark coverage: 13%


    GeekBench 5 OpenCL

    Geekbench 5 is a widely used benchmark for graphics cards that combines 11 different test scenarios. All of these scenarios are based on the direct use of the processing power of the GPU, without the use of 3D rendering. This option uses the Khronos Group’s OpenCL API.

    Benchmark coverage: 9%


    3DMark Ice Storm GPU

    Ice Storm Graphics is an obsolete benchmark, part of the 3DMark package. Ice Storm has been used to measure the performance of entry-level laptops and Windows-based tablets. It uses DirectX 11 feature level 9 to render a battle between two space fleets near a frozen planet at 1280×720 resolution. Support for Ice Storm ended in January 2020, now the developers recommend using Night Raid instead.

    Benchmark coverage: 8%

    RTX 2080 Ti

    GeekBench 5 Vulkan

    Geekbench 5 is a widely used benchmark for graphics cards that combines 11 different test scenarios. All of these scenarios are based on the direct use of the processing power of the GPU, without the use of 3D rendering. This option uses the cross-platform Vulkan API developed by the Khronos Group.

    Benchmark coverage: 5%

    RTX 2080 Ti

    GeekBench 5 CUDA

    Geekbench 5 is a widely used benchmark for video cards that combines 11 different test scenarios. All of these scenarios are based on the direct use of the processing power of the GPU, without the use of 3D rendering. This option uses NVIDIA’s CUDA API.

    Benchmark coverage: 4%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 maya-04

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 sw-03

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 snx-02

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 medical-01

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 catia-04

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 creo-01

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 showcase-01

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 energy-01

    Benchmark coverage: 3%

    RTX 2080 Ti

    SPECviewperf 12 — Showcase

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Maya

    This part of the SPECviewperf 12 workstation benchmark uses the Autodesk Maya 2013 engine to render a superhero power plant with over 700,000 polygons in six different modes.

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Catia

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Solidworks

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Siemens NX

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Creo

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Medical

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — Energy

    Benchmark coverage: 2%

    RTX 2080 Ti

    SPECviewperf 12 — specvp12 3dsmax-05

    Benchmark coverage: 1%

    RTX 2080 Ti

    SPECviewperf 12 — 3ds Max

    This part of the SPECviewperf 12 benchmark emulates an Autodesk 3ds Max workload by running eleven tests covering various use cases, including architectural modeling and animation for computer games.

    Benchmark coverage: 1%

    RTX 2080 Ti

    GeForce RTX 2080 Ti in games

    FPS in popular games on the GeForce RTX 2080 Ti, along with whether the card meets each game’s system requirements. Keep in mind that developers’ official requirements do not always match real-world test results.

    Average FPS

    Here are the average FPS values for a large selection of popular games at various resolutions:

    Full HD: 121

    1440p: 121

    4K: 90

    Popular games

    Relative performance

    Overall GeForce RTX 2080 Ti performance compared to its closest desktop competitors.

    AMD Radeon RX 6800

    NVIDIA GeForce RTX 3070

    NVIDIA GeForce RTX 4060 Ti 102.18

    NVIDIA GeForce RTX 2080 Ti

    NVIDIA GeForce RTX 4060 Ti 8G

    NVIDIA RTX A4500

    AMD Radeon RX 6750 XT


    The RTX 2080 Ti is NVIDIA’s second most powerful Turing-based card (after the flagship Titan RTX). This graphics card is enough for any game at Full HD and 2560×1440, and for most games at 4K.


    Competitor from AMD

    We believe the nearest AMD equivalent to the GeForce RTX 2080 Ti is the Radeon RX 6750 XT, which is on average 3% slower and three positions lower in our rating.



    Here are some of AMD’s closest competitors to the GeForce RTX 2080 Ti:

    AMD Radeon RX 6800 XT

    NVIDIA RTX 6000 Ada Generation 107.59

    AMD Radeon RX 6800

    NVIDIA GeForce RTX 2080 Ti

    AMD Radeon RX 6750 XT

    NVIDIA RTX 4000 SFF Ada Generation

    AMD Radeon RX 6700 XT

    Other video cards

    Here are several video cards we recommend that are roughly similar in performance to the card reviewed.

    GeForce RTX 3070


    Radeon RX 6800


    GeForce RTX 3060 Ti


    GeForce RTX 3070 Ti


    Radeon RX 6700 XT


    GeForce RTX 2080 Super


    Recommended Processors

    Based on our statistics, these processors are most commonly used with the GeForce RTX 2080 Ti.