
Nvidia GeForce GTX 980 Ti Review

Verdict

Pros

  • Fantastic, consistent 4K performance
  • £250 cheaper than the Titan X
  • Reasonably cool operation

Cons

  • Overkill for 1080p and 1440p gaming

Key Specifications

  • Review Price: £550.00
  • 1,000MHz core clock
  • 1,753MHz 6GB GDDR5 memory
  • 8 billion transistors
  • 2,816 stream processors
  • Requires 1 x 6-pin and 1 x 8-pin power connectors
  • Manufacturer: Nvidia

What is the Nvidia GeForce GTX 980 Ti?

Updated: The GeForce GTX 980 Ti is the second most powerful graphics card from the last generation of Nvidia’s Maxwell graphics cards, sitting under the Titan X in terms of price and performance. With 4K ability and VR Ready certification, it’s a terrific graphics card that’s coming down in price all the time.

Since our original review of the 980 Ti, Nvidia has launched its first Pascal-based GPU, the GeForce GTX 1080. The GTX 980 Ti is one of the few GTX 1080 alternatives that actually comes off looking pretty good up against the new card. Part of this is down to its price; you can pick up a used GTX 980 Ti for £400 or a new model for £550, and prices will surely tumble. Performance in the latest games running at 4K resolutions is still excellent, especially with an aftermarket card such as the overclocked version produced by EVGA. You can see a full list of the latest benchmarks in our GTX 1080 review, where an aftermarket GTX 980 Ti is also benchmarked.

With all of that said, if 980 Ti prices don’t drop a huge amount in the coming months it will start to look like a much poorer deal, so it’s worth hunting around for the best price if you want to save some money.

Below is our original review, written in July 2015.

Nvidia GeForce GTX 980 Ti – Under the Hood

This card’s Ti suffix suggests it shares DNA with the cheaper GTX 980, but this isn’t the case – the GTX 980 Ti is built with the same GM200 Maxwell core found inside the mighty Titan X.

That means the new card’s specification is far closer to Nvidia’s top GPU than the cheaper, standard GTX 980. It has 2,816 stream processors – which is only 256 behind the Titan – and six graphics processing clusters and 22 streaming multiprocessors. That latter figure is only two short of the Titan X but six more than the GTX 980.


The recycling of the GM200 core means the GTX 980 Ti also has 8 billion transistors and a 601mm² die – almost twice the size of the chip inside the GTX 980.

The GTX 980 Ti’s stock and boost clocks of 1,000MHz and 1,075MHz are the same as those found in the Titan, although this is one area where the 1,126MHz GTX 980 pulls ahead.

However, there’s one area in which the GTX 980 Ti falls behind the more expensive Titan X: memory. Nvidia’s barnstorming Titan X has 12GB of GDDR5 RAM, but the GTX 980 Ti makes do with half that amount. It’s still clocked to the same speed of 1,753MHz, and it’s still plenty – most cards don’t even have close to 6GB, let alone double that number.

At extremes it may make a difference, but it’s unlikely to be noticeable. The extra memory is more for use in high-end computing applications, whereas the 980 Ti is more suited to gaming.

In practical terms, then, expect performance closer to the Titan X. On paper, this makes the 980 Ti seem like a bargain at £550: far closer to the £400 GTX 980 than the Titan X, which usually retails for more than £800.

On the outside, the GTX 980 Ti and Titan X are similar too. Both have Nvidia’s swish aluminium cooler design, and both require single six- and eight-pin power connectors.

The GTX 980 Ti demands a more sizeable case: Nvidia’s reference model is 267mm long. In contrast, the AMD Fury X is much shorter since it uses a separate liquid cooler.


AMD’s older range doesn’t have anything that can compete with the GTX 980 Ti, with its nearest competitor falling well short of even the GTX 980. However, the new R9 series is expected to put up more of a fight.

The closest challenger will be the AMD Radeon R9 Fury X, which will cost around £510. It’s set to be an intriguing battle: the AMD card has only 4GB of RAM, but it has more stream processors and transistors than Nvidia’s card, and a faster core too.

Nvidia GeForce GTX 980 Ti – How We Tested

We’ve loaded five games for this GPU test. Battlefield 4, BioShock Infinite and Crysis 3 all return from our previous reviews, and we’ve added Metro: Last Light and Batman: Arkham Origins to the mix. We’ve tested at 2,560 x 1,440 and 3,840 x 2,160 to see how the GTX 980 Ti will handle high-resolution single screens. We haven’t tested at 1080p, as we know this card is powerful enough to blast through any game at that lower resolution.


We’ve used 3DMark’s Fire Strike test and four Unigine Heaven benchmarks to test theoretical performance, plus idle and load temperatures and power requirements have been taken to see which card is the coolest and most frugal.

Our test rig consists of an Asus X79-Deluxe motherboard, Intel Core i7-4960X processor, 16GB of RAM and a 1TB hard disk.

For prices, we visited the Scan website and noted down the cheapest stock-speed card we could find, although we will be referring to various overclocked and tweaked models available for each GPU – which will be more expensive – later on in the review.

Mike has worked as a technology journalist for more than a decade, writing for most of the UK’s most well-known websites and magazines. During his time writing about technology he’s developed obsessio…


Gigabyte GeForce GTX 980 Ti Xtreme Gaming Windforce Review — Tom’s Hardware


Tom’s Hardware is supported by its audience. When you purchase through links on our site, we may earn an affiliate commission. Here’s why you can trust us.

Gigabyte’s GeForce GTX 980 Ti Xtreme Gaming might just be the fastest single-GPU graphics card we’ve ever tested. It features a custom PCB, binned and overclocked processor, and robust cooling.

Early Verdict

Gigabyte’s GTX 980 Ti Xtreme Gaming is staggeringly powerful. It’s the first graphics card we’ve tested that delivers a satisfying gaming experience at 3840×2160. It’s easily the fastest single-GPU board to pass through our lab.


Introduction And Product 360

Nvidia’s reference GeForce GTX 980 Ti is already one of the fastest boards on the market. With the added benefits of a factory-overclocked binned GPU, a custom PCA and high-end cooling, Gigabyte’s GeForce GTX 980 Ti Xtreme Gaming promises even more performance.

Nvidia launched its gaming flagship last May to rave reviews, particularly from those who were dismayed by the Titan X’s comparable performance and much higher price tag. In fact, much of the time, Nvidia’s GTX 980 Ti is faster than the mighty Titan X.

That’s interesting because the 980 Ti employs a slightly cut-down version of the GM200 GPU (it sports 2816, rather than 3072 CUDA cores). Further, Nvidia specifies that the 980 Ti includes 6GB of GDDR5 rather than the 12GB found on the Titan X. If you want the complete comparison, check out our Nvidia GeForce GTX 980 Ti 6GB Review.

The 980 Ti has a base core clock rate of 1000MHz. Nvidia’s board partners have some freedom to adjust that frequency, and Gigabyte really pushes the envelope. Its GeForce GTX 980 Ti Xtreme Gaming ships with the core cranked up to 1216MHz and GPU Boost rated for 1317MHz. Gigabyte also tunes the board’s memory, pushing Nvidia’s 7 GT/s transfer rate to 7.2 GT/s. As a result, the company says it achieves up to 33 percent more performance than the reference version.
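As a sanity check on those figures, the factory uplift works out to roughly 22 percent on the core clocks and under 3 percent on the memory; the "up to 33 percent" figure is Gigabyte's own performance claim rather than a clock ratio. A minimal sketch of the arithmetic, using only the clocks quoted above:

```python
# Factory overclock of the Gigabyte GTX 980 Ti Xtreme Gaming versus
# Nvidia's reference clocks (figures as quoted in the review).
ref = {"base": 1000, "boost": 1075, "mem": 7.0}      # MHz, MHz, GT/s
xtreme = {"base": 1216, "boost": 1317, "mem": 7.2}

def uplift_pct(new, old):
    """Percentage increase of new over old."""
    return (new - old) / old * 100

for key in ref:
    print(f"{key}: +{uplift_pct(xtreme[key], ref[key]):.1f}%")
# base: +21.6%, boost: +22.5%, mem: +2.9%
```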


Product 360

The GPUs used in Gigabyte’s GTX 980 Ti Xtreme Gaming are specially selected; each one goes through the company’s GPU Gauntlet sorting process. Additionally, the GeForce GTX 980 Ti Xtreme Gaming employs Gigabyte’s Ultra Durable VGA technology. Its PCB has a special coating that Gigabyte says protects against damage from moisture, dust and corrosion. This board further benefits from the same high-grade chokes and capacitors found on the GeForce GTX Titan X. A 12+2-phase power design helps maintain proper load balancing.

Enthusiasts familiar with Gigabyte’s graphics portfolio will notice that the GeForce GTX 980 Ti Xtreme Gaming looks like some of the company’s other cards. The big difference, of course, is that its thermal solution is quite a bit larger than most of the other boards with WindForce 3X coolers.

Overclocking a large, hot GPU necessitates a capable heat sink, and Gigabyte’s starts with a copper plate that directly contacts the memory modules and processor. Heat pipes draw energy away from the plate and into two sets of vertically oriented aluminum fins.

Five 12mm copper heat pipes meet right above the GPU in a tight cluster, and then spread out into the second set of fins. The area above the GPU also benefits from two 8mm U-shaped copper pipes that improve the rate at which thermal energy moves away from the sensitive electronics.

All of that copper contributes to the GeForce GTX 980 Ti Xtreme Gaming’s 1347-gram weight. This isn’t the heaviest graphics card we’ve ever tested, but it’s up there.

Copper and aluminum are great materials for dissipating heat, but they’d quickly be saturated without cool air moving across the card’s surfaces. Gigabyte’s WindForce 3X triple-fan cooler is found on many of the company’s boards, but Gigabyte made a change to the center fan for this particular model.

Notice that the middle fan’s blades are pitched in the opposite direction to those of the left and right fans. The center fan therefore spins the other way, yet moves air in the same direction as the fans on either side of it.

Gigabyte also adds RGB LED light rings behind each fan, which you can control through the bundled OC Guru II software.

On some of the lower-end WindForce-equipped products, such as the GTX 950 Xtreme Gaming, the heat sink shroud is made of plastic. Gigabyte springs for higher-quality material on its GeForce GTX 980 Ti, building the shroud out of 2mm-thick brushed aluminum. It’s painted mostly black, but there’s a silver stripe across the bottom-rear fan. The silver parts are additional pieces of metal that get glued on.

The top edge of the shroud features Gigabyte’s WindForce logo, which is lit by LEDs set to the same color as the fan rings. There’s also a light to indicate when the fans stop spinning.

If you measure the GeForce GTX 980 Ti Xtreme Gaming from its I/O bracket to the edge of its exoskeleton-like shroud, the card is 11.5 inches long. From top to bottom, it’s 4.75 inches tall. The heat sink and shroud actually extend beyond the length and height of the PCB. Without them, the circuit board would only measure 10.5 by 4.25 inches.

Flip the board over and you’ll find an aluminum back plate covering the PCA.

Most of Nvidia’s cards support multi-card configurations, and the GeForce GTX 980 Ti lets you connect as many as four boards together for improved performance. Gigabyte’s GTX 980 Ti Xtreme Gaming has the two interconnects you’d need to enable such a setup. But because its PCB is taller than reference, you’d likely need a flexible link cable to connect 980 Tis from another board partner.

You’ll need a power supply with plenty of output, as suggested by the pair of eight-pin auxiliary inputs. Gigabyte installs its connectors recessed somewhat into the heat sink, but the latches face outward, making it easier to remove the cables. Both inputs have a corresponding indicator light just below the latch. These LEDs stay lit when the power source is stable; they flash to alert you of a problem.

There’s an additional six-pin power connector on the back of the card, along with a button marked Xtreme. Pressing the button toggles LN2 mode and activates the extra plug. Unless you’re using extreme cooling, avoid messing with this setting.

Gigabyte’s GeForce GTX 980 Ti Xtreme Gaming has all of the outputs you’d expect, including three full-sized DisplayPort connectors, HDMI 2.0 and one DVI-I port. From them, you can drive up to four monitors at a time.

Gigabyte clearly dedicated some effort to ensuring its heavy board doesn’t get damaged in shipping. There’s over an inch of closed-cell foam protecting all sides of the card from impact. Beneath the graphics card, you’ll find a driver disc and setup guide. You also get a Y-connector that takes two six-pin PCIe cables and creates a single eight-pin connector in case your power supply doesn’t have the requisite leads.

High-end graphics cards sometimes come bundled with extras that add value, and Gigabyte’s GeForce GTX 980 Ti Xtreme Gaming card is no exception. Apparently, the company expects its customers to take part in some pretty intense action, because Gigabyte includes an Xtreme Gaming sweatband in the box.


 Kevin Carbotte is a contributing writer for Tom’s Hardware who primarily covers VR and AR hardware. He has been writing for us for more than four years. 

Tom’s Hardware is part of Future US Inc, an international media group and leading digital publisher. Visit our corporate site .

©
Future US, Inc. Full 7th Floor, 130 West 42nd Street,
New York,
NY 10036.

Nvidia GeForce GTX 980 Ti 6GB Review — Tom’s Hardware



Introduction

Fewer than three months have passed since Nvidia took the wraps off of GeForce GTX Titan X, and the company is already launching another GM200-based graphics card called GeForce GTX 980 Ti. It’s about $400 cheaper than the flagship’s street price. Yet, we’re told it only gives up a few percentage points of performance. Is there still a reason to lust after the Titan X? Could you, in good conscience, spend $500 on the 980 knowing that this monster exists (yes, the 980 is dropping $50, according to Nvidia)? Is this move preempting AMD’s upcoming ultra-high-end Fiji unveiling?

Any answer to that last question would be purely speculative. But we weren’t expecting to see a Titan X derivative so soon. Nvidia introduced its original GeForce GTX Titan in February of 2013 and followed up nine months later with GeForce GTX 780 Ti, also based on the GK110 GPU. Those cards were decidedly not built for the same customers. The Titan had one of its SMX clusters turned off, a then-unprecedented 6GB of memory and a GPU equally adept at 3D and double-precision math. Meanwhile, the 780 Ti featured a full 2880 CUDA cores and 240 texture units for graphics supremacy, higher clock rates and a $300-lower price tag. Most gamers with money to spend had little trouble choosing 780 Ti over the Titan.

Unfortunately, there was also a good reason to ding it: Nvidia armed GeForce GTX 780 Ti with 3GB of memory, and the rumored 6GB models never materialized. Two years ago, that was fine for 2560×1440. And 4K screens weren’t really “a thing” yet; those that did exist were $3000+ affairs. We did, however, figure out that 3GB wasn’t enough RAM to game smoothly on a trio of QHD displays (>11 million pixels). Later, we also ran into situations where 4K (>8 million pixels) was held back by the card’s available memory.

Today’s monitor market looks nothing like it did then. Ultra HD screens start under $500. Nvidia’s G-Sync variable refresh rate technology is almost 18 months old. And AMD’s FreeSync equivalent is gaining momentum as well. We have to assume that anyone shopping for a high-end graphics card in 2015 is at least considering an upgrade to 4K.

Tweaking GM200 For GeForce GTX 980 Ti

Nvidia knows where the display market is heading, and it isn’t about to shortchange this generation’s Titan-derivative in the memory department. Beyond adding more on-board GDDR5 than 780 Ti, the company’s Maxwell architecture utilizes available bandwidth to greater effect—something we first observed last February from GeForce GTX 750 Ti and its GM107 GPU. GM200 is built even more robustly than that early implementation of Maxwell. Each of its SMMs sports 96KB of shared memory and a 48KB texture/L1 cache, while a large 3MB L2 cache minimizes requests made to DRAM as much as possible. All of those hardware-oriented changes, combined with new color compression schemes, make playable performance at 4K a more realistic goal for certain single-GPU systems.

That’s the good news. But because Nvidia’s GeForce GTX Titan X already features a fully-enabled GM200 processor, there’s really no way to make the 980 Ti faster. This creates a bit of an issue for differentiating two high-end cards based on the same ASIC.

How about characterizing their strengths in compute-oriented workloads? Last generation, the Titan was capable of around 1.5 TFLOPS of double-precision math. Nvidia artificially dialed the 780 Ti to 1/8 of that, or roughly 210 GFLOPS, creating a nice split between them. But the same option isn’t available today, since GM200 gives up its compute potential altogether in favor of efficient gaming. As a result, the Titan X and 980 Ti are both limited to native FP64 rates of 1/32.
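To put those ratios in perspective, here is the back-of-the-envelope peak-throughput arithmetic (a sketch only: peak FP32 is taken as 2 FMA ops × cores × clock, and the 837MHz base clock for the original Titan is our assumption, not a figure from this review):

```python
# Rough peak-FLOPS arithmetic behind the FP64 ratios discussed above.
def peak_fp32_tflops(cuda_cores, clock_ghz):
    """Peak single-precision TFLOPS: 2 ops per FMA, per core, per cycle."""
    return 2 * cuda_cores * clock_ghz / 1000

# Original GK110 Titan: 2688 cores at an (assumed) 837MHz base, FP64 at 1/3 rate
titan_fp64 = peak_fp32_tflops(2688, 0.837) / 3       # ~1.5 TFLOPS
# GM200-based GTX 980 Ti: 2816 cores at 1000MHz, FP64 capped at 1/32 rate
gtx980ti_fp64 = peak_fp32_tflops(2816, 1.000) / 32   # ~0.18 TFLOPS

print(round(titan_fp64, 2), round(gtx980ti_fp64, 2))
```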

So, with Titan X already out there, selling for more than $1000, the company’s only option seemed to be a surgical incision, trimming away some of GM200’s resources and creating a GeForce GTX 980 Ti that’s slightly less potent than Titan X, but more compelling than GeForce GTX 980 (and a big upgrade over 780 Ti).


At least the haircut isn’t dramatic. We’re still looking at GM200 and its six Graphics Processing Clusters. Only, across that sextet, two Streaming Multiprocessors are disabled. With 128 CUDA cores per SMM, you’re down 256, yielding a total of 2816 cores across the processor. Similarly, the loss of eight texture units per SMM results in a GPU with 176 (instead of 192).
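The arithmetic above can be sketched directly (24 SMMs follows from six GPCs at four SMMs apiece, the full GM200 configuration of the Titan X):

```python
# GM200 resource count for the GTX 980 Ti, per the figures above:
# 6 GPCs x 4 SMMs each = 24 SMMs, two of which are fused off.
GPCS = 6
SMMS_PER_GPC = 4
DISABLED_SMMS = 2
CORES_PER_SMM = 128
TEX_UNITS_PER_SMM = 8

active_smms = GPCS * SMMS_PER_GPC - DISABLED_SMMS
print(active_smms * CORES_PER_SMM)       # 2816 CUDA cores
print(active_smms * TEX_UNITS_PER_SMM)   # 176 texture units
```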

You might guess that fusing off ~8% of GM200’s shader and texturing resources would result in a corresponding performance drop in games bound by those parts of the graphics pipeline. But Nvidia claims that the difference between GeForce GTX Titan X and 980 Ti is minor.

The company doesn’t seem to be worried. It isn’t trying to compensate with higher clock rates—GeForce GTX 980 Ti is marketed at the same 1000MHz base and 1075MHz GPU Boost clock rates as Titan X. And the GPU’s back-end doesn’t change either. From our Titan X story:

“GeForce GTX 980’s four ROP partitions grow to six in (GeForce GTX 980 Ti). With 16 units each, that’s up to 96 32-bit integer pixels per clock. The ROP partitions are aligned with 512KB slices of L2 cache, totaling 3MB in GM200. When it introduced GeForce GTX 750 Ti, Nvidia talked about a big L2 as a mechanism for preventing bottlenecks on a relatively narrow 128-bit memory interface. That’s not as big of a concern with GM200, given its 384-bit path populated by 7 Gb/s memory. Maximum throughput of 336.5 GB/s matches the GeForce GTX 780 Ti, and exceeds GeForce GTX Titan, GeForce GTX 980 and Radeon R9 290X.”

Whereas the Titan X sports 12GB of GDDR5 memory, though, the GeForce GTX 980 Ti comes with 6GB at the same 7 Gb/s. That’s hardly a compromise, we’d say. Six gigabytes is plenty for 4K or three QHD screens in Surround. Don’t expect to see 12GB versions down the road, either. Nvidia doesn’t plan to chew into Titan X sales with a beefed-up 980 Ti.



NVIDIA GeForce GTX 980 Ti review and testing — i2HARD


NVIDIA has once again indulged its fans: now they have something with which to comfortably play the latest games. This spring, the GeForce GTX 980 Ti video card was introduced, based on a powerful Maxwell graphics chip. The new GM200 graphics accelerator boasts 2816 stream processors and 176 texture units. The base frequency of the GPU is 1000 MHz, and the effective memory frequency is 7010 MHz. The GDDR5 video memory amounts to an impressive 6 GB, and memory bandwidth is high too: it works on a 384-bit bus.

In addition, the new product fully supports the new DirectX 12, all the delights of which we will see soon. And in already-released games such as Far Cry 4 and The Witcher 3, GeForce GTX 980 Ti users will be able to experience the full set of NVIDIA GameWorks effects: advanced simulation of dynamic fur and cloth, interactive smoke and an improved particle system.

NVIDIA provided our editorial office with a reference sample of the new video card, and we were happy to test its capabilities for you in various games and synthetic benchmarks.

Specification

  • Graphics Processing Clusters: 6
  • Streaming Multiprocessors: 22
  • Stream processors: 2816
  • Texture units: 176
  • Render output units (ROPs): 96
  • Base GPU frequency: 1000 MHz
  • Boost Clock: 1075 MHz
  • Memory frequency: 3505 MHz
  • L2 cache: 3 MB
  • Memory data rate: 7 Gb/s
  • Memory size: 6144 MB GDDR5
  • Memory bus: 384-bit
  • Process technology: 28 nm
  • Number of transistors: 8000 million
  • Interfaces: 3 x DisplayPort, 1 x HDMI, 1 x Dual-Link DVI
  • Form factor: dual slot
  • Supported DirectX version: 12.1
  • Power connectors: one 8-pin and one 6-pin
  • Wattage: 250 W
  • Recommended PSU: 600 W
  • Maximum GPU temperature: 92°C

Design

Appearance

The reference design of the new card is identical to the flagships of previous years. The shroud completely covers the entire video card. The silver plastic with black inserts looks solid, and there is a transparent tinted window through which you can see the radiator. There are lots of screws requiring a non-standard hex screwdriver. The length of the video card is only 270 mm.

On the side, in the center, is the GeForce GTX inscription with green illumination. Using the driver, you can customize the lighting, right down to its color, and even have it react to music.

Along the edges of the inscription are the interfaces for connecting two SLI bridges and the two connectors for additional power: one 6-pin and one 8-pin.

The video connector interfaces are located in two rows: one DVI at the bottom, three DisplayPort and one HDMI at the top.

The reverse side shows us the bare PCB, not covered by a metal plate. Here we see a lot of screws; unscrewing them lets us remove the entire cooling system.

Cooling system

At the far end is an air intake grille: the blower fan pushes air through the entire length of the shroud, cooling the radiators, before it is exhausted through the ventilation grilles in the front panel.

On the bottom of the radiator we see thermal pads through which heat is removed from transistors and memory chips. A nickel-plated copper base of a separate heatsink contacts the GPU die.

PCB

The PCB houses the GM200-310-A1 processor. It has a protective metal frame around it.

Twelve SK Hynix H5GQ4H34MFR-R2C memory chips are soldered around the perimeter of the video processor.

There are six phases to power the processor and two phases to power the memory.

The power controller is assembled on a small separate board. There are also two connectors: one for the fan and one for the LED backlight.

First, let’s test the capabilities of the cooling system. We load the video card with FurMark v1.10.2; the readings are recorded by GPU-Z. Testing was carried out in an open case, so the case fans did not affect the cooling of the video card.

At idle, the temperature was 47°C at a fan speed of 1,050 rpm, at which point the card was not audible at all; the overall noise level was 37 dB. As the temperature rises, the fan speed increases and so does the noise from the cooling system.

Up to 80°C, the fan speed stayed below 2,000 rpm. At these levels the video card is very quiet, with a noise level of no more than 45 dB.

Under load, the temperature rose to 85°C and the fan speed increased to 2,618 rpm, with noise rising to 50 dB. The temperature does not rise above this value: the cooling system keeps it within these limits by increasing the fan speed.

Next, let’s test the capabilities of the video card in overclocking. To do this, we will use the EVGA PrecisionX 16 utility.

A software voltage mod is available, but we did not use it: the cooling of the reference solution leaves much to be desired. We were able to raise the GPU frequency to 1,250 MHz, with Boost Clock pushing it as high as 1,452 MHz in some games. The memory frequency was raised to 1,964 MHz. For more efficient cooling during overclocking, the video card’s fan speed was fixed at 3,000 rpm, which kept the video processor below 80°C even with these aggressive settings.
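Expressed as percentage gains, the overclock above looks like this (a quick sketch; GDDR5's effective data rate is four times the real memory clock):

```python
# Overclocking gains from the results above.
core_stock, core_oc = 1000, 1250   # MHz
mem_stock, mem_oc = 1753, 1964     # MHz (real memory clock)

print(f"Core: +{(core_oc - core_stock) / core_stock:.0%}")   # +25%
print(f"Memory: +{(mem_oc - mem_stock) / mem_stock:.0%}")    # +12%
print(f"Effective memory rate: {mem_oc * 4} MT/s")           # 7856 MT/s
```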

Synthetic tests

These tests are interesting because they allow you to compare different system builds. By running these benchmarks on your own gaming computer, you can compare the numbers and decide whether you need to change your video card for something more powerful, or leave everything as it is.

3DMark packages first. All default settings.

3DMark 11:

This test did not complete at the increased frequencies: the levels would not load, producing an error.

3DMark – Fire Strike:

With the video core frequency increased, this test demonstrates a significant gain in scores: the increase was 20%.

Now Unigine tests:

Unigine Valley Benchmark 1.0

In this test, the average fps increased, while the minimum and maximum remained unchanged.

Unigine Heaven Benchmark 4.0

This test responds better to video card overclocking — an increase of about 25%.

Gaming tests

Potential buyers are most concerned about the video card’s performance in games; after all, that is what this video card is bought for. Game testing was carried out on a monitor with a resolution of 2560×1080. The quality settings were set to the highest possible (you can see them in the screenshots). V-sync was disabled to demonstrate maximum fps. The results were recorded with Fraps. For some games, for clarity, there are videos of the gameplay recorded with ShadowPlay, the capture utility included with GeForce Experience.

The videos were recorded with the MSI Afterburner overlay visible, which clearly shows not only fps but also the load on the video core and video memory, and their frequencies.

GTA 5

The Witcher 3 Wild Hunt


Far Cry 4

Watch Dogs

Assassin’s Creed Rogue

Call of Duty: Advanced Warfare

Need for Speed Rivals

As we can see from the tests, the GeForce GTX 980 Ti graphics card copes without problems with all games at the highest settings. This is especially noticeable in the minimum frames per second. At the same time, it demonstrates an excellent, realistic picture in the latest games. The performance of the GTX 980 Ti will be sufficient for both 4K resolution and multi-monitor configurations, and its large amount of memory is designed for just that. The games we tested didn’t use much video memory: 3,700-4,200 MB at most.

Conclusion

This Maxwell-based graphics accelerator demonstrates impressive performance today. Only the Titan X is more powerful, but it also costs a lot more. The GeForce GTX 980 Ti has good overclocking potential, but overclocking enthusiasts should wait for the release of video cards with non-reference coolers. It remains to wait for the release of Windows 10 and games with DirectX 12.1 to enjoy the realistic picture that the new product from NVIDIA can draw.

This review did not compare the new card with its competitors or with the flagships of past years. But judging by the tests, the GTX 980 Ti outperforms its predecessor, the GTX 980, by 20-30%, and its advantage over the Radeon R9 290X is simply unrivalled. Let’s wait for the release of a full-fledged rival in the shape of the Radeon R9 Fury X or Radeon R9 390X; then a proper comparison will be possible.

Pros

  • Excellent gaming performance
  • Large memory
  • Good overclocking potential
  • Full DirectX 12. 1 support
  • Compact chassis

Cons

  • Noisy cooler under load

GeForce GTX 980 Ti [in 14 benchmarks]
NVIDIA

GeForce GTX 980 Ti

  • PCIe 3.0 x16 interface
  • Core frequency: 1000 MHz
  • Video memory size: 6 GB
  • Memory type: GDDR5
  • Memory data rate: 7.0 Gb/s
  • Maximum resolution

Description

NVIDIA started GeForce GTX 980 Ti sales on June 2, 2015 at a suggested price of $649. This is a desktop graphics card based on the Maxwell architecture and a 28 nm manufacturing process, primarily aimed at gamers. It has 6 GB of GDDR5 memory at 7.0 Gb/s, and coupled with a 384-bit interface this creates a bandwidth of 336.5 GB/s.
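The bandwidth figure follows directly from the bus width and per-pin data rate (a minimal sketch of the arithmetic):

```python
# Memory bandwidth = bus width in bytes x per-pin data rate.
def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    return bus_width_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(384, 7.0))             # 336.0 GB/s nominal
# The quoted 336.5 GB/s comes from the exact effective rate of ~7.01 GT/s
print(round(bandwidth_gb_s(384, 7.01), 1))  # 336.5
```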

In terms of compatibility, this is a two-slot PCIe 3.0 x16 card. The length of the reference version is 26.7 cm. An additional 6-pin + 8-pin power cable is required for connection, and the power consumption is 250 W.

It provides good performance in tests and games at 46.73% of the level of the leader, the NVIDIA GeForce RTX 3090 Ti.



Value for money

To obtain an index, we compare the characteristics of video cards and their cost, taking into account the cost of other cards.


Features

GeForce GTX 980 Ti’s general performance parameters, such as number of shaders, GPU core clock, manufacturing process, and texturing and calculation speed, indirectly indicate its performance. Supported technologies include GeForce ShadowPlay, GPU Boost 2.0 and GameWorks.

API Support

APIs supported by GeForce GTX 980 Ti, including their versions.

DirectX 12

Benchmark tests

These are the results of GeForce GTX 980 Ti non-gaming benchmarks for rendering performance. The overall score is set from 0 to 100, where 100 corresponds to the fastest video card at the moment.


Overall benchmark performance

This is our overall performance rating. We regularly improve our algorithms, but if you find any inconsistencies, feel free to speak up in the comments section, we usually fix problems quickly.

GTX 980 Ti
46.73

  • Passmark
  • 3DMark 11 Performance GPU
  • 3DMark Vantage Performance
  • 3DMark Cloud Gate GPU
  • 3DMark Fire Strike Score
  • 3DMark Fire Strike Graphics
  • GeekBench 5 OpenCL
  • 3DMark Ice Storm GPU
  • GeekBench 5 Vulcan
  • GeekBench 5 CUDA
  • Octane Render OctaneBench
  • SPECviewperf 12 — Showcase
  • SPECviewperf 12 — Maya
  • Unigine Heaven 4.0
Passmark

This is a very common benchmark, included in the Passmark PerformanceTest package. It gives the graphics card a thorough evaluation, running four separate tests for Direct3D versions 9, 10, 11 and 12 (the last at 4K resolution where possible), plus a few more tests using DirectCompute.

Benchmark coverage: 26%

GTX 980 Ti
13853

3DMark 11 Performance GPU

3DMark 11 is Futuremark's legacy DirectX 11 benchmark. It runs four tests based on two scenes: in one, several submarines explore a sunken ship; in the other, an abandoned temple sits deep in the jungle. All tests make extensive use of volumetric lighting and tessellation and, despite running at 1280×720, are relatively heavy. Support for 3DMark 11 ended in January 2020; it has been superseded by Time Spy.

Benchmark coverage: 17%

GTX 980 Ti
23057

3DMark Vantage Performance

3DMark Vantage is an outdated DirectX 10 benchmark. It loads the graphics card with two scenes: in one, a girl flees a military base located in a sea cave; in the other, a space fleet attacks a defenseless planet. Support for 3DMark Vantage was discontinued in April 2017, and the Time Spy benchmark is now recommended instead.

Benchmark coverage: 17%

GTX 980 Ti
48631

3DMark Cloud Gate GPU

Cloud Gate is a legacy DirectX 11 feature level 10 benchmark used to test home PCs and low-end laptops. It displays several scenes of some strange teleportation device launching spaceships into the unknown at a fixed resolution of 1280×720. As with the Ice Storm benchmark, it was deprecated in January 2020 and 3DMark Night Raid is now recommended instead.

Benchmark coverage: 14%

GTX 980 Ti
98958

3DMark Fire Strike Score

Benchmark coverage: 14%

GTX 980 Ti
14339

3DMark Fire Strike Graphics

Fire Strike is a DirectX 11 benchmark for gaming PCs. It features two separate tests showing a fight between a humanoid and a fiery creature that appears to be made of lava. Using resolution 1920×1080, Fire Strike shows quite realistic graphics and is quite demanding on hardware.

Benchmark coverage: 14%

GTX 980 Ti
16961

GeekBench 5 OpenCL

Geekbench 5 is a widely used benchmark for graphics cards that combines 11 different test scenarios. All of these scenarios are based on the direct use of the processing power of the GPU, without the use of 3D rendering. This option uses the Khronos Group’s OpenCL API.

Benchmark coverage: 9%

GTX 980 Ti
40025

3DMark Ice Storm GPU

Ice Storm Graphics is an obsolete benchmark, part of the 3DMark package. Ice Storm has been used to measure the performance of entry-level laptops and Windows-based tablets. It uses DirectX 11 feature level 9 to render a battle between two space fleets near a frozen planet at 1280×720 resolution. Support for Ice Storm ended in January 2020, now the developers recommend using Night Raid instead.

Benchmark coverage: 8%

GTX 980 Ti
443119

GeekBench 5 Vulkan

Geekbench 5 is a widely used benchmark for graphics cards that combines 11 different test scenarios. All of these scenarios are based on the direct use of the processing power of the GPU, without the use of 3D rendering. This option uses the Khronos Group's Vulkan API.

Benchmark coverage: 5%

GTX 980 Ti
52856

GeekBench 5 CUDA

Geekbench 5 is a widely used benchmark for graphics cards that combines 11 different test scenarios. All of these scenarios are based on the direct use of the processing power of the GPU, without the use of 3D rendering. This option uses NVIDIA’s CUDA API.

Benchmark coverage: 5%

GTX 980 Ti
35714

Octane Render OctaneBench

This is a dedicated benchmark measuring graphics card performance in OctaneRender, a realistic GPU rendering engine created by OTOY Inc., available either as a standalone program or as a plug-in for 3ds Max, Cinema 4D and many other applications. It renders four different static scenes and compares the render times against a reference card, which at the moment is the GeForce GTX 980. This benchmark does not measure gaming performance and is aimed at professional 3D artists.

Benchmark coverage: 4%

GTX 980 Ti
126

SPECviewperf 12 — Showcase

Benchmark coverage: 2%

GTX 980 Ti
90

SPECviewperf 12 — Maya

This part of the SPECviewperf 12 workstation benchmark uses the Autodesk Maya 2013 engine to render a scene of over 700,000 polygons in six different modes.

Benchmark coverage: 2%

GTX 980 Ti
139

Unigine Heaven 4.0

This is an old DirectX 11 benchmark, an updated version of Heaven 3.0 with relatively minor differences. It depicts a medieval fantasy city spread across several floating islands. Despite its significant age (it was released back in 2013), the benchmark is still occasionally used.

Benchmark coverage: 1%

GTX 980 Ti
2550


Mining hashrates

GeForce GTX 980 Ti performance in cryptocurrency mining. The result is usually measured in MH/s, the number of millions of solutions the video card generates per second.

Bitcoin / BTC (SHA256) 784 Mh/s
Decred / DCR (Decred) 2.4 Gh/s
Ethereum / ETH (DaggerHashimoto) 21.57 Mh/s
Monero / XMR (CryptoNight) 0.7 kh/s
Zcash / ZEC (Equihash) 461 Sol/s

Game tests

FPS in popular games at the following presets:

  • Full HD, Low Preset
  • Full HD, Medium Preset
  • Full HD, High Preset
  • Full HD, Ultra Preset
  • 1440p, High Preset
  • 1440p, Ultra Preset
  • 4K, High Preset
  • 4K, Ultra Preset
Assassin’s Creed Odyssey 45-50
Assassin’s Creed Valhalla 45-50
Battlefield 5 45-50
Call of Duty: Modern Warfare 45-50
Cyberpunk 2077 45-50
Far Cry 5 45-50
Far Cry New Dawn 45-50
Forza Horizon 4 45-50
Hitman 3 45-50
Horizon Zero Dawn 45-50
Red Dead Redemption 2 45-50
Shadow of the Tomb Raider 45-50
Watch Dogs: Legion 45-50
Assassin’s Creed Odyssey 45-50
Assassin’s Creed Valhalla 45-50
Battlefield 5 45-50
Call of Duty: Modern Warfare 38
Cyberpunk 2077 45-50
Far Cry 5 45-50
Far Cry New Dawn 45-50
Forza Horizon 4 45-50
Hitman 3 45-50
Horizon Zero Dawn 45-50
Metro Exodus 45-50
Red Dead Redemption 2 45-50
Shadow of the Tomb Raider 33
The Witcher 3: Wild Hunt 45-50
Watch Dogs: Legion 45-50
Assassin’s Creed Odyssey 46
Assassin’s Creed Valhalla 45-50
Battlefield 5 94
Cyberpunk 2077 45-50
Far Cry 5 77
Far Cry New Dawn 45-50
Forza Horizon 4 72
The Witcher 3: Wild Hunt 59
Watch Dogs: Legion 45-50
Call of Duty: Modern Warfare 54
Hitman 3 45-50
Horizon Zero Dawn 45-50
Metro Exodus 45-50
Red Dead Redemption 2 45-50
Shadow of the Tomb Raider 45-50
Assassin’s Creed Odyssey 45-50
Assassin’s Creed Valhalla 45-50
Battlefield 5 45-50
Cyberpunk 2077 45-50
Far Cry 5 45-50
Far Cry New Dawn 45-50
Forza Horizon 4 45-50
Watch Dogs: Legion 45-50
Call of Duty: Modern Warfare 32
Hitman 3 45-50
Horizon Zero Dawn 45-50
Metro Exodus 45-50
Red Dead Redemption 2 45-50
Shadow of the Tomb Raider 23
The Witcher 3: Wild Hunt 44
Assassin’s Creed Odyssey 18
Assassin’s Creed Valhalla 45-50
Battlefield 5 40
Cyberpunk 2077 45-50
Far Cry 5 30
Far Cry New Dawn 45-50
Forza Horizon 4 42
Watch Dogs: Legion 45-50

Relative performance

Overall GeForce GTX 980 Ti performance compared to its nearest desktop counterparts.


AMD Radeon RX 6600
103.94

NVIDIA RTX A2000 12GB
101.05

NVIDIA GeForce RTX 2060
100.79

NVIDIA GeForce GTX 980 Ti
100

AMD Radeon RX 5600XT
99.83

AMD Radeon RX Vega 56
98.44

NVIDIA GeForce GTX 1070
97.41

AMD competitor

We believe the nearest AMD equivalent to the GeForce GTX 980 Ti is the Radeon RX 5600 XT, which is approximately equal in speed and one position lower in our rating.


Radeon RX 5600 XT

Here are some of AMD’s closest competitors to the GeForce GTX 980 Ti:

AMD Radeon RX Vega 64
105.71

AMD Radeon RX 5700
105.52

AMD Radeon RX 6600
103.94

NVIDIA GeForce GTX 980 Ti
100

AMD Radeon RX 5600XT
99.83

AMD Radeon RX Vega 56
98.44

AMD Radeon Vega Frontier Edition
97.15

Other video cards

Here we recommend several video cards that are more or less similar in performance to the reviewed one.


  • Radeon RX Vega 56
  • GeForce GTX 1070
  • GeForce GTX 1070 Ti
  • Radeon RX Vega 64
  • Titan X Pascal
  • GeForce GTX 1080

Recommended Processors

According to our statistics, these processors are most often used with the GeForce GTX 980 Ti.


  • Core i7-6700K: 4%
  • Ryzen 5 3600: 3.3%
  • Core i7-4790K: 3.3%
  • Core i5-10400F: 2.9%
  • Ryzen 5 2600: 2.6%
  • Core i3-10100F: 2.4%
  • Core i7-4790: 2%
  • Core i7-4770K: 1.7%
  • Core i5-9400F: 1.7%
  • Core i5-6600K: 1.6%


NVIDIA GeForce GTX 980 Ti

The most powerful single-GPU accelerator of the gaming class

Content

  • Part 1 — Theory and Architecture
  • Part 2 — Practical acquaintance
    • Synthetic test results
  • Part 3 — Game test results and conclusions

Developer: Nvidia Corporation (the Nvidia trademark) was founded in 1993 in the United States, with headquarters in Santa Clara, California. It develops graphics processors and related technologies. Until 1999 its main brand was Riva (Riva 128/TNT/TNT2); since 1999 it has been Geforce. In 2000 it acquired the assets of 3dfx Interactive, after which the 3dfx/Voodoo trademarks passed to Nvidia. The company has no manufacturing of its own. The total number of employees, including regional offices, is about 5,000 people.

Part 1: Theory and architecture

The announcement of Nvidia's top product from the premium Titan series, the Geforce GTX Titan X, has barely died down, and we are already getting acquainted with an almost equally powerful video card, this time from the familiar Geforce GTX 900 line. Nvidia has prepared its response to the competitor's top solution expected soon and decided to put it on sale a little earlier than the rival's card. In fact, this answer has been ready for a long time; even the Geforce GTX Titan X did not come out the moment it became possible. Today's hero is a minimal modification of that solution, largely repeating its characteristics.

We have already written many times about new Nvidia products based on the latest Maxwell-architecture graphics processors, covering almost every question related to these video cards: all the architectural changes, the new functionality, and performance. It would seem there is nothing left to write about. Lately, however, new technologies have been emerging that should help the gaming PC hardware market keep growing.

For example, towards the end of the year the long-awaited update of the Microsoft graphics API, DirectX 12, is expected, which we cover in more detail below. It is also impossible to ignore the growing interest in ultra-high-resolution displays, the so-called 4K resolution. With sharp price cuts on 4K monitors and TVs, an increasing number of PC users are getting these displays.

Gamers have doubled the number of 4K monitors in use over the past year, and this is just the beginning. All Nvidia graphics cards based on Maxwell-architecture chips are well prepared for 4K: they carry special optimizations for such workloads, support efficient frame-buffer compression methods, and are the only cards on the market with HDMI 2.0 video output, which allows connecting 4K TVs at full resolution with a 60 Hz refresh rate.

Virtual reality devices are considered another growth factor, so far only a potential one. VR headsets, goggles and similar devices are expected to become widely available next year, and to popularize virtual reality it is important that users have sufficiently powerful graphics cards based on the latest-generation GPUs, with special VR optimizations and generally high performance at minimal latency. All of this fully applies to the video chips of the Maxwell family.

At the moment Nvidia has already released several solutions based on second-generation Maxwell chips, and today another model joins them: the Geforce GTX 980 Ti. This is a top-level solution whose 3D performance is only slightly below that of the most powerful card, the Geforce GTX Titan X. The new product offers great mathematical and texturing power: the GPU includes 2,816 stream processors and is paired with six gigabytes of fast GDDR5 memory. A newcomer in the top price segment, it will let its owners forget about slowdowns for several years while playing all modern PC games.

Since the Nvidia video card under consideration is based on the top-end second-generation Maxwell GPU architecture, which we have already reviewed and which is in many ways similar to the previous Kepler architecture, it is useful to read our earlier articles on Nvidia video cards before this material:

  • [04/22/15] Nvidia Geforce GTX Titan X — The most powerful single-processor accelerator
  • [09/19/14] Nvidia Geforce GTX 980 — Follower of Geforce GTX 680, outperforming even GTX 780 Ti
  • [03/12/14] Nvidia Geforce GTX 750 Ti — Maxwell starts small… despite Maxwell
  • [03/22/12] Nvidia Geforce GTX 680 — new single-processor leader in 3D graphics

So, let’s look at the detailed characteristics of the Geforce GTX 980 Ti video card based on the GM200 graphics processor.

The Geforce GTX 980 Ti received a name familiar from recent Geforce series: the Ti suffix was simply added to the name of a less powerful solution. The newcomer does not replace other solutions in the company's current product line but extends it from the top, without crowding out the premium Titan X and the dual-chip Titan Z. Below it sits the Geforce GTX 980 model, based on the less complex GM204 chip.

The suggested price for the new board is $649, which is even lower than the market expected. As practice shows, Nvidia's prices rarely come in below what experts predict, but today is exactly such a case. Although the new product is among the fastest single-chip video cards on the market, it is not prohibitively expensive (and the price also includes a key for the game Batman: Arkham Knight, at least in Western markets).

It seems Nvidia is thus striking a preemptive blow at its competitor, whose renamed (once again!) line is about to appear, supplemented by only one new top solution. Naturally, prices for the other cards in the Geforce GTX 900 series have also changed: the GTX 980 is now recommended at $499, the GTX 970 starts from $329, and the GTX 960 from $199. Very good prices, although the competitor can lower theirs even further; they have little other choice.

The Nvidia model reviewed today is based on the GM200 chip, which has a 384-bit memory bus, and the memory runs at 7 GHz, just like the Titan X, giving the same peak bandwidth of 336.5 GB/s, one and a half times more than the GTX 980. With such a bus, the card could carry either 6 or 12 GB of video memory; a larger amount simply makes no sense here, and in any case it is taken by the Titan-series model. The GTX 980 Ti carries 6 GB, which is enough to run any 3D application at any quality settings; today this amount of video memory suffices for all games. And the expected top competitor, with 4 GB of memory of the new HBM standard, will be at a disadvantage.

The Geforce GTX 980 Ti circuit board is no different from the GTX Titan X board, which is not surprising — they are similar in all characteristics. The typical power consumption for the Geforce GTX 980 Ti is 250 W, the same as for the Titan X. The boards are otherwise the same, the Geforce GTX 980 Ti reference board is 267 mm long and has the same image output connectors: one Dual-Link DVI, one HDMI 2.0 and three DisplayPort.

Architecture

Like the Geforce GTX Titan X video card announced a little earlier, the new product is based on the GM200 GPU. It includes all the architectural features of the GM204 chip, so everything said in the article on the GTX 980 fully applies to today’s new product — we advise you to first read the material, which more fully considers the architectural features of Maxwell.

Today the GM200 is the most powerful GPU from Nvidia, and indeed on the market. Architecturally, the GM200 video chip fully matches the younger GM204: it likewise consists of GPC clusters containing several SM multiprocessors. The GPU contains six GPC clusters with 24 multiprocessors in total, but two of them are disabled in this model, mainly so that the Titan X remains a little faster, justifying its name and price.

That is why the video chip for the Geforce GTX 980 Ti is produced in slightly cut-down form: it contains two fewer streaming multiprocessors than the full GM200 core. Of the GPU's 24 multiprocessors, 22 are active in this variant. Accordingly, the chip includes 2,816 CUDA stream processors out of the 3,072 physically present, and 176 (out of 192) texture sampling and filtering units (TMUs).
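The unit counts above follow directly from Maxwell's SMM layout: each SMM carries 128 CUDA cores and 8 texture units, so disabling 2 of the 24 SMMs gives exactly the quoted figures:

```python
# Maxwell SMM layout: 128 CUDA cores and 8 TMUs per multiprocessor.
CORES_PER_SMM = 128
TMUS_PER_SMM = 8

full_smms = 24     # physically present in GM200 (the Titan X enables all of them)
active_smms = 22   # enabled on the GTX 980 Ti

print(active_smms * CORES_PER_SMM)  # -> 2816 stream processors
print(active_smms * TMUS_PER_SMM)   # -> 176 TMUs
print(full_smms * CORES_PER_SMM)    # -> 3072 on the full chip
```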

But the number of ROPs and the L2 cache attached to them remain untouched: this GPU variant has all 96 ROP units and the full 3 MB of L2 cache physically present in the chip. It is important to note that, since nothing was cut in the ROPs or L2 cache, the Geforce GTX 980 Ti has none of the problems of the GTX 970, whose cut-down ROPs and L2 cache also reduced the bandwidth of one of its video memory segments (0.5 GB of that model's 4 GB is accessed extremely slowly). Nvidia did not allow that situation to repeat, and read speed from all 6 GB of memory is equally high here.

The base clock frequency of the newcomer is 1,000 MHz, and the average turbo frequency (Boost Clock) is 1,075 MHz, exactly the same as the GTX Titan X. Keep in mind that the actual GPU frequency in games may differ from these figures, most often upward, and the average turbo frequency corresponds to a particular set of applications and conditions. In theory, the chip in the new model can run at a slightly higher frequency than in the GTX Titan X, since some of its functional blocks are disabled. Overclocking headroom should also be a little better: judging by the first reviews, frequencies on the order of 1,400 MHz and even higher are quite achievable.

As for the memory, everything remains unchanged relative to the Titan X. The GM200 GPU in the GTX 980 Ti has a 384-bit memory bus (six 64-bit channels), and its six gigabytes of GDDR5 chips run at an effective 7 GHz, giving the same 336.5 GB/s as the more expensive older model. In other words, the new product has 50% more memory bandwidth than the younger GTX 980. Bandwidth by itself matters greatly in graphics tasks, but other factors can hold the memory and data-caching subsystem back, preventing the full capabilities of the GPU and video memory from being used. To address these potential problems, the second-generation Maxwell chips introduced a new memory subsystem that uses the available bandwidth more efficiently.

We have written about this in more detail before: all new Nvidia GPUs use the third generation of the frame-buffer color compression algorithm, which supports new compression methods. Additionally, each SMM in the GM200 chip has its own 96 KB of shared memory, while the L1 and texture caches are combined into a 48 KB area per multiprocessor. This favorably distinguishes the new GPUs from the Kepler family, which used 64 KB of shared memory that also served as the L1 cache. All of this is complemented by a generous 3 MB of L2 cache. As a result, even with less raw memory bandwidth than competing solutions, Nvidia's video cards usually perform just as well.

In all other respects the GM200 chip is no different from the GM204 in capabilities and supported technologies, and everything we have previously written about the GTX 980 and GTX 970 fully applies to the GTX 980 Ti. For all other questions about the functional subtleties of the newcomer, you can refer to our reviews of the Geforce GTX 980 and GTX 750 Ti, where we wrote in detail about the Maxwell architecture, the design of its streaming multiprocessors (SMM), the organization of the memory subsystem and some other architectural differences. You can also read about features such as hardware support for accelerated VXGI global illumination, new full-screen anti-aliasing methods, and improved DirectX 12 graphics API capabilities.

Some changes have been made to the G-Sync capability, which allows for the smoothest frame rates possible when using a supported monitor. The technology really makes a big difference in terms of gaming comfort and causes the output device to change the image only when it is fully calculated by the GPU.

This approach removes the tearing and jittery frame rate artifacts that occur when using conventional monitors with V-sync off and on. With G-Sync technology, it turns out that the refresh rate of the picture on the monitor is exactly the same as the frame rate provided by the gaming system.

Among the latest G-Sync innovations we highlight variable overdrive, a technique that adjusts the panel's pixel overdrive to the varying refresh rate and thereby reduces ghosting artifacts. Also new to G-Sync is support for windowed rather than only full-screen mode. Where G-Sync previously lagged behind AMD's competing technology in this respect, users of Nvidia graphics cards and compatible monitors can now get finely synchronized 3D applications in windowed mode as well.

G-Sync is now also supported in gaming laptops from Gigabyte, MSI, Asus and Clevo, which use special LCD panels well suited for gaming, including 3K and 4K resolutions and refresh rates of up to 75 Hz. In addition, several new monitor models supporting the technology are expected to reach the market.

Asus and Acer, for example, will soon offer several new G-Sync monitor models, the most interesting being 34-inch models with curved high-resolution IPS panels of 3440×1440 pixels and a 75 Hz refresh rate.

Full support for DirectX 12 features

In its materials Microsoft mainly talks about the performance optimizations introduced in the new version of the graphics API: Direct3D 12 relieves the CPU of useless work, gives applications convenient control over GPU resources (previously handled by the operating system and the video driver), and lets graphics code be parallelized better across multiple processor cores. All this can significantly improve performance, especially where it is limited by a large number of draw calls. Importantly, these features are supported on all Geforce graphics cards since the GTX 400 series.

But performance optimizations are not all that distinguishes DirectX 12; the API also brings new features that help introduce new effects into 3D applications. Among them we note support for volume tiled resources, which can be used to render realistic-looking fire and smoke. DirectX 12 offers two levels of Feature Level functionality: 12.0 and 12.1.

Level 12.0 includes support for tiled resources, which can be used to render shadows with shadow maps of different resolutions; bindless textures, which increase the number of textures processed simultaneously in a single shader program and reduce CPU load; and typed UAV (Unordered Access View) loads. Level 12.1 adds conservative rasterization and rasterizer ordered views; the latter gives control over the order of pixel shader operations and enables, for example, algorithms for rendering translucent surfaces without pre-sorting.

Additionally, all graphics chips of the GM2xx family support volume tiled resources, similar in kind to ordinary tiled resources but in three dimensions. Tiled resources work by dividing textures into tiles; during rendering, the application determines which tiles are needed for the frame and loads only those into video memory. This lets game developers use more varied textures on scene objects with less video memory, and also helps with texture streaming.
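As an illustration of the saving (with invented numbers, not figures from the article): suppose a volume smoke texture is split into the 64 KB tiles Direct3D uses, and only the tiles a plume actually intersects are made resident:

```python
# Illustrative only: how much memory tiled residency can save.
# Direct3D tiled resources use 64 KB tiles; the tile grid and the number of
# occupied tiles below are invented for the example.
TILE_BYTES = 64 * 1024

total_tiles = 16 * 16 * 16   # a volume texture split into a 16x16x16 tile grid
resident_tiles = 300         # tiles actually intersected by the smoke plume

full_mb = total_tiles * TILE_BYTES / 2**20
resident_mb = resident_tiles * TILE_BYTES / 2**20
print(f"fully resident: {full_mb:.0f} MB, tiled: {resident_mb:.2f} MB")
# -> fully resident: 256 MB, tiled: 18.75 MB
```

Only the residency bookkeeping changes; sampling the texture in the shader works as usual, which is what makes the feature attractive for streaming volumetric effects.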

So, where tiled resources were previously available only for 2D textures, 3D tiled textures bring the same functionality to volume textures. This is logical, because many effects need volume to look realistic: liquids, smoke, fire and fog are only the most obvious examples. Rendering complex scenes containing such effects on a GPU that supports volume tiled textures uses video memory more efficiently and improves the quality of the simulated effect. For example, a liquid simulation can be used to simulate smoke in games, as Nvidia has shown in several demos:

Conservative rasterization, also supported by Maxwell-family chips and a mandatory feature of Feature Level 12.1, differs from regular rasterization in that it draws not only the pixels whose centers are covered by scene geometry, but every pixel touched by even a small piece of a triangle. This functionality can be used during voxelization (converting geometry to voxels), as in Nvidia's VXGI global illumination algorithm, which we have written about many times.
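The difference between the two coverage rules can be sketched in a few lines. This is a simplified software model (pixel as a unit square, exact overlap via a separating-axis test), not the hardware algorithm:

```python
# Simplified rasterization coverage: the standard rule samples the pixel center,
# the conservative rule accepts any pixel square the triangle overlaps at all.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def center_covered(tri, cx, cy):
    # Point-in-triangle: all three cross products lie on the same side.
    s = [cross(tri[i], tri[(i + 1) % 3], (cx, cy)) for i in range(3)]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

def overlaps(tri, x, y):
    # Separating-axis test between the triangle and the unit pixel square.
    box = [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]
    edges = [(tri[(i + 1) % 3][0] - tri[i][0], tri[(i + 1) % 3][1] - tri[i][1])
             for i in range(3)]
    axes = [(1, 0), (0, 1)] + [(-ey, ex) for ex, ey in edges]
    for ax, ay in axes:
        t = [px * ax + py * ay for px, py in tri]
        b = [px * ax + py * ay for px, py in box]
        if max(t) < min(b) or max(b) < min(t):
            return False  # found a separating axis -> no overlap
    return True

# A thin sliver triangle crossing an 8x8 pixel grid.
tri = [(0.2, 0.2), (7.8, 0.6), (4.0, 0.9)]
standard = sum(center_covered(tri, x + 0.5, y + 0.5) for x in range(8) for y in range(8))
conservative = sum(overlaps(tri, x, y) for x in range(8) for y in range(8))
print(standard, conservative)  # conservative always covers at least as many pixels
```

For thin slivers like this one, the center-sample rule can miss pixels the triangle genuinely touches, which is exactly what breaks voxelization and ray-traced shadow tests; the conservative rule never does.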

This operation is far from free; conservative rasterization is in any case slower than regular rasterization. But with hardware support on the GPU, the calculations run many times faster, which will benefit some algorithms expected in games in the near future. Another example of conservative rasterization in games is high-quality rendering of shadows computed with ray tracing:

As you can see, such shadows compare favorably with conventional shadow-map shadows, showing no pixel 'staircase' artifacts. With regular rasterization, the ray-traced shadow algorithm produces unpleasant artifacts; enabling conservative rasterization removes them, giving this shadow rendering algorithm perfect per-pixel accuracy.

The most important question is when will we see all this splendor in games? According to Microsoft, about 100 game developers are already developing 3D applications that use the capabilities of the new version of their graphics API, and we have already seen several demonstrations on CryENGINE, Unity and Unreal Engine running on PCs with Geforce graphics cards at various events. The main thing is that the Maxwell architecture from Nvidia has full support for all the features of the current version of DirectX 12 Feature Level 12.1 — the most advanced at the moment.

Virtual reality with GameWorks VR

Surely almost all of our readers are already aware of the new reincarnation of virtual reality, kick-started by Oculus VR, later acquired by Facebook. Many years ago, VR helmets, quite primitive by modern standards, were produced by many companies; they were bulky, had poor image quality, and cost hundreds or even thousands of dollars. As technical capabilities improved, several companies took up VR headset and glasses projects again, the most notable of which is Oculus.

Virtual reality is one of the possible drivers that will revive the gaming PC hardware market, as all new things take root faster on a universal system that allows you to connect literally anything and use it the way you want. PC as such an experimental system is quite suitable, so it is hoped that VR on PC will give a new impetus to the component and gaming markets.

But a successful VR launch requires not only hardware but also software support, and that is exactly what virtual reality companies are working on: from a hardware point of view there is nothing especially complex in a VR headset, and almost everything was worked out long ago. The biggest VR problem is the latency between the player's physical action (a head turn) and the image on screen; literally every millisecond counts. We may not consciously notice small delays, but the brain registers them all. Hence the dizziness and nausea that developers are now trying to solve first of all.

While low latency matters beyond VR too, dedicated VR support on the GPU side is very useful for easing the rendering task, because the high-resolution image has to be rendered twice, once for each eye. Nvidia engineers work closely with VR hardware and software vendors and have already developed several technologies for better VR support on the GPU, naming the initiative GameWorks VR.

For example, you can take various rendering optimizations. It is well known that VR devices use a special optical system consisting of a high-resolution screen and lenses placed a few centimeters in front of them. These same lenses create the feeling of virtual reality, providing a corresponding ultra-wide view with a large field of view.

For VR devices to work, on their built-in screen, you need to display an image distorted in a special way — the center of the image is large, and the periphery is compressed at the edges. As a result, using a special lens, the user will see the correct image with a large field of view (Field of View). And all this must be done twice — separately for each eye.

Since GPUs are originally designed to produce a conventional 2D image, they do not have special support for rendering specially stretched images for virtual reality systems. The issue has to be solved programmatically, rendering the 3D image in normal mode and compressing the image on the periphery before feeding it to the VR device.

Although this approach delivers good VR image quality, it is inefficient in compute terms. The GPU renders the image as a typical frame for a regular monitor, where the center and the edges are equally valuable; but when the distortion required by VR devices is applied, a large share of those pixels is simply discarded. Pixels at the periphery of the frame are compressed several times, and many of them are simply not needed (see the illustration above): in the middle the pixels map almost one to one, while at the edges a large rendered area shrinks noticeably in the final VR image. This is an inefficient approach.

To solve the problem of VR rendering efficiency, Nvidia came up with Multi-Res Shading, a technology developed specifically for virtual reality systems that uses multi-projection: all second-generation Maxwell GPUs can project the scene geometry onto several viewports simultaneously.

In the case of virtual reality, the technology works like this — the picture is divided into several areas (viewport) and each of them is drawn in different resolutions, depending on the location. Thus, the resolution of the central area remains unchanged, while the side areas are scaled depending on the required quality. On conventional GPUs without multi-projection support, it would be necessary to draw the same geometry several times into several buffers, and this feature allows you to project the geometry into several buffers at once in one pass.

As a result, according to Nvidia, the technology delivers a 1.3-2x increase in pixel shader performance compared to the traditional approach of rendering the entire image at a single resolution. Just as importantly, it is already available to VR developers as part of the GameWorks SDK, and many of them are already implementing it.

The latest generation of GeForce graphics cards, based on second-generation Maxwell GPUs, is an ideal platform for developing and using the virtual reality systems expected from many companies in the coming months. VR demands high performance and the smoothest possible frame delivery, which the top-end GeForce GTX 980 Ti fully provides, but software and hardware support from the GPU manufacturer matters just as much.

With VR systems it will be especially important to tune game settings and 3D graphics quality correctly to ensure maximum fluidity, since performance problems can cause players physical discomfort. The GeForce Experience utility included with Nvidia's driver package is well suited to this, providing optimal settings for all 3D applications, including, in future, VR ones. For VR developers, the GameWorks VR initiative is also very useful, combining a set of APIs, libraries and technologies to help with virtual reality development.

We will soon see the results of this work in public; for now we must be content with videos of the numerous demos and a few minutes in VR headsets at various events dedicated to games and 3D graphics. The demos that several companies have already shown are quite impressive. Just a few examples: "Thief in the Shadows", a joint effort by Nvidia, Epic, Oculus and WETA Digital, the studio behind the visual effects of "The Hobbit" film trilogy; "Back to Dinosaur Island", Crytek's reboot of its 14-year-old X-Isle: Dinosaur Island demo; Valve's Portal, Job Simulator, TheBluVR and Gallery demos; and virtual reality experiences built on the Unity 5 engine.

Preliminary Performance Evaluation

First, let's look at a comparison of the GeForce GTX 980 Ti with the GTX 680 and GTX 780 Ti, two models from past years that gained considerable popularity among PC gamers. Nvidia's new product looks noticeably better than its predecessors: it is up to three times faster than the GTX 680, and it delivers playable frame rates even at 4K resolution with maximum settings in all games. In addition, the Maxwell-architecture GM200 graphics processor supports all the features of DirectX 12 Feature Level 12.1, and the card is equipped with 6 GB of video memory versus 2 GB and 3 GB for its predecessors.

The new top video card is good whichever way you look at it, especially in terms of energy efficiency: the manufacturing process has not changed in years, power consumption is the same, yet performance has risen substantially. On paper, the GeForce GTX 980 Ti is much closer in speed to the GTX Titan X than to the GTX 980. It has 38% more ALUs and TMUs than the GTX 980, noticeably more ROPs and considerably higher memory bandwidth. Compared to the GTX Titan X, it is cut down by only about 8% in the units that matter most for a video chip (ALUs and TMUs), while in ROPs and memory bandwidth it is fully its equal.
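The percentages above are easy to check from the stream-processor counts quoted in this review (2,048 for the GTX 980, 2,816 for the GTX 980 Ti, 3,072 for the GTX Titan X):

```python
# Verifying the percentage claims in the text from the published unit counts.
sp_980, sp_980ti, sp_titanx = 2048, 2816, 3072

gain_vs_980 = sp_980ti / sp_980 - 1        # +37.5%, the "38% more" figure
cut_vs_titanx = 1 - sp_980ti / sp_titanx   # ~8.3% fewer than the Titan X
print(f"vs GTX 980:  +{gain_vs_980:.1%}")
print(f"vs Titan X:  -{cut_vs_titanx:.1%}")
```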

Compared to the GeForce GTX 980, then, the newcomer has 38% more CUDA cores and texture units, plus greater memory bandwidth: a very decent difference. Against the top GeForce GTX models of previous generations, the new model looks far more capable in theory, and practice confirms it, especially at the high and ultra-high resolutions that suit the new card:

According to Nvidia's figures for average frame rates in the most modern games, including The Witcher 3, Assassin's Creed: Unity, Grand Theft Auto V, Far Cry 4, Dragon Age: Inquisition, Shadow of Mordor and Watch Dogs, running on a top-end system at high settings across various resolutions, the GeForce GTX 980 Ti introduced today is two to three times faster than the GeForce GTX 680 (with the biggest gap, naturally, at 4K), and it outpaces its predecessor, the GTX 780 Ti, by about one and a half times on average. Nvidia gives specific examples of games tested at the highest resolutions:

As you can see, the GeForce GTX 980 Ti delivers much higher frame rates under such conditions than the long-obsolete GTX 680; an advantage of 2.2-2.6x is entirely unsurprising. It would be far more interesting to compare the new card against a real rival, but right now there is nothing to compare it with: AMD still has no competitor for Nvidia's top solutions on the market and only promises to release its own in June, so we will have to wait a while longer.

Theoretical Conclusions

From an architectural standpoint, the graphics processor used in the GeForce GTX 980 Ti is no different from the one in the GTX Titan X. The GM200 incorporates the best of the company's past architectures plus the additional functionality and improvements of second-generation Maxwell. By thoroughly redesigning the execution units, Nvidia's engineers achieved the best energy efficiency (the ratio of performance to power consumption) of this GPU generation while adding functionality such as full support for Feature Level 12.1 of the DirectX 12 graphics API.

With the GeForce GTX 980 Ti, Nvidia has released another flagship for its lineup, aimed at gaming enthusiasts. It shrugs off any game settings, any resolution and any anti-aliasing level, always delivering acceptable playability. It compares especially favorably with multi-GPU solutions, which suffer from uneven frame pacing and increased latency; all else being equal, a single-GPU solution is always better, and even at a small price difference it offers not only better speed but also less heat and noise.

Alas, there is nothing to compare the GeForce GTX 980 Ti against among single-GPU video cards: it has no competitors apart from its big sister, the GTX Titan X. AMD has yet to release a competitive top-end card, though we expect its announcement any week now, and we are sure the battle between the new products from Nvidia and AMD will be a hot one!

The GeForce GTX 980 Ti features Nvidia's most powerful GPU, only slightly trimmed: 2,816 active stream processors against 3,072 in the GTX Titan X, with the number of texture units reduced proportionally. The rasterization units and memory channels, however, are untouched: the memory subsystem comprises six 64-bit channels (384-bit in total) connecting 6 GB of memory running at an effective 7 GHz.
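Those figures imply the same peak memory bandwidth as the Titan X, which is easy to derive from the bus width and effective data rate given in the text:

```python
# Deriving peak memory bandwidth from the specs quoted above:
# a 384-bit bus and GDDR5 at an effective 7 GHz (7 GT/s per pin).
bus_width_bits = 384
effective_rate_gts = 7.0                  # giga-transfers per second

# bytes moved per transfer across the whole bus, times the transfer rate
bandwidth_gb_s = bus_width_bits / 8 * effective_rate_gts
print(f"peak bandwidth: {bandwidth_gb_s:.0f} GB/s")
```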

Clock frequencies are unchanged as well: 1,000 MHz base and 1,075 MHz Boost. Even if the GTX 980 Ti is slightly behind the elite-series GTX Titan X, the performance gap should be no more than 5-10%, and six gigabytes of video memory will be plenty until this GPU loses its relevance, that is, for the next few years.

On top of that, the GeForce GTX 980 Ti arrived at a very attractive price of $649, somewhat unexpected for a top Nvidia solution. How much the imminent release of an AMD competitor influenced that price is unclear, but the fact remains: the model introduced today is very powerful, yet nowhere near as expensive as the GTX Titan X, despite almost the same performance and exactly the same features.

GeForce GTX 980 Ti graphics accelerator (partial specifications):

- Texture units: 176 active (out of 192) texture addressing and filtering units, with support for FP16 and FP32 texture components and trilinear and anisotropic filtering for all texture formats
- ROP units: 6 wide ROP partitions (96 pixels), supporting various anti-aliasing modes, including programmable modes and FP16 or FP32 frame buffer formats; the blocks consist of arrays of configurable ALUs and are responsible for depth generation and comparison, multisampling and blending
- Power connectors: one 8-pin and one 6-pin
- Expansion slots occupied: 2
- Recommended price: $649 (US), 39,990 RUB (Russia)