Power, Temperature, & Noise — The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation
by Ryan Smith on July 20, 2016 8:45 AM EST
Having finished our look at GTX 1080 and GTX 1070’s rendering and compute performance, it’s time to take a look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a video card, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.
It goes without saying that with a new architecture on a new manufacturing node, Pascal is significantly different from Maxwell when it comes to voltages and clockspeeds. Even without radically changing the underlying processing architecture, the combination of FinFETs and NVIDIA’s desire to drive up the clockspeed means that Pascal and the GP104 GPU are going to behave in new and different ways from Maxwell. Back in our look at GPU Boost 3.0, we already saw how the two GP104 cards are more temperature sensitive than GM204, backing off on clockspeed and voltage by a few bins as the card warmed up to 70C. And this isn’t the only surprise in store.
GeForce Video Card Voltages

| GTX 1080/1070 Boost | GTX 980 Boost | GTX 1080/1070 Idle | GTX 980 Idle |
|---|---|---|---|
| 1.062v | 1.225v | 0.625v | 0.856v |
Though we often treat FinFET as the solution to planar’s scaling problems, FinFET is more than just a means to enable 20nm/16nm geometry. It’s also a solution in and of itself to voltages. As a result, GP104’s operating voltages are significantly lower than GM204’s. Idle voltage in particular is much lower; whereas GTX 980 idled at 0.856v, the GP104 cards get to do so at 0.625v. Load voltages are also reduced, as GM204’s 1.225v boost voltage is replaced with GP104’s 1.062v boost voltage.
Now voltage alone isn’t the whole picture; what we don’t see from a high level view is how amperage has changed (answer: it went up), so power consumption hasn’t dropped by nearly as much as the voltage alone has. Still, it will be interesting to see what this means for the mobile versions of NVIDIA’s GPUs, as voltage drops have traditionally proven rather beneficial for idle power consumption.
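The interplay between lower voltage and higher clocks can be sketched with the first-order CMOS dynamic power relation, P ∝ C·V²·f. This is a back-of-the-envelope estimate only: the capacitance term and the amperage increase mentioned above are unknown, so the figures below isolate the voltage and frequency effects and nothing else.

```python
# Rough dynamic-power estimate, P ~ C * V^2 * f, using the review's voltages
# and official boost clocks. The capacitance term and the amperage increase
# the article mentions are unknown, so this only isolates the V and f effects.

def relative_dynamic_power(v_new, v_old, f_new, f_old):
    """Return the new/old dynamic power ratio, assuming constant capacitance."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Voltage effect alone: GP104 boost 1.062v vs GM204 boost 1.225v
voltage_only = (1.062 / 1.225) ** 2
print(f"voltage effect alone: {voltage_only:.2f}x")   # ~0.75x

# Folding in the clockspeed increase (official boost clocks, 1733 vs 1216 MHz)
combined = relative_dynamic_power(1.062, 1.225, 1733, 1216)
print(f"with higher clocks:   {combined:.2f}x")       # ~1.07x
```

The ~25% dynamic power reduction from voltage alone is roughly cancelled out once the higher clocks are factored in, which is consistent with the observation that overall power consumption hasn't dropped by nearly as much as the voltage has.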
There is a double-edged sword aspect to all of this though: FinFET doesn’t just allow lower voltages, but it essentially requires it. The FinFET transistors can’t take a punishment like TSMC’s 28nm transistors can, and in discussing the architecture and process with NVIDIA, they have confirmed that the voltage/frequency curve for 16nm FinFET is steeper than 28nm. For general consumers this shouldn’t matter, but for hardcore overclockers there’s going to be a learning process to find just what kind of voltages GP104 can take, and whether those voltages can enable significantly higher clockspeeds.
Speaking of clockspeeds, let’s take a look at the average clockspeeds during our testing. As we saw earlier, NVIDIA has designed Pascal to clock much higher than Maxwell, and here we can quantify that. Though before getting to the numbers, it should be noted that both the GTX 1080FE and GTX 1070FE do reach NVIDIA’s 83C throttle point under sustained load, so these frequencies are somewhat dependent on environmental temperature.
GeForce Video Card Average Clockspeeds

| Game | GTX 1080 | GTX 1070 | GTX 980 |
|---|---|---|---|
| Max Boost Clock | 1898MHz | 1898MHz | 1252MHz |
| Tomb Raider | 1721MHz | 1721MHz | 1177MHz |
| DiRT Rally | 1771MHz | 1797MHz | 1202MHz |
| Ashes | 1759MHz | 1797MHz | 1215MHz |
| Battlefield 4 | 1771MHz | 1771MHz | 1227MHz |
| Crysis 3 | 1759MHz | 1759MHz | 1177MHz |
| The Witcher 3 | 1759MHz | 1759MHz | 1215MHz |
| The Division | 1721MHz | 1771MHz | 1189MHz |
| Grand Theft Auto V | 1797MHz | 1822MHz | 1215MHz |
| Hitman | 1771MHz | 1797MHz | 1202MHz |
As a percentage of the maximum boost clock, the average clockspeeds of the GTX 1080 and GTX 1070 both drop more significantly than with GTX 980, where the latter only drops a few percent from its maximum. This is due to a combination of the temperature compensation effect we discussed earlier and both cards hitting 83C (though so does GTX 980). Either way both cards are still happily running in the 1700MHz range, and the averages for both cards remain north of NVIDIA's official boost clock. This also gives us a good idea as to why the official boost clock is so much lower than the cards' maximum boost clocks.
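The clock-retention gap can be quantified directly from the table above. A quick sketch that averages the per-game clocks and expresses them as a fraction of each card's maximum boost clock:

```python
# Average game clock as a fraction of each card's maximum boost clock,
# using the figures from the table above.

max_boost = {"GTX 1080": 1898, "GTX 1070": 1898, "GTX 980": 1252}
game_clocks = {
    "GTX 1080": [1721, 1771, 1759, 1771, 1759, 1759, 1721, 1797, 1771],
    "GTX 1070": [1721, 1797, 1797, 1771, 1759, 1759, 1771, 1822, 1797],
    "GTX 980":  [1177, 1202, 1215, 1227, 1177, 1215, 1189, 1215, 1202],
}

for card, clocks in game_clocks.items():
    avg = sum(clocks) / len(clocks)
    print(f"{card}: avg {avg:.0f}MHz, {avg / max_boost[card]:.1%} of max boost")

# GTX 980 retains ~96% of its max boost clock across the games tested,
# while the two Pascal cards retain ~93-94%.
```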
It's interesting to see that despite its lower rated clockspeeds, GTX 1070 actually averages a bin or two higher than GTX 1080. As our samples have identical maximum boost clocks – something I should note is not guaranteed, as the maximum boost clock varies from card to card – we get a slightly more apples-to-apples comparison here. GTX 1070 has a lower TDP, which can hurt its ability to run at its highest clocks, but at the same time it's a partially disabled GPU, which can reduce power consumption. Meanwhile the GTX 1070's cooler is a bit less sophisticated than the GTX 1080's – losing the vapor chamber for heatpipes – but on the whole it's still a very powerful cooler for a 150W card. As a result our GTX 1070 sample is able to get away with slightly better boosting than GTX 1080 in most situations. This means that the cards' on-paper clockspeed differences are generally nullified and aren't a factor in how the cards' overall performance differs.
With clockspeeds out of the way, let’s get down to business with power numbers. Starting with idle power consumption, the GTX 1080FE and GTX 1070FE both top the charts. Bear in mind that this is a system level reading taken at the wall, so we’re hitting diminishing returns here with increasingly low powered cards, but regardless it’s interesting that system power with both cards is a couple of watts lower than their GM204 counterparts. As I’ve said before, I’m very interested in seeing what Pascal and FinFET does for laptops, which are especially sensitive to this.
Ramping up to load power we have Crysis 3. This provides us with a solid look at gaming power consumption, as Crysis 3’s world simulation is stable over long periods of time, and benchmark performance is decently close to the average difference in the relative performance between cards. For better or worse, this benchmark also captures the CPU impact of performance; a GPU that can produce a higher framerate also requires the CPU to work harder to feed it frames.
In the middle of the pack is the GTX 1080FE, with 335W at the wall. This is 20W more than GTX 980, due to a couple of factors. The first is that GTX 1080FE is an outright higher TDP card, rated for 180W TDP as compared to GTX 980's 165W TDP. On a relative basis NVIDIA's TDPs have been rather consistent/reliable since Kepler, so it's safe to attribute a lot of this difference to the increase in the official TDP.
Overall I’m actually a bit surprised that given the higher TDP and much higher performance of the card that the increase at the wall for GTX 1080FE is only 20W. If anything I would have expected the CPU power impact to be more pronounced. But at any rate, compared to GTX 980 there is a real increase in power consumption while gaming. Though with that said, if we were to plot watts per frame here, GTX 1080FE would be the leader by far; it’s drawing a bit more power than GTX 980, and delivering performance well in excess of the 388W GTX 980 Ti.
As for GTX 1070, it comes close to topping this chart. The 150W card leads to a total system power draw of 307W, trailing only lower performing cards like Radeon RX 480 and GeForce GTX 670. Taking performance into consideration, it’s almost too easy to forget that this is for what is the second fastest card on the market, and it draws less power than virtually any other DX12-era high performance card. In fact given its very close performance to GTX 980 Ti, perhaps the better comparison is there, in which case we’re looking at a savings of 80W at the wall. So 307W is definitely impressive, and a reminder of how great it is to get off of 28nm.
Looking at some inter-card comparisons, the difference compared to the GTX 970 actually puts the GTX 1070FE ahead by 6W. However I am a bit hesitant here to make too strong of a comparison since NVIDIA did not release and sample a reference GTX 970 card, so our reprogrammed EVGA card isn’t quite an apples-to-apples comparison. But more apples-to-apples is GTX 1080FE vs GTX 1070FE; very close to NVIDIA’s TDP ratings, the difference at the wall is 28W, affirming that GTX 1070FE delivers less performance, but it draws less power at the same time. Though with this in mind, it does mean that GTX 1070FE isn’t quite as efficient overall as GTX 1080FE; 30W in power savings is outpaced by the 20-25% performance drop.
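The efficiency argument in the last sentence can be made concrete. A minimal sketch, where the wall-power figures come from this page but the relative performance number is a placeholder taken from the article's stated 20-25% gap (midpoint used), not a measured result:

```python
# System power at the wall (Crysis 3) from this page, and a placeholder
# relative performance figure derived from the article's stated 20-25%
# GTX 1080FE -> GTX 1070FE performance drop (midpoint, 22.5%).

wall_power = {"GTX 1080FE": 335, "GTX 1070FE": 307}
relative_perf = {"GTX 1080FE": 1.0, "GTX 1070FE": 0.775}  # assumed, not measured

for card in wall_power:
    watts_per_unit = wall_power[card] / relative_perf[card]
    print(f"{card}: {watts_per_unit:.0f} W per unit of performance")

# GTX 1080FE lands around 335 W/unit vs ~396 W/unit for GTX 1070FE: the
# 28W wall-power saving is outpaced by the performance drop, as the
# article concludes.
```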
Shifting over to FurMark, we have a more holistic look at GPU power consumption. What we see here isn’t real world – FurMark is a synthetic test designed to max out virtually every aspect of the GPU – but it gives us an idea of what maximum power consumption should be like.
Starting with GTX 1080FE, it's interesting that FurMark only makes a 4W difference at the wall compared to Crysis 3. This test is nominally TDP limited, but in practice with NVIDIA's reference blower the card eventually hits 83C and throttles on temperature before it can sustain its full TDP. This means that we're essentially looking at a proxy test for the cooler: to sustain a given TDP limit, the cooler needs to be able to dissipate all of the heat that comes with it.
Bearing in mind that there is going to be some uncontrollable card-to-card variation, what these results hint at is that GTX 1080FE’s blower isn’t much better than GTX 980’s, despite the use of a vapor chamber cooler. Or at least, it isn’t tuned to dissipate much more heat than GTX 980 and may be operating on the conservative side. In any case, this puts worst case power consumption in the middle of the pack, significantly ahead of NVIDIA’s 250W cards and about even with GTX 980.
Meanwhile GTX 1070FE is once again near the top of the charts, behind only the Radeon R9 Nano and RX 480. I’ll skip the GTX 970 comparison due to the aforementioned sample differences and instead move on to the GTX 1080FE comparison, in which case the 50W difference at the wall ends up being quite surprising since it’s greater than the formal TDP difference. This will make a bit more sense once we get to temperatures, but what we’re looking at is a combination of GTX 1070FE being purely TDP limited – it doesn’t reach the card’s thermal throttle point – and undoubtedly some card-to-card variation in GTX 1070FE’s favor. Either way these results are consistent, and for the GTX 1070FE consistently awesome.
Up next we have idle temperatures. With NVIDIA retaining the same basic design of their reference blowers, there are no big surprises here. Both cards idle at 30C.
As for Crysis 3, we also see performance similar to other NVIDIA blowers, with both cards topping out at around 80C. It is interesting to note though that neither card reaches the 83C thermal throttle point – though the GTX 1080FE flirts with it – so what’s limiting the performance of these cards is primarily a combination of TDP and running out of turbo bins (or as GPU-Z calls it, VREL).
With FurMark the results are similar to above. Both cards reach the 80s, though only GTX 1080FE hits the 83C thermal throttle point. GTX 1070FE actually never reaches that point, which means that its cooler is more than powerful enough to keep up with its 150W TDP, as this should be the maximum load possible. This shouldn’t be too surprising, as the basic cooler design was previously used for the 165W GTX 980, so there’s a bit of additional headroom for a 150W board.
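The limiter behavior described across these results can be summarized in a small sketch. This is illustrative only: the ~13MHz bin size and the decision structure are simplified stand-ins, not NVIDIA's actual GPU Boost algorithm.

```python
# Simplified sketch of the three limiters discussed in this section:
# thermal throttling at 83C, the TDP limit, and running out of turbo
# bins (reported by GPU-Z as VREL). Numbers and structure are illustrative.

BIN_MHZ = 13            # approximate size of one boost bin
THROTTLE_TEMP_C = 83    # NVIDIA's thermal throttle point

def next_clock(current_mhz, max_boost_mhz, temp_c, board_power_w, tdp_w):
    if temp_c >= THROTTLE_TEMP_C or board_power_w > tdp_w:
        return current_mhz - BIN_MHZ   # drop a bin: TEMP or PWR limited
    if current_mhz >= max_boost_mhz:
        return current_mhz             # out of bins: VREL limited
    return current_mhz + BIN_MHZ       # headroom remains: boost up

# GTX 1070FE-like case under FurMark: stays below 83C, so purely TDP limited
print(next_clock(1759, 1898, 80, 151, 150))  # 1746 (power limited)
print(next_clock(1759, 1898, 83, 140, 150))  # 1746 (thermally limited)
print(next_clock(1898, 1898, 75, 140, 150))  # 1898 (VREL limited)
```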
Last but not least, we have noise. As with the temperature situation, the reuse of NVIDIA’s blower design means that we already have a pretty good idea of what the cooler can do, and the only real question is how they’ve balanced it between performance and noise. But for idle noise in particular, what we’re looking at is the floor for what this cooler can do.
Moving to load noise, as it turns out NVIDIA has tuned the coolers on both cards to operate similarly to their past cards. At the 47dB(A) mark we find GTX 1070FE, GTX 980, GTX 770, GTX 780, our non-reference GTX 970, and finally GTX 1080FE at 47.6dB(A). What this indicates is that the acoustic profile under a gaming workload is exceedingly close to NVIDIA's past designs. A GTX 1080FE or GTX 1070FE is going to sound basically exactly like a sub-200W reference GTX 700 or 900 series card. This does make me suspect that the real-world cooling performance of all of these cards, in terms of heat moved, is also quite similar.
In any case, I've previously called this point NVIDIA's sweet spot, and obviously this doesn't change. NVIDIA's blower continues to be unrivaled, making it possible to have a fully exhausting card without it being noisy. The only possible downside to any of this is that it means that despite its lower gaming performance relative to GTX 1080FE, GTX 1070FE isn't really any quieter.
Finally, FurMark confirms our earlier observations. GTX 1070FE doesn’t even move compared to our Crysis 3 results. Meanwhile GTX 1080FE peaks a little higher – 48.6dB(A) – as it reaches 83C and the fan spins up a bit more to handle the heat.
In the end this may be the single most convincing argument for paying NVIDIA's price premium for the Founders Edition cards. Like the GTX 900 and 700 series, when it comes to acoustics, NVIDIA has done a fantastic job building a quiet blower. We'll undoubtedly see some quieter open air cards (in fact we already have some in for future testing), but open air cards have to forgo the near-universal compatibility and peace of mind that comes from a blower.
Nvidia GeForce GTX 1080 review
Nvidia has recently unveiled its latest high-end graphics cards for gamers, the GeForce GTX 1080 and GTX 1070. The company's CEO Jen-Hsun Huang made claims of big performance gains and new-found levels of efficiency at attainable price points for the masses. With today's official launch of these new graphics cards based on Nvidia's new Pascal architecture, the GeForce GTX 1080 promises to deliver the goods on all fronts in performance and value, with impressive performance-per-watt metrics. This is in part due to Nvidia's move to a 16nm FinFET manufacturing process with fab partner TSMC.
The GP104 GPU under the hood of Nvidia’s new GeForce GTX 1080 is comprised of some 7.2 billion transistors and has a die size measuring 314mm2. The company’s previous generation Maxwell architecture-based GeForce GTX 980 has a GPU die that measures 398mm2 and is made up of roughly 5.2 billion transistors. That equates to 2 billion more transistors in about 20 percent less die area for the new GeForce GTX 1080 Pascal-based GPU.
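The density improvement implied by those die figures is worth working out. A quick sketch using only the transistor counts and die areas quoted above:

```python
# Transistor density comparison from the die figures quoted above.
gp104 = {"transistors_b": 7.2, "area_mm2": 314}  # GTX 1080, 16nm FinFET
gm204 = {"transistors_b": 5.2, "area_mm2": 398}  # GTX 980, 28nm planar

density_gp104 = gp104["transistors_b"] * 1e9 / gp104["area_mm2"]
density_gm204 = gm204["transistors_b"] * 1e9 / gm204["area_mm2"]

print(f"GP104: {density_gp104 / 1e6:.1f}M transistors/mm^2")   # ~22.9M
print(f"GM204: {density_gm204 / 1e6:.1f}M transistors/mm^2")   # ~13.1M
print(f"density ratio: {density_gp104 / density_gm204:.2f}x")  # ~1.75x
print(f"area reduction: {1 - gp104['area_mm2'] / gm204['area_mm2']:.0%}")  # ~21%
```

So the "2 billion more transistors in about 20 percent less die area" claim works out to roughly a 1.75x density improvement from the node change.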
Nvidia also redesigned key areas of the architecture, like the memory IO structure, including taking advantage of Micron's latest GDDR5X memory technology, enabling 320GB/sec of bandwidth versus the previous generation GTX 980 at 224GB/sec. Nvidia also implemented new color compression algorithms to decrease memory and cache utilization, and introduced asynchronous compute support along with a new compute technique called preemption that allows the GPU to schedule workloads in a more fine-grained manner, for better efficiency. These features also afford the architecture lower latency and faster rendering in VR applications, especially when combined with new features like Nvidia's Simultaneous Multi-Projection technology, which can perform image aspect correction in a single pass in VR apps without having to do double the geometry work like previous-generation architectures.
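The quoted bandwidth figures follow directly from the memory clock and bus width: peak bandwidth is the effective data rate per pin times the bus width in bytes. A quick check:

```python
# Peak memory bandwidth = effective data rate (Gbps per pin) * bus width
# in bytes. Reproduces the 320GB/s and 224GB/s figures quoted above.

def bandwidth_gbps(effective_rate_gbps, bus_width_bits):
    return effective_rate_gbps * bus_width_bits / 8  # GB/s

gtx_1080 = bandwidth_gbps(10, 256)  # GDDR5X at 10Gbps per pin, 256-bit bus
gtx_980  = bandwidth_gbps(7, 256)   # GDDR5 at 7Gbps per pin, 256-bit bus
print(gtx_1080, gtx_980)  # 320.0 224.0
```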
Here are the specifications of the new GTX 1080, as well as the two other Nvidia graphics cards we have added in the performance comparison charts of this brief review — the GeForce GTX 980 and the overclocked Zotac GeForce GTX 980 Ti:
| | GeForce GTX 1080 | GeForce GTX 980 | ZOTAC GeForce GTX 980 Ti AMP! Extreme |
|---|---|---|---|
| Architecture | Pascal (GP104) | Maxwell (GM204) | Maxwell (GM200) |
| Manufacturing process | 16nm | 28nm | 28nm |
| GPU clock (maximum) | 1,733MHz | 1,216MHz | 1,075MHz |
| CUDA cores | 2,560 | 2,048 | 2,816 |
| Texture units | 160 | 128 | 176 |
| Memory | 8GB GDDR5X | 4GB GDDR5 | 6GB GDDR5 |
| Memory clock | 10GHz | 7GHz | |
| Memory interface | 256-bit | 256-bit | 384-bit |
| ROP units | 64 | 64 | 96 |
| Board power | 180W | 165W | 250W |
The new GeForce GTX 1080 will retail for $699 in the Founders Edition model that was tested, and $599 for standard edition models.
The GTX 1080 maintains the look and feel of the reference cards going back as far as the GeForce GTX 780, although the new card features a silver heatsink. Measuring 10.5in long and fitting into the dual-slot form factor, the reference card is now called the Founders Edition (FE).
The 180W card is kept cool using a heatsink-and-fan unit. Nvidia outfits its own card with a single 8-pin power connector, moving away from the dual six pins on the similar-wattage GTX 980.
A look at the back shows that Nvidia hasn’t changed the outputs in the move from GTX 980 to GTX 1080. Dual-link DVI, for legacy reasons, sits on top of a trio of DisplayPort and a single HDMI 2.0b.
The DisplayPort is 1.2 certified and spec 1.3/1.4-ready, which means supporting 4K screens at 120Hz or 5K at 60Hz from a single cable. Looking forward, 8K at 60Hz is supported by using two cables. Four displays can be driven at once, as well.
Starting with the GeForce GTX 1080, Nvidia is discontinuing official support for 3-way and 4-way multi-GPU SLI setups, rolling out rigid new high-bandwidth bridges that limit SLI configurations to just two graphics cards.
The new SLI HB connectors, which work only with Nvidia’s new Pascal GPU-based graphics cards, occupy both SLI connectors on a GeForce graphics card in order to transfer data between them at 650MHz, compared to the 400MHz that traditional SLI bridges run at. This allows dual GPUs to deliver a smoother gaming experience at 4K-plus resolutions and on multi-monitor Nvidia Surround setups, according to Nvidia. That of course means there’s no room to extend SLI configurations to more than two GeForce cards now.
However, you can still use a 3-way setup with two GTX 1080s in SLI and a third dedicated to PhysX alone. Nvidia also says that developers can manually support 3- and 4-way graphics card setups using DirectX 12’s multi-display adapter and explicit linked display adapter modes.
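The headline improvement of the SLI HB bridge is its interface clock. A trivial sketch of the relative speedup, noting that the actual link bandwidth also depends on the bridge's bus width, which isn't stated here, and that the HB bridge additionally occupies both connectors:

```python
# Relative interface clock of the new SLI HB bridge vs a traditional
# SLI bridge, from the figures above. Total bandwidth also depends on
# link width (not stated in the article), so this is clock ratio only.
sli_hb_mhz = 650
legacy_mhz = 400
print(f"{sli_hb_mhz / legacy_mhz:.3f}x")  # 1.625x, i.e. 62.5% faster per link
```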
We tested the Nvidia GTX 1080 using the following PC setup:
| Component | Details |
|---|---|
| CPU | Core i7-6700K |
| Motherboard | ASUS Z170-A |
| Memory | DDR4-2133 8GB × 2 (15-15-15-35, 1.20V) |
| Storage | 256GB SSD |
| Graphics driver | GeForce 368. |
| OS | Windows 10 Pro 64bit |
Measuring the power consumption of Nvidia video cards — Inno3D GeForce GTX 1080 Twin X2
This material was written by a website visitor and has been rewarded.
This article will measure the power consumption of the Inno3D GeForce GTX 1080 Twin X2 (N1080-1SDN-P6DN) graphics card, powered by a separate Corsair HX1000i PSU with Corsair Link Digital support for monitoring power settings.
A more detailed look at connecting a video card to a separate power supply can be found in the previous article The whole truth about the power consumption of AMD video cards. ASUS Radeon RX 470 DirectCU II.
Previous graphics card power consumption measurements:
- The whole truth about AMD graphics card power consumption. ASUS Radeon RX 470 DirectCU II
- Measuring the power consumption of AMD video cards. PowerColor Radeon AXRX VEGA 56 8GBHBM2-3DH reference
- We measure the power consumption of AMD video cards. PowerColor Radeon VII AXVII 16GBHBM2-3DH
- Measuring the power consumption of AMD graphics cards — PowerColor Radeon™ RX 480 8GB GDDR5
Contents:
- 3Dmark Time Spy Extreme
- FurMark
- Company of Heroes 2
- Total
3Dmark Time Spy Extreme
In the first test segment of 3DMark Time Spy Extreme, the following results were obtained: GPU Power maximum — …33 W, average — 172.14 W; Videocard +12V Power maximum — 202.31 W, average — 174.6 W. In the second test segment, GPU Power maximum — 218.66 W, average — 169.46 W; Videocard +12V Power maximum — 223.45 W, average — 173.92 W.
With the increase in the power consumption limit, the following results were obtained: 1st test section — GPU Power maximum 216.47 W, average 191.53 W; Videocard +12V Power maximum — 217.69 W, average — 193.74 W. 2nd test section — GPU Power maximum 239.42 W, average 190.44 W; Videocard +12V Power maximum — 253.64 W, average — 194.23 W.
FurMark
In FurMark AAx8, the data is almost the same: maximum GPU Power 180.08 W, average — 173 W; Videocard +12V Power maximum — 187.21 W, average — 187.21 W. With a 20% increase in the limit, the following power consumption values were obtained: maximum GPU Power 188.22 W, average — 181.17 W; Videocard +12V Power maximum — 196.22 W, average — 196.27 W.
FurMark AAx0 repeats the same situation as before: GPU Power maximum — 186.32 W, average — 176.73 W; Videocard +12V Power maximum — 193.25 W, average — 180.18 W. With a 20% increase in the limit, the following power consumption values were obtained: maximum GPU Power 221.85 W, average — 208.59 W; Videocard +12V Power maximum — 226.47 W, average — 214.66 W.
Company of Heroes 2
In Company of Heroes 2 1920×1080, the power consumption of the video card was: maximum GPU Power 207.37 W, average — 165.50 W, Videocard + 12V Power maximum — 205.33 W, average — 169.84 W. With a 20% increase in the limit, the following power consumption values were obtained: maximum GPU Power 234.31 W, average — 183.84 W, Videocard + 12V Power maximum — 232.5 W, average — 188.05 W.
Total
As a result, we see that the Nvidia driver reports the total power consumption of the video card, unlike AMD's. This video card is based on the original GTX 1080 Founders Edition printed circuit board (PCB); measurements taken by Tom's Hardware can be seen below.
Measuring the power consumption of Nvidia video cards — Inno3D GeForce GTX 1080 Twin X2 and breaking the power consumption limit of the reference PCB
This article will measure the power consumption of an Inno3D GeForce GTX 1080 Twin X2 (N1080-1SDN-P6DN) graphics card with BIOS from another graphics card, it will be powered by a separate Corsair HX1000i power supply that supports Corsair Link Digital to monitor power settings.
A more detailed look at connecting a video card to a separate power supply can be found in the previous article The whole truth about the power consumption of AMD video cards. ASUS Radeon RX 470 DirectCU II. The previous power consumption measurement in this series was We measure the power consumption of AMD video cards — VEGA 56 undervolting and find out if lowering the voltage will be beneficial.
Contents:
- 3Dmark Time Spy Extreme
- FurMark
- Company of Heroes 2
- Total
To raise the power consumption limit, a Palit GTX 1080 8GB (GameRock Premium) 230 W BIOS was used, which is suitable for video cards with a reference PCB. With this BIOS the video card can no longer correctly report its GPU Power and TDP telemetry; the real power consumption is shown by Videocard +12V Power [W], measured at the output of the power supply.
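Since the on-card telemetry is untrustworthy with the foreign BIOS, the PSU-side reading is the ground truth here. For interpreting the TDP percentages quoted below, a trivial helper converts a reported TDP percentage into watts for a given BIOS power limit; the 230 W figure is the Palit BIOS limit mentioned above, and this conversion is only as meaningful as the (now questionable) telemetry itself.

```python
# Convert a reported TDP percentage into watts for a given BIOS power
# limit. The 230 W default is the Palit GameRock Premium BIOS limit
# mentioned above; with the foreign BIOS this telemetry is approximate.

def tdp_percent_to_watts(tdp_percent, bios_limit_w=230):
    return tdp_percent / 100 * bios_limit_w

print(tdp_percent_to_watts(110.7))  # ~254.6 W implied draw at peak
print(tdp_percent_to_watts(100))    # 230 W at exactly the limit
```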
3DMark Time Spy Extreme
In the first test segment of 3DMark Time Spy Extreme, the following results were obtained: GPU Power maximum — 114 W, average — 106.6 W; Videocard +12V Power maximum — 219.9 W, average — 195.9 W; TDP maximum — 81.7%, average — 73.2%. In the second test segment, GPU Power maximum — 138.4 W, average — 112.4 W; Videocard +12V Power maximum — 285.7 W, average — 207.4 W; TDP maximum — 110.7%, average — 77.8%.
With the increase in the power consumption limit, the following results were obtained: 1st test section GPU Power maximum 115.5 W, average 107.8 W, Videocard +12V Power maximum — 222.9 W, average — 197.4 W, TDP maximum 84%, average 73.6%; 2nd test section GPU Power maximum 138.7 W, average 113.2 W, Videocard +12V Power maximum — 285.7 W, average — 211.3 W, TDP maximum 108.5%, average 78.1%.
FurMark
FurMark AAx0 repeats the same situation as before: maximum GPU Power 146.9 W, average — 139 W; Videocard +12V Power maximum — 276.7 W, average — 258.3 W; TDP maximum — 100.5%, average — 95.2%. With a 20% increase in the limit, the following power consumption values were obtained: maximum GPU Power 161.1 W, average — 153.5 W; Videocard +12V Power maximum — 309.81 W, average — 295.2 W; TDP maximum — 115.8%, average — 107.8%.
Company of Heroes 2
In Company of Heroes 2 at 1920×1080, the power consumption of the video card was: maximum GPU Power 122.4 W, average — 102.4 W; Videocard +12V Power maximum — 243.6 W, average — 192.2 W; TDP maximum — 95.2%, average — 72%. With a 20% increase in the limit, the following power consumption values were obtained: maximum GPU Power 124 W, average — 102.2 W; Videocard +12V Power maximum — 240.9 W, average — 191.6 W; TDP maximum — 98.2%, average — 71.3%.
Total
Data for the video card with its native BIOS is taken from the earlier article We measure the power consumption of Nvidia video cards — Inno3D GeForce GTX 1080 Twin X2.