NVIDIA GeForce GTX 1070 8 GB Review — Power Consumption

Power Consumption Results

Stock Power Consumption

Power consumption at idle is less than 10W for the two Founders Edition boards, which is a respectable result. The same can't be said for MSI's two Pascal-based models, though; they land at 15 and 16W. The explanation for those higher idle figures lies in Nvidia's restrictive rules for the number of GPU Boost steps: due to how the technology is handled, Nvidia's Founders Edition cards idle at 139 MHz, whereas MSI's factory-overclocked models sit at 235 MHz.

As it stands, if manufacturers want a higher maximum GPU Boost frequency, they have to take all of the other steps down a notch to make space for it. In practice, that means the lowest GPU Boost clock rate step gets eliminated from the bottom of the BIOS' table to make room for the additional step at the top.
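To illustrate that constraint, here is a minimal sketch of a fixed-size boost-step table. Only the 139 and 235 MHz values come from the measurements above; the table size and every other clock value are placeholders, not Nvidia's actual BIOS contents.

```python
# Rough illustration only: a fixed-size table of GPU Boost clock steps, where adding
# a higher step at the top pushes the lowest step out of the bottom.
# Only 139 MHz and 235 MHz come from the article; everything else is made up.

BOOST_TABLE_SIZE = 8  # assumed table size, not the real BIOS value

def add_top_step(steps, new_top_mhz):
    """Append a higher boost step, dropping the lowest one to keep the table size fixed."""
    steps = sorted(steps)
    if len(steps) >= BOOST_TABLE_SIZE:
        steps = steps[1:]  # the bottom (idle-friendly) step is eliminated
    return steps + [new_top_mhz]

# Founders Edition-style table: the bottom step allows the 139 MHz idle clock.
reference = [139, 235, 506, 734, 936, 1139, 1506, 1683]

# A factory-overclocked card wanting one extra step at the top (1823 MHz is hypothetical)
# loses the 139 MHz entry, so it now idles at the next step up: 235 MHz.
overclocked = add_top_step(reference, 1823)

print("reference idles at", min(reference), "MHz")      # 139 MHz
print("overclocked idles at", min(overclocked), "MHz")  # 235 MHz
```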

Next, let’s take a detailed look at the five graphics cards’ power consumption during gaming. Starting with the general overview, it’s plain to see that the GeForce GTX 1070 Founders Edition produces the lowest spikes, whereas MSI’s older Maxwell-based GeForce GTX 980 Ti Lightning knows no bounds.


Taking a closer look at the motherboard slot yields a surprising finding: none of the cards in this round-up use the 3.3V rail at all. This means that the PCIe slot doesn't really provide the 75W most enthusiasts assume it does, since the 12V rail only offers about 66W on its own. That's almost exactly where Nvidia's GeForce GTX 1070 Founders Edition ends up, with spikes well in excess of 75W. They're not particularly dangerous, but they can cause audible artifacts if you're using on-board audio on a poorly-designed motherboard.
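For context, the slot's nominal 75W budget breaks down by rail according to the PCIe CEM current limits, which a quick calculation makes explicit:

```python
# Nominal per-rail budget of a 75W PCIe x16 graphics slot (CEM current limits).
RAILS = {
    "3.3V": (3.3, 3.0),   # volts, max amps -> ~9.9W (unused by the cards measured here)
    "12V":  (12.0, 5.5),  # volts, max amps -> ~66W
}

total = 0.0
for name, (volts, amps) in RAILS.items():
    watts = volts * amps
    total += watts
    print(f"{name:>4}: {watts:5.1f} W")

print(f"Slot total: {total:.1f} W")  # ~75.9W, but only if a card actually used both rails
```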

The other graphics cards fare a lot better thanks to their additional power phases. MSI's GeForce GTX 980 Ti Lightning doesn't draw power through the motherboard's PCIe slot at all. Instead, it adds a six-pin power connector on top of the expected pair of eight-pin connectors.


A detailed 3D graph illustrates the distribution across each rail. The curve is smoothed out.


Overall, each card performs as expected. What happens if we play with their clock rates and power targets, though?

Power Consumption at Different Frequencies

In order to find out, we performed 10 to 20 test runs per board, stepping each card from its lowest possible power target all the way up to its highest stable overclock. The following bar graph shows the three key results: the lowest possible power consumption, stock settings, and the highest possible overclock.

But we didn't want to limit ourselves to three results. Instead, we set out to examine each graphics card's efficiency and performance across its entire range. That required adjusting the settings to converge on each of the GPU Boost clock rates we wanted to test; for each one, we had to find the setting that allowed the card to run stably under full load in a closed PC case after warming up.
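Our numbers come from dedicated measurement hardware, but the basic sweep procedure can be sketched in software. The fragment below is only a rough approximation, assuming a single Nvidia GPU with nvidia-smi available and a steady load already running; it steps through power limits and logs the driver-reported board power and SM clock, which are far less precise than shunt-based measurements.

```python
import subprocess
import time

def query_gpu(fields, gpu=0):
    """Read driver-reported telemetry for one GPU via nvidia-smi (software sensors only)."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(gpu), f"--query-gpu={fields}",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [float(v) for v in out.strip().split(",")]

def sweep_power_limits(limits_w, gpu=0, settle_s=120):
    """Step through power limits (watts), recording power draw and SM clock at each one.

    Assumes a steady 3D load is already running in a closed, warmed-up case, that the
    script has the privileges nvidia-smi needs to change the limit, and that every value
    lies within the min/max range reported by `nvidia-smi -q -d POWER`.
    """
    results = []
    for limit in limits_w:
        subprocess.check_call(["nvidia-smi", "-i", str(gpu), "-pl", str(limit)])
        time.sleep(settle_s)  # let clocks and temperatures settle before sampling
        power_w, sm_clock_mhz = query_gpu("power.draw,clocks.sm", gpu)
        results.append((limit, power_w, sm_clock_mhz))
    return results

if __name__ == "__main__":
    for limit, power_w, clock in sweep_power_limits(range(95, 171, 15)):
        print(f"limit {limit} W -> draw {power_w:.1f} W, SM clock {clock:.0f} MHz")
```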

The graph below clearly shows that frequency and power consumption have a very linear relationship. This is actually an odd result based on past experience. Efficiency curves usually level out toward the top, which is to say that power consumption starts exploding at some point.

It turns out that the increase really is linear, so long as GPU Boost is left alone to do its thing and you don't intervene with (largely pointless) manual voltage increases. That raises the question of what happens to performance, though. As clock frequency, power consumption, and performance all increase, at least one of the three has to run into a wall. Since the limiting factor isn't clock rate or power consumption, our next order of business is exploring the third variable.
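If you want to check that linearity on your own sweep data, a simple least-squares fit is enough. The sample points below are placeholders, not our measurements:

```python
import numpy as np

# Placeholder (GPU clock MHz, board power W) pairs standing in for a real sweep.
clock_mhz = np.array([1500, 1600, 1700, 1800, 1900, 2000], dtype=float)
power_w   = np.array([115.0, 126.0, 138.0, 149.0, 161.0, 172.0])

# Least-squares line: power ~ slope * clock + intercept
slope, intercept = np.polyfit(clock_mhz, power_w, 1)

# R^2 close to 1 means the relationship really is (near-)linear over the tested range.
predicted = slope * clock_mhz + intercept
r_squared = 1.0 - ((power_w - predicted) ** 2).sum() / ((power_w - power_w.mean()) ** 2).sum()

print(f"~{slope * 100:.1f} W per extra 100 MHz, R^2 = {r_squared:.3f}")
```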


GTX 1070 Ti Overclock Settings and Hash Rates for Mining

Hey guys!

This week I am sharing my experience with the GTX 1070 Ti. I got my hands on this GPU a few weeks back, and since then I have been testing different overclock settings to get the most out of it. Below are my most efficient settings and hash rates for several algorithms. In general, it is a GPU that offers a lot of versatility for mining different algorithms without using too much power, provided it is overclocked correctly. On factory settings the GTX 1070 Ti really does not do anything special, while still using a lot of power.

— My first finding is that GTX 1070 Ti GPUs do not consume too much energy; power usage is reasonable, at just above 110W. On stock settings the GTX 1070 Ti will do around 25-26 Mh/s (DaggerHashimoto/Ethash) at about 120W. Overclocking correctly can get you well above 30 Mh/s. My goal with most GPUs is to get as much out of them as possible at the lowest possible power consumption and temperature. When overclocking, I always try setting the power limit as low as possible in MSI Afterburner (usually this means a 70% power limit) while still maintaining a stable hash rate. The sweet spot for this GPU is 110-140W, depending on the algorithm. Dual mining, of course, consumes more electricity: expect a rise in power consumption of approximately 20%.

In MSI Afterburner I have the temperature limit set below 85 degrees Celsius and the fans set to auto, so the GPU regulates its fan speed by itself.

I managed to get the 1-fan version down to 95W and the 2-fan version to around 105W, but I did not like the mining performance at those limits.
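As a sanity check on those wattages: the power limit slider in Afterburner is a percentage of the card's stock board power, which is 180W for the reference GTX 1070 Ti (partner cards may use a different base value), so a 70% limit works out like this:

```python
# The Afterburner power limit slider is a percentage of the stock board power target.
STOCK_BOARD_POWER_W = 180  # reference GTX 1070 Ti; partner cards can use a different base

for percent in (100, 80, 70):
    cap_w = STOCK_BOARD_POWER_W * percent / 100
    print(f"{percent:3d}% power limit -> {cap_w:.0f} W cap")

# 70% -> 126 W, which lines up with the 110-140 W sweet spot mentioned above.
```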

When talking about temperatures, the GTX 1070 Ti runs hotter than the GTX 1050 Ti. This actually makes sense, since this GPU performs much better and achieves hash rates up to 110-120% higher than a single GTX 1050 Ti. A higher hash rate and higher profitability do not only push power consumption up; they also mean the GPU will run hotter and the fans will have to do more work. With that in mind, expect temperatures above 75 and even above 80 degrees Celsius when running a multiple-GPU rig. I keep the fans set to auto speed in MSI Afterburner, and in July they run above 70% with the GPU at 85 degrees Celsius in my rig; the card bumps the fan speed up by itself when temperatures get too high (mostly in the summer months).

  • Core and Memory clock: I first checked the internet to find a starting point for the settings. For dual mining, a combination of +150/+650 (core clock: +150, memory clock: +650) works very well; however, right now dual mining is not worth it, so I mostly focused on mining Ethash (DaggerHashimoto) and other profitable algorithms such as Equihash. I also noticed that the blower-style cooler does not work that well with the GTX 1070 Ti; I personally tested a blower card and the regular 2-fan version and found that temperatures can differ by up to 10 degrees Celsius. Playing with the settings on Ethash, I got my card to almost 33 Mh/s with the core clock set to -15 and the memory clock set to +800 (power limit: 70%). On Equihash I used +200/+600 with the power limit set to 70% and the fans on auto.

  • Mining performance: The GTX 1070 Ti is an extremely popular GPU and generally shines when mining the Ethash (DaggerHashimoto) algorithm, since it easily goes over 30 Mh/s. With the settings posted above I overclocked my cards to around 32.5 Mh/s, while the GPU only used around 120W. I must say that I dislike the price of this GPU: right now you can get one for 450-500 euros, which works out to around 14 euros per Mh/s (see the quick breakdown after this list). It can achieve high hashing power, but the price and the temperatures are also high. The GTX 1070 Ti will, on average, do best on the Ethash algorithm, but it also performs well on Equihash and X16R, followed by Lyra2REv2 and CryptoNightHeavy. It should easily go over 450 h/s when mining Equihash, and users could even get to 500 h/s with power usage around 120-130W. I personally got my GTX 1070 Ti over 500 h/s on Equihash and to around 48 Mh/s when mining Lyra2REv2.
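Here is the quick breakdown of the numbers quoted above in a few lines of Python; the electricity price is just an example value, not something from my setup:

```python
# Numbers quoted above: 450-500 EUR card price, ~32.5 Mh/s Ethash at ~120 W.
price_eur     = (450 + 500) / 2   # mid-point of the quoted price range
hashrate_mhs  = 32.5              # overclocked Ethash hash rate
board_power_w = 120               # power draw at those settings

print(f"Price per Mh/s : {price_eur / hashrate_mhs:.1f} EUR")        # ~14.6 EUR
print(f"Efficiency     : {hashrate_mhs / board_power_w:.2f} Mh/s per watt")

# Example electricity price only; plug in your own tariff.
kwh_price_eur = 0.25
daily_cost = board_power_w / 1000 * 24 * kwh_price_eur
print(f"Power cost/day : {daily_cost:.2f} EUR at {kwh_price_eur} EUR/kWh")
```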

**Conclusion with pros and cons:** For those with a long-term view on mining who want to get involved, this is a great GPU, since its performance should keep shining in the future. As I mentioned before, this GPU will manage higher speeds, but at the same time it will cost you a bit more and run hot. A big pro that I see is the potential resale value in the future: this is a GPU that loses value slowly, similar to the GTX 970, which is still going for a nice price.

This is a GPU for anyone who wants to get serious about mining, and it will perform well for a long period of time.