ASUS GTX 680 DirectCU II OC Specs

Asus GeForce GTX 680 DirectCU II V2 vs Nvidia GeForce GTX 1050: What is the difference?

Asus GeForce GTX 680 DirectCU II V2: 42 points
Nvidia GeForce GTX 1050: 41 points

54 facts in comparison

Why is Asus GeForce GTX 680 DirectCU II V2 better than Nvidia GeForce GTX 1050?

  • 1.36 TFLOPS higher floating-point performance?
    3.09 TFLOPS vs 1.73 TFLOPS
  • 56.14 GTexels/s higher texture rate?
    129 GTexels/s vs 72.86 GTexels/s
  • 79.9 GB/s more memory bandwidth?
    192 GB/s vs 112.1 GB/s
  • 128 bit wider memory bus width?
    256 bit vs 128 bit
  • 768 more shading units?
    1536 vs 768
  • 240 million more transistors?
    3540 million vs 3300 million
  • Supports multi-display technology?
  • 19°C lower load GPU temperature?
    78°C vs 97°C

Why is Nvidia GeForce GTX 1050 better than Asus GeForce GTX 680 DirectCU II V2?

  • 386 MHz faster GPU clock speed?
    1392 MHz vs 1006 MHz
  • 4.23 GPixel/s higher pixel rate?
    36.43 GPixel/s vs 32.2 GPixel/s
  • 120 W lower TDP?
    75 W vs 195 W
  • 250 MHz faster memory clock speed?
    1752 MHz vs 1502 MHz
  • 1000 MHz higher effective memory clock speed?
    7008 MHz vs 6008 MHz
  • 1 newer version of DirectX?
    12 vs 11
  • 0.3 newer version of OpenGL?
    4.6 vs 4.3
  • 460 MHz faster GPU turbo speed?
    1518 MHz vs 1058 MHz


User reviews

Overall Rating

Asus GeForce GTX 680 DirectCU II V2: no user reviews yet.

Nvidia GeForce GTX 1050: 6.0/10 overall (2 user reviews)

  • Value for money: 6.5/10 (2 votes)
  • Gaming: 6.0/10 (2 votes)
  • Performance: 6.0/10 (2 votes)
  • Fan noise: 10.0/10 (2 votes)
  • Reliability: 6.0/10 (2 votes)

Performance

1.GPU clock speed

1006MHz

1392MHz

The graphics processing unit (GPU) has a higher clock speed.

2.GPU turbo

1058MHz

1518MHz

When the GPU is running below its limitations, it can boost to a higher clock speed in order to give increased performance.

3.pixel rate

32.2 GPixel/s

36.43 GPixel/s

The number of pixels that can be rendered to the screen every second.

4.floating-point performance

3.09 TFLOPS

1.73 TFLOPS

Floating-point performance is a measurement of the raw processing power of the GPU.

5.texture rate

129 GTexels/s

72.86 GTexels/s

The number of textured pixels that can be rendered to the screen every second.
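These peak rates are simple products of unit counts and clock speed. A quick sketch in Python (the shader, TMU, and ROP counts for the GTX 680 below come from public spec listings rather than this page, so treat them as assumptions):

```python
# Peak GPU throughput = functional unit count x clock speed.
# Unit counts are assumed from public spec listings for the GTX 680.

def tflops(shaders, clock_ghz, ops_per_clock=2):
    # One fused multiply-add counts as 2 floating-point ops per shader per clock.
    return shaders * ops_per_clock * clock_ghz / 1000.0

def texture_rate(tmus, clock_ghz):
    return tmus * clock_ghz  # GTexels/s

def pixel_rate(rops, clock_ghz):
    return rops * clock_ghz  # GPixels/s

# GTX 680: 1536 shaders, 128 TMUs, 32 ROPs at a 1.006 GHz base clock
print(round(tflops(1536, 1.006), 2))    # 3.09 TFLOPS
print(round(texture_rate(128, 1.006)))  # 129 GTexels/s
print(round(pixel_rate(32, 1.006), 1))  # 32.2 GPixels/s
```

The three results line up with the 3.09 TFLOPS, 129 GTexels/s, and 32.2 GPixel/s figures quoted in the comparison above.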

6.GPU memory speed

1502MHz

1752MHz

The memory clock speed is one aspect that determines the memory bandwidth.

7.shading units

Shading units (or stream processors) are small processors within the graphics card that are responsible for processing different aspects of the image.

8.texture mapping units (TMUs)

TMUs take textures and map them to the geometry of a 3D scene. More TMUs will typically mean that texture information is processed faster.

9.render output units (ROPs)

The ROPs are responsible for some of the final steps of the rendering process, writing the final pixel data to memory and carrying out other tasks such as anti-aliasing to improve the look of graphics.

Memory

1.effective memory speed

6008MHz

7008MHz

The effective memory clock speed is calculated from the size and data rate of the memory. Higher clock speeds can give increased performance in games and other apps.

2.maximum memory bandwidth

192GB/s

112.1GB/s

This is the maximum rate that data can be read from or stored into memory.
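Peak bandwidth follows from the bus width and the effective clock. A small sketch showing how both cards' figures above fall out (GDDR5's quad data rate is the assumption here):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
# GDDR5 is quad-pumped: effective clock = 4 x memory clock.

def effective_clock_mhz(memory_clock_mhz, pumping=4):
    return memory_clock_mhz * pumping

def bandwidth_gbs(bus_width_bits, eff_mhz):
    return (bus_width_bits / 8) * eff_mhz / 1000.0

print(effective_clock_mhz(1502))           # 6008 MHz (GTX 680)
print(round(bandwidth_gbs(256, 6008)))     # 192 GB/s (GTX 680)
print(round(bandwidth_gbs(128, 7008), 1))  # 112.1 GB/s (GTX 1050)
```

This reproduces the 192 GB/s vs 112.1 GB/s comparison above, and also shows why the GTX 680's slower effective clock still wins: its bus is twice as wide.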

3.VRAM

VRAM (video RAM) is the dedicated memory of a graphics card. More VRAM generally allows you to run games at higher settings, especially for things like texture resolution.

4.memory bus width

256bit

128bit

A wider bus width means that it can carry more data per cycle. It is an important factor of memory performance, and therefore the general performance of the graphics card.

5.version of GDDR memory

Newer versions of GDDR memory offer improvements such as higher transfer rates that give increased performance.

6.Supports ECC memory

✖Asus GeForce GTX 680 DirectCU II V2

✖Nvidia GeForce GTX 1050

Error-correcting code memory can detect and correct data corruption. It is used when it is essential to avoid corruption, such as in scientific computing or when running a server.

Features

1.DirectX version

DirectX is used in games, with newer versions supporting better graphics.

2.OpenGL version

OpenGL is used in games, with newer versions supporting better graphics.

3.OpenCL version

Some apps use OpenCL to apply the power of the graphics processing unit (GPU) for non-graphical computing. Newer versions introduce more functionality and better performance.

4.Supports multi-display technology

✔Asus GeForce GTX 680 DirectCU II V2

✖Nvidia GeForce GTX 1050

The graphics card supports multi-display technology. This allows you to configure multiple monitors in order to create a more immersive gaming experience, such as having a wider field of view.

5.load GPU temperature

A lower load temperature means that the card produces less heat and its cooling system performs better.

6.supports ray tracing

✖Asus GeForce GTX 680 DirectCU II V2

✖Nvidia GeForce GTX 1050

Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows, and reflections in games.

7.Supports 3D

✔Asus GeForce GTX 680 DirectCU II V2

✔Nvidia GeForce GTX 1050

Allows you to view in 3D (if you have a 3D display and glasses).

8.supports DLSS

✖Asus GeForce GTX 680 DirectCU II V2

✖Nvidia GeForce GTX 1050

DLSS (Deep Learning Super Sampling) is an upscaling technology powered by AI. It allows the graphics card to render games at a lower resolution and upscale them to a higher resolution with near-native visual quality and increased performance. DLSS is only available on select games.

9.PassMark (G3D) result

Unknown (Nvidia GeForce GTX 1050).

This benchmark measures the graphics performance of a video card. Source: PassMark.

Ports

1.has an HDMI output

✔Asus GeForce GTX 680 DirectCU II V2

✔Nvidia GeForce GTX 1050

Devices with an HDMI or mini HDMI port can transfer high definition video and audio to a display.

2.HDMI ports

Unknown (Asus GeForce GTX 680 DirectCU II V2).

More HDMI ports mean that you can simultaneously connect numerous devices, such as video game consoles and set-top boxes.

3.HDMI version

Unknown (Asus GeForce GTX 680 DirectCU II V2).

HDMI 2.0

Newer versions of HDMI support higher bandwidth, which allows for higher resolutions and frame rates.

4.DisplayPort outputs

Allows you to connect to a display using DisplayPort.

5.DVI outputs

Allows you to connect to a display using DVI.

6.mini DisplayPort outputs

Allows you to connect to a display using mini-DisplayPort.


ASUS GeForce GTX 680 DirectCU II OC

The ASUS GeForce GTX 680 DirectCU II OC graphics card has it all: big performance, low power consumption, a very efficient and quiet cooler, and an extravagant price.

August 20, 2012 by Lawrence Lee

Product: ASUS GeForce GTX 680 DirectCU II OC PCI-E Graphics Card
Manufacturer: ASUS
Sample Supplier: AVADirect Computers
Street Price: US$530

In January, AMD debuted their HD 7000 series GPUs, sporting the new GCN graphics core manufactured using a 28 nm process. The die-shrink combined with additional improvements resulted in sizable performance boosts without heavy increases in power draw. Their flagship Radeon HD 7970 also took the single-GPU performance crown back from the GeForce GTX 580. Nvidia’s GeForce 500 series (which is actually only a slightly modified version of the 400 series) remained viable performance-wise but as the aging 40 nm Fermi core was never terribly power efficient to begin with, the difference was now even more noticeable.

The GeForce 600 series with its 28 nm Kepler core (found in mid-high level models only) seems to address this. The high-end GTX 680 and 670 have official TDPs about 50W lower than their 500 series analogs. The new generation also provides an updated video decoder (PureVideo HD 5) capable of rendering 4K resolutions and support for four independent displays; hardcore gamers can play across three monitors using SurroundView with an extra screen keeping track of Windows desktop applications. Also, like Intel and AMD’s Turbo Boost and Turbo Core CPU overclocking technologies, the Kepler core has GPU Boost, a feature that adjusts clock speeds dynamically based on the current power draw.


The ASUS GeForce GTX 680 DirectCU II OC box.


Package contents: card, software, setup guide, SLI bridge, 2 x 6-pin to 8-pin adapter.

Our first Kepler sample is made by ASUS: the GTX 680 DirectCU II OC. “DirectCU” is a name ASUS has used many times to brand a custom down-blowing, shrouded VGA cooler. The second iteration is a behemoth with multiple heatpipes and two 10 cm fans, and extends past the 680’s already fairly long 26.6 cm (10.5 inch) circuit board. Like many of their motherboards, the card also has ASUS’ DIGI+VRM technology, advanced voltage regulation circuitry that claims to help deliver stable, efficient power to the GPU.

The GTX 680 DirectCU II OC is currently selling for about US$530 while more barebones GTX 680s start at US$500. You could say this card is factory overclocked as well, as its GPU Boost clock is set to 1084 MHz, about 30 MHz more than the basic model, but that figure isn’t really set in stone. If the card is running cool enough, it can and will exceed that speed. This is a card for deep-pocketed users seeking extremely smooth gameplay on monstrous 2560×1440 displays or multi-monitor configurations.

ASUS GeForce GTX 680 DirectCU II OC (GTX680-DC2O-2GD5): Specifications
(from the product web page)

Graphics Engine: NVIDIA GeForce GTX 680
Bus Standard: PCI Express 3.0
Video Memory: GDDR5 2GB
Engine Clock: GPU Boost Clock: 1084 MHz / GPU Base Clock: 1019 MHz
CUDA Cores: 1536
Memory Clock: 6008 MHz (GDDR5)
Memory Interface: 256-bit
Interface: 1 x DVI-I, 1 x DVI-D, 1 x HDMI, 1 x DisplayPort (regular DP)
Accessories: 1 x power cable, 1 x extended SLI cable
Software: ASUS Utilities & Driver
ASUS Features: DirectCU Series, Super Alloy Power
Dimensions: 11.8" x 5.1" x 2.3"

Notes

*To have the best cooling performance, ASUS GTX680-DC2O-2GD5 extends the fansink to 2.5 slots. Please double check your chassis and motherboard dimensions prior to purchase to make sure it fits in your system!

*Note that the actual boost clock will vary depending on actual system conditions. For more information, please visit http://www.geforce.com/

PHYSICAL DETAILS

The ASUS GTX 680 DirectCU II OC has a hefty dual fan heatsink that’s so thick, the card takes up three expansion slots. The cooler alone accounts for 710 grams of the card’s total weight of 1.2 kg.


According to our measurements, the PCB is approximately 26.6 cm (10.5 inches) long but the cooler extends the total length to 29.7 cm (11.7 inches). It hangs over the side as well, making the card 9 mm wider than normal.


The heatsink is equipped with a pair of 10 cm fans with nine blades apiece.


The shroud covering the heatsink opens up near the back so air can be exhausted out of the case through vents on the expansion bracket. There are four video outputs, one DisplayPort, one HDMI, and two DVI connectors.


A series of pin-outs on the right-side edge of the PCB allow true enthusiasts to wire the card to select ASUS RoG (Republic of Gamers) motherboards, enabling them to adjust the card’s various voltages.


The GTX 680 requires both an 8-pin and 6-pin power adapter to function properly. They’re placed on the side for convenience.

INSTALLATION & HEATSINK

Installation of a PCI-E graphics card is a very easy procedure but it’s nice to see exactly how much space the card takes up to determine case compatibility or interference with motherboard components. We also make a point to look under the hood and examine the heatsink itself.


The card extends about an inch past the edge of the motherboard tray in our modified Fractal Design Define R3 case. The hard drive cage was physically removed to make room for long cards but by the looks of it, the GTX 680 DirectCU II would have fit… if just barely. Also notice how much the card dips on the right side due to its weight.


A nice extra feature is the LED indicators for the external power ports that shine red when left unconnected. It’s a useful reminder for even the most experienced PC builder.


The trace side of the card is covered by an enormous backplate with multiple ventilation holes. It’s affixed at multiple points but only a few screws need to be removed to take off the main heatsink. The mounting holes around the GPU core are spaced 58 mm apart in a square formation.


The heatsink has five 8 mm thick direct-touch heatpipes, an excessive number considering the GPU core only makes contact with three. The aluminum fins are approximately 0.38 mm thick with 1.48 mm of separation.


The organization of the PCB components is very tidy, with all the capacitors and inductors neatly arranged on the right side along with a long VRM heatsink. The memory chips are very close to the mounting holes though, which might cause interference between third party coolers and their included memory heatsinks.

TEST METHODOLOGY

Our test procedure is an in-system test, designed to:

1. Determine whether the cooler is adequate for use in a low-noise system.
By adequately cooled, we mean cooled well enough that no misbehavior
related to thermal overload is exhibited. Thermal misbehavior in a graphics
card can show up in a variety of ways, including:

  • Sudden system shutdown, reboot without warning, or loss of display signal
  • Jaggies and other visual artifacts on the screen.
  • Motion slowing and/or screen freezing.

Any of these misbehaviors are annoying at best and dangerous at worst —
dangerous to the health and lifespan of the graphics card, and sometimes to
the system OS.

2. Estimate the card’s power consumption. This is a good indicator of how efficient
the card is, and it affects how hot the GPU runs. The lower the better.

3. Determine how well the card decodes high definition video.

Test Platform

  • Intel Core i3-2100 processor, Sandy Bridge core, dual core 3.1 GHz, integrated HD 2000 graphics, TDP of 65W.
  • Thermalright HR-02 Macho heatsink, an early design prototype with a Scythe Slip Stream 500 RPM 120mm fan.
  • Gigabyte Z77X-UD3H motherboard, Z77 chipset, ATX, PCI-E 3.0.
  • Kingston HyperX Genesis memory, 2x4GB, DDR3-1600.
  • Corsair Force GT solid state drive, 120GB, 2.5 inch, SATA 6 Gbps, refurbished.
  • Kingwin Lazer Platinum power supply, ATX v2.2, 80 Plus Platinum, 1000W total output, 83A on +12V rail.
  • Fractal Design Define R3 case, ATX, modified.
  • Antec TrueQuiet 120 120mm fans, two connected to controllable motherboard headers, 1000 RPM, 3-pin.
  • Microsoft Windows 7 Ultimate operating system, 64-bit.
  • AMD Catalyst graphics driver for AMD/ATI based graphics cards, version 12.8.
  • Nvidia GeForce graphics driver for Nvidia based graphics cards, version 301.42.


GPU-Z screenshot.

Measurement and Analysis Tools

  • Prime95 stability test to stress the CPU.
  • FurMark stability test to stress the GPU.
  • GPU-Z to monitor GPU temperatures and fan speeds.
  • Cyberlink PowerDVD to play H.264/VC-1 video.
  • Mozilla Firefox with Adobe Flash Player to play Flash video.
  • MSI Afterburner to adjust GPU fan speeds.
  • Extech AC Power Analyzer 380803 AC power meter, used to measure the power consumption of the system.
  • A custom-built variable fan speed controller to power the system fan.
  • PC-based spectrum analyzer: SpectraPlus with ACO Pacific mic and M-Audio digital audio interfaces.
  • Anechoic chamber with ambient level of 11 dBA or lower.

3D Performance Benchmarks (for low-end/budget graphics processors only)

  • 3DMark Vantage as a 3D DirectX 10 benchmark.
  • 3DMark11 as a 3D DirectX 11 benchmark.
  • Lost Planet 2 standalone benchmark, Test “A”.
  • Tom Clancy’s H.A.W.X. 2 standalone benchmark.
  • Alien vs. Predator standalone benchmark.

Estimating DC Power

The following power efficiency figures were obtained for the Kingwin LZP-1000 used in our test system:

Kingwin LZP-1000 Test Results

DC Output (W)    65.5     90.7     149.0    199.6    251.2    300.3    400.9
AC Input (W)     81       105      166      211      265      322      426
Efficiency       80.8%    86.4%    89.8%    92.8%    92.9%    93.5%    94.1%

This data is enough to give us a very good estimate of DC demand in our
test system. We extrapolate the DC power output from the measured AC power
input based on this data. We won’t go through the math; it’s easy enough
to figure out for yourself if you really want to.
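The math the text skips can be sketched as a linear interpolation over the (AC, DC) pairs in the efficiency table; the interpolation itself is our assumption about how the extrapolation is done, not SPCR's published method:

```python
import bisect

# (AC input, DC output) pairs measured for the Kingwin LZP-1000.
AC = [81, 105, 166, 211, 265, 322, 426]
DC = [65.5, 90.7, 149.0, 199.6, 251.2, 300.3, 400.9]

def dc_from_ac(ac_watts):
    """Estimate DC output for a measured AC draw by linear interpolation."""
    i = min(max(bisect.bisect_left(AC, ac_watts), 1), len(AC) - 1)
    frac = (ac_watts - AC[i - 1]) / (AC[i] - AC[i - 1])
    return DC[i - 1] + frac * (DC[i] - DC[i - 1])

# 281W AC measured at full load with the GTX 680 installed:
print(round(dc_from_ac(281)))  # 265 W DC
```

The result matches the 265W DC full-load figure reported in the measurement tables later in the review.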

Ambient Noise Level

Our test system’s CPU fan is a low speed Scythe that is set to full speed at all times. The two Antec TrueQuiet 120 case fans are connected to the motherboard and are controlled using SpeedFan. Three standard speed settings have been established for testing.

VGA Test System: Anechoic chamber measurements

Setting        System SPL@1m    System Fan Speed
High (loud)    26 dBA           1130 RPM
Med (quiet)    18 dBA           820 RPM
Low (silent)   12~13 dBA        580 RPM

When testing video cards and coolers with active cooling, the low setting will be used. For passive cards and heatsinks, all three settings will be tested to determine the effect of system airflow on cooling performance.

Video Test Suite

1080p | 24fps | ~10 Mbps
H.264/MOV: Rush Hour 3 Trailer 1 is a 1080p H.264 encoded clip inside an Apple Quicktime container.

1080p | 24fps | ~22 Mbps
H.264/MKV: A custom 1080p H.264 encoded clip inside a Matroska container.

720p | 24fps | ~1.2 Mbps
Flash 720p: The Dark Knight Rises Official Trailer #3, a 720p YouTube HD trailer.

1080p | 24fps | ~2.3 Mbps
Flash 1080p: The Dark Knight Rises Official Trailer #3, the same YouTube HD trailer in 1080p.

 

Testing Procedures

Our first test involves monitoring the system power consumption as well as CPU and GPU temperatures during different states: idle, under load with Prime95 to stress the processor, and Prime95 plus FurMark to stress both the CPU and GPU simultaneously. This last state is an extremely stressful, worst case scenario test which generates more heat and higher power consumption than can be produced by a modern video game. If the card can survive this torture in our low airflow system, it should be able to function normally in the vast majority of PCs. If we deem the card’s fan control to be overly aggressive, we can adjust it at our discretion.

Our second test procedure is to run the system through a video test suite featuring a variety of high definition clips played with PowerDVD and Mozilla Firefox (for Flash video). During playback, a CPU usage graph is created by the Windows Task Manager for analysis to determine the average CPU usage. High CPU usage is indicative of poor video decoding ability. If the video (and/or audio) skips or freezes, we conclude the GPU (in conjunction with the processor) is inadequate to decompress the clip properly.

Lastly, for low-end and budget graphics cards, we also run a few gaming benchmarks to get a general idea of the GPU’s 3D performance. We don’t feel this is necessary for high-end models as there are many websites that do this in painstaking detail.

TEST RESULTS

Baseline with Integrated Graphics: First, here are the baseline results of the system with its integrated graphics, without a discrete video card. We also need the power consumption reading during Prime95 to estimate the actual power draw of the discrete card later.

Power Consumption Measurements: VGA Test System (IGP)

Measurement        Idle       CPU Load    CPU + GPU Load
Sys. Power (AC)    36W        74W         87W
Sys. Power (DC)    unknown    61W         72W

System fan speeds: low (580 RPM)
Ambient noise level: 10~11 dBA
System noise level: 12~13 dBA

System with ASUS GTX 680 DirectCU II OC:

System Measurements: VGA Test System (ASUS GTX 680 DirectCU II OC)

State              Idle             CPU Load    CPU + GPU Load   CPU + GPU Load
                                                (fan manual)     (fan auto)
Temp
  CPU              27°C             42°C        62°C             61°C
  PCH              43°C             44°C        61°C             57°C
  GPU              34°C             34°C        85°C             75°C
  GPU VRM          49°C             50°C        112°C            96°C
GPU Fan Speed      840 RPM (auto)   —           1200 RPM         1860 RPM
SPL @1m            15 dBA           —           17~18 dBA        27~28 dBA
Sys. Power (AC)    58W              99W         281W             280W
Sys. Power (DC)    unknown          84W         265W             264W

System fan speeds: low (580 RPM)
Ambient noise level: 10~11 dBA
Ambient system noise level: 12~13 dBA
Ambient temperature: 24°C

Note: the GPU Boost feature clocked the core at 1111 MHz during both load states, eclipsing the officially specified 1084 MHz figure.

The GTX 680 DirectCU II OC was very quiet under idle conditions. The GPU fan spun at only 840 RPM and our test system measured 15 dBA@1m, only 2~3 dB higher than what the system produces running without a discrete video card. On load we found that the fan speed behavior was overly aggressive, pumping up the fan to 1860 RPM to maintain a GPU temperature of 75°C and generating a noise level of 27~28 dBA@1m. Taking the fan off automatic control, we found that even 1200 RPM was sufficient and a great deal quieter.

System power consumption was an impressively low 58W AC when sitting idle, while load pushed it to 281W. The latter figure is still very reasonable considering the GTX 680 is a top tier performer with a US$500+ price tag. It does produce a considerable amount of heat however, as stressing the GPU heated up the CPU by an additional 20°C, an increase 5°C greater than putting the CPU on load by itself. The GPU cooler is very good at keeping the video card cool, but as it lacks a side-blowing fan, a good amount of exhaust is required to keep heat from lingering in the upper portion of the case.

The quality of noise generated by the GTX 680 DirectCU II OC was quite good. Sitting idle, the test system had a very gentle hum that was difficult to pick out compared to the same machine running without a dedicated graphics card. On load with the fan spinning at 1860 RPM, there were some tonal elements that we could detect up close, but at one meter’s distance, the turbulent noise from the fans and the side panel of the case masked it fairly well.

POWER

The power consumption of an add-on video card can be estimated by comparing
the total system power draw with and without the card installed. Our results
were derived thus:

1. Power consumption of the graphics card at idle – When Prime95 is
run on a system, the video card is not stressed at all and stays idle.
This is true whether the video card is integrated or an add-on PCIe 16X device.
Hence, when the power consumption of the base system under Prime95 is subtracted
from the power consumption of the same test with the graphics card installed,
we obtain the increase in idle power of the add-on card over the
integrated graphics chip.

2. Power consumption of the graphics card under load – The power draw
of the system is measured with the add-on video card, with Prime95 and FurMark
running simultaneously. Then the power of the baseline system (with integrated
graphics) running just Prime95 is subtracted. The difference is the load power
of the add-on card. Any load on the CPU from FurMark
should not skew the results, since the CPU was running at full load in both
systems.

Both results are scaled by the efficiency of the power supply (tested
here) to obtain a final estimate of the DC power consumption.

Note: The actual power
of the add-on card cannot be derived using this method because the integrated graphics may draw
some power even when not in use. However, the relative difference between the cards should be accurate.
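Plugging the DC figures from the measurement tables into this method gives the numbers quoted in the review:

```python
# DC power draws taken from the measurement tables earlier in the review (watts).
baseline_cpu_load = 61   # IGP system, Prime95 only
card_cpu_load     = 84   # GTX 680 installed, Prime95 only (GPU idle)
card_full_load    = 265  # GTX 680 installed, Prime95 + FurMark

idle_power = card_cpu_load - baseline_cpu_load   # card's estimated idle draw
load_power = card_full_load - baseline_cpu_load  # card's estimated load draw

print(idle_power)  # 23 W
print(load_power)  # 204 W ("just over 200W")
```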

With an estimated idle power of 23W, the GTX 680 DirectCU II OC is one of the most efficient performance cards we’ve tested, beaten only by the Gainward GTX 560 Ti. The GTX 680 is also very frugal on load, using just over 200W, a full 30W less than the much slower GTX 560 Ti and an overclocked Radeon HD 5870.

NOISE & COOLING COMPARISON

System Measurements: VGA Test System (Comparison)

Card                                        Idle                    Load
                                            SPL @1m    GPU Temp     SPL @1m    GPU Temp
HIS HD 5870 Turbo + GELID Icy Vision
(5V / 1260 RPM)                             17~18 dBA  37°C         17~18 dBA  89°C
ASUS GTX 680 DirectCU II OC
(manual, 1200 RPM)                          N/A        N/A          17~18 dBA  85°C
AMD HD 6870 + GELID Icy Vision
(5V / 1260 RPM)                             17~18 dBA  40°C         17~18 dBA  80°C
ASUS GTX 680 DirectCU II OC (auto)          15 dBA     34°C         27~28 dBA  75°C
Gainward GTX 560 Ti Phantom                 18 dBA     39°C         37 dBA     88°C

Ambient temperature: 24°C
Ambient noise level: 11 dBA
System noise level: 13 dBA

The GTX 680 DirectCU II OC’s cooler is an admirable performer, beating out the HD 5870 paired with a GELID Icy Vision, one of the better aftermarket heatsinks available. To be fair, the 5870 uses about 30W more, but it’s still a fairly impressive result for a stock cooling unit to even be in the same ballpark as a ~US$50 heatsink.

Video Playback

The GTX 680 features the latest version of Nvidia’s PureVideo HD video processing technology, though its main benefit over older generations is support for 4K resolution video. Its ability to decode more mundane 1080p H.264 and Flash video is similar to that of the previous GTX 500 series. Both AMD and Nvidia’s solutions render these types of media with no issues and eat up very few CPU cycles.

Power consumption during video playback greatly favored the Nvidia cards, with the HD 5870 and 6870 using about 30W more than their idle numbers. The GTX 680, on the other hand, used +6~8W, while the GTX 560 Ti stayed in the +10~12W region.

Clock Speed Comparison (Core/Memory in MHz)

Card                           Idle       Video Decode    Load
HIS HD 5870 Turbo              157/300    400/900         875/1225
AMD HD 6870                    300/300    300/1050        900/1050
ASUS GTX 680 DirectCU II OC    324/162    324/162         1111/1502
Gainward GTX 560 Ti Phantom    51/68      405/162         822/1000

After some investigation, we found the probable culprit for this discrepancy: the HD 5870 and 6870’s clock speeds didn’t decrease as much during playback as the GTX 680 and GTX 560 Ti’s. The AMD cards’ memory frequencies didn’t underclock at all, while Nvidia’s cards downclocked to idle or near-idle levels.

Software

The GTX 680 DirectCU II OC ships with ASUS’ GPU Tweak utility which offers fan and frequency adjustments as well as monitoring functionality. It looks suspiciously like MSI’s popular Afterburner application, though skinned with ASUS’ red and black Republic of Gamers scheme.

GPU Tweak also includes their own version of GPU-Z with the same color motif.

MP3 Sound Recordings

These recordings were made with a high
resolution, lab quality, digital recording system
inside SPCR’s
own 11 dBA ambient anechoic chamber
, then converted to LAME 128kbps
encoded MP3s. We’ve listened long and hard to ensure there is no audible degradation
from the original WAV files to these MP3s. They represent a quick snapshot of
what we heard during the review.

These recordings are intended to give you an idea of how the product sounds
in actual use — one meter is a reasonable typical distance between a computer
or computer component and your ear. The recording contains stretches of ambient
noise that you can use to judge the relative loudness of the subject. Be aware
that very quiet subjects may not be audible — if we couldn’t hear it from
one meter, chances are we couldn’t record it either!

The recording starts with 5~10 seconds of room ambiance, followed by 5~10 seconds
of the VGA test system without a video card installed, and then the actual product’s
noise at various levels. For the most realistic results, set the volume
so that the starting ambient level is just barely audible, then don’t change
the volume setting again.

  • VGA test system with ASUS GTX 680 DirectCU II OC at one meter
    — idle (840 RPM, 15 dBA@1m)
    — load (1200 RPM, 17~18 dBA@1m)
    — load (1860 RPM, 27~28 dBA@1m)

FINAL THOUGHTS

According to credible gaming-oriented review sites like PC Perspective, HardwareCanucks and AnandTech, the GTX 680 is one of the fastest single GPU cards on the market, falling somewhere between AMD’s Radeon 7970 and the overclocked 7970 GHz Edition. Both the 7970 and 680 are overpowered for gaming with a 1920×1080 or 1920×1200 display. Their benefits don’t become apparent unless you use an even higher resolution or multi-monitor setup.

Given its level of performance we’re stunned by the energy efficiency of the GTX 680. Sitting idle, it used less power than the older Radeon HD 6870 and HD 5870 which were once praised for their idle frugality. On load, the GTX 680 used 203W, a ~30W advantage over the HD 5870 and the GTX 560 Ti, a much slower card that can be purchased for less than half the cost. The 680 was also the most efficient of these four cards at playing high definition video.

ASUS has created a very cool, quiet, and efficient card that’s capable of some big performance numbers. The DirectCU II cooler with its dual 100 mm fans and thick, direct-touch heatpipes is a ringer, close in proficiency to the GELID Icy Vision, a US$50 third party heatsink. The automated fan control is a little aggressive but can be easily tweaked using software like MSI Afterburner or ASUS’ own GPU Tweak application. With a fan speed reduction, one can attain the rare combination of superb gaming performance and very quiet operation. The heatsink is overkill considering the relatively low power requirements, but this is a blessing even if you’re unconcerned by noise: the extra thermal headroom allows one to take better advantage of the GTX 680’s GPU Boost overclocking capability.

The only thing we don’t like about the ASUS GTX 680 DirectCU II OC is the price tag, currently US$530. If you’re looking to game on a very high resolution monitor or across multiple displays, it’s going to cost you, but even with such a high budget, value should still be taken into consideration. It stacks up well against more basic GTX 680s that start at US$500, but if you look across the aisle at AMD you might turn green with envy. The Radeon HD 7970 recently received a price drop, making its various iterations $60~$90 cheaper than the GTX 680 DirectCU II OC. While it isn’t nearly as energy efficient, the price discrepancy is substantial. A GTX 680 might be able to make up the difference in electricity savings, but to do so in a relatively short time frame would require a heavy use case in a location with high utility rates.

Our thanks to AVADirect Computers for the GeForce GTX 680 DirectCU II video card sample.

ASUS GTX 680 DirectCU II OC wins the SPCR Editor’s Choice

* * *

Articles of Related Interest
Sapphire HD 7750 Ultimate Edition

ASUS DirectCU & AMD Radeon HD 6850 Graphics Cards

AMD Radeon HD 6570 & 6670 Budget GPUs

Arctic Cooling Accelero Xtreme Plus GPU Cooler

AMD Radeon HD 6870 Graphics Card

GELID Icy Vision Dual Fan VGA Cooler

* * *

Discuss this article in the
SPCR forums.

ASUS GeForce GTX 680 DirectCU II 2 GB Review

On 22nd March 2012, NVIDIA introduced its latest 28nm ‘Kepler’ architecture with the launch of its flagship GeForce GTX 680 graphics card. Kepler makes use of a refined 28nm process which, compared to the 40nm Fermi, not only provides better performance but is also much more power efficient.

NVIDIA’s Kepler is built on the same foundation first laid by the 40nm Fermi in 2010. Fermi at the time of its launch introduced an entirely new parallel geometry pipeline that was optimized for tessellation and displacement mapping. Kepler retains these features and delivers even better performance when rendering tessellation in the latest DirectX 11 enabled titles, all of this in a highly efficient package.

One of the reasons many gamers didn’t accept Fermi was its high power consumption and heat output, even though it provided rich gaming performance compared to its competitors at the time. With the Kepler architecture, the GeForce GTX 680 is not only the fastest performing GPU of the GeForce 600 series but also the most power efficient GPU NVIDIA has ever built.

The Kepler architecture looks a lot like the Fermi architecture, being composed of a number of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs) and memory controllers. The GeForce GTX 680 makes use of the GK104 core (Kepler).

The GK104 core consists of four Graphics Processing Clusters (GPCs), where each GPC unit is comprised of two SMX units and a single memory controller. This puts the total number of SMX units on the GK104 Kepler at eight, alongside four memory controllers. Each memory controller has 128 KB of L2 cache and eight ROPs, which totals 512 KB of L2 cache and 32 ROPs (Raster Operation Units) on the GK104.

On the Kepler GK104 die, you can see a PCI-e Gen 3.0 host interface, a 256-bit wide GDDR5 memory interface and NVIDIA’s latest GigaThread Engine situated underneath those two. As for the memory, it’s quite impressive that NVIDIA achieved a 6 GHz frequency with the first revision of the Kepler architecture. As you may recall, NVIDIA had quite a problem with its Fermi memory controllers, since those chips ended up with lower memory speeds than originally intended. With Kepler, the GDDR5 hits 6 Gbps along a 256-bit interface.


The NVIDIA Fermi SM block featured 32 cores within the control logic, resulting in a total of 512 cores on the GF110 core. NVIDIA’s Kepler uses next-generation streaming multiprocessors known as ‘SMX’ which have 192 cores each and deliver up to 2 times the performance per watt of Fermi. Since there are eight SMX units on the GK104 core, this leads to a total of 1536 cores on the GK104 Kepler die, three times as many as on Fermi.

Each SMX has its own dedicated and shared resources, with the new PolyMorph 2.0 engine handling geometry operations such as vertex fetch, tessellation, viewport transform, attribute setup and stream output, hence delivering twice the primitive and tessellation performance of Fermi’s SM units. There is one Raster Engine per GPC, four in total, handling edge setup, rasterization and Z-cull, with pixel output going through the 32 Raster Operation processors (ROPs).

Another improvement over Fermi is the implementation of ‘Kepler Bindless Textures’, which increases the number of textures a shader can reference to over a million, whereas this was restricted to 128 on Fermi. The new feature allows faster rendering of textures and provides richer texture detail in a scene. In total there are 128 texture mapping units (TMUs) onboard the Kepler GK104 die.

All in all, the GK104 Kepler die onboard the GeForce GTX 680 features 1536 Cuda Cores, 192 per SMX, 384 per GPC, 128 TMUs, 32 ROPs and 256-bit GDDR5 memory.
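These totals can be cross-checked with a little arithmetic. The sketch below is illustrative only; it assumes the usual shader FLOPS formula (one fused multiply-add, i.e. 2 FLOPs, per core per clock) and one texel per TMU per clock:

```python
# GK104 layout: 4 GPCs x 2 SMX per GPC x 192 cores per SMX (assumed formulas below)
gpcs, smx_per_gpc, cores_per_smx = 4, 2, 192
cores = gpcs * smx_per_gpc * cores_per_smx  # 1536 CUDA cores

base_clock_hz = 1006e6
tflops = cores * 2 * base_clock_hz / 1e12   # FMA = 2 FLOPs per core per clock
gtexels = 128 * base_clock_hz / 1e9         # 128 TMUs, one texel each per clock

print(cores, round(tflops, 2), round(gtexels, 1))  # 1536 3.09 128.8
```

The results line up with the card’s quoted ~3.09 TFLOPS of compute and ~129 GTexels/s of texture fill rate at the 1006 MHz base clock.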

NVIDIA’s Kepler architecture is not only faster but also much more power efficient than any of NVIDIA’s previous GPU architectures. The GK104 chip is the fastest performing of the GeForce 600 series, yet its rated power consumption is well below that of the Fermi flagship.

Past GPUs from NVIDIA and AMD with TDPs over 200 W required 8-pin plus 6-pin power connectors. The GeForce GTX 680 uses two 6-pin connectors, for a total rated TDP of 195W, compared to 250W on its direct competitor, the Radeon HD 7970.

Automatic clock Boost for GPUs

In addition to bringing performance and efficiency to its GeForce products, NVIDIA has also brought its latest GPU Boost dynamic overclocking technology to the GeForce Kepler family.

GPU Boost is similar to the Turbo Boost/Turbo Core technologies we see on Intel and AMD processors. It is dynamically controlled in the background while applications run. The GPU Boost algorithm takes factors such as power consumption and GPU temperature into account before applying a boost to the GPU frequency and voltage. Once again, we would like to mention that no software is required to enable GPU Boost; it is a dynamic feature which runs in the background without user intervention.

When the GPU boosts, it pushes the frequencies upward based on the available TDP headroom. For instance, when a user is running an application and the card has not yet reached its TDP limit, the GPU will boost to give added performance by converting the available power into clock speed. The lower the current power draw, the more boost you get out of your GPU: a GPU running at 180W will see lower boost speeds than one running at 160W. At its maximum 195W limit, the GeForce GTX 680 runs at its pre-determined base clock of 1006 MHz, since no more headroom is available for GPU Boost.
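The behaviour described above can be sketched roughly as follows. This is a toy model with made-up bin sizes and per-bin power costs; the real algorithm lives in NVIDIA’s firmware and driver:

```python
# Toy model of GPU Boost: raise the clock in bins while TDP headroom remains.
# Bin size, per-bin power cost, and temperature cutoff are all illustrative.
BASE_CLOCK_MHZ = 1006
BOOST_STEP_MHZ = 13       # assumed boost bin size
TDP_LIMIT_W = 195
WATTS_PER_BIN = 5         # assumed power cost of one boost bin

def boost_clock(current_power_w: float, gpu_temp_c: float) -> int:
    """Raise the clock while power and temperature headroom remain."""
    clock = BASE_CLOCK_MHZ
    headroom_w = TDP_LIMIT_W - current_power_w
    while headroom_w >= WATTS_PER_BIN and gpu_temp_c < 80:
        clock += BOOST_STEP_MHZ
        headroom_w -= WATTS_PER_BIN
    return clock

print(boost_clock(195, 70))  # at the TDP limit: stays at the 1006 MHz base clock
print(boost_clock(175, 70))  # 20 W of headroom: boosts four bins to 1058 MHz
```

Note how a card already at its 195W limit stays at base clock, while one with headroom climbs toward the rated 1058 MHz boost clock, mirroring the description above.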

The new GPU Boost gives overclockers an advantage, since it also works with overclocked settings applied. When users overclock the GPU’s core, the maximum GPU Boost frequency is raised in turn, increasing the boost frequency limit. However, the GPU still applies boost according to the available TDP, as mentioned above. We have also overclocked our review sample, as you can see later in the article.

NVIDIA’s Kepler architecture on the GeForce 600 series also brings new technologies for games, such as new anti-aliasing algorithms, Adaptive V-Sync, 3D Vision Surround (supporting 4 displays), NVENC, and richer PhysX processing.

FXAA and TXAA

NVIDIA has developed two new anti-aliasing algorithms for its GeForce Kepler family. FXAA uses the GPU’s CUDA cores to reduce visible aliasing in gaming titles whilst applying other post-processing effects such as motion blur and bloom. FXAA was first introduced in Age of Conan last year; since then it can be applied in various titles using NVIDIA’s R300 drivers.

 


The FXAA algorithm reduces visible aliasing without compromising performance. A demonstration by NVIDIA shows that running Epic’s next-generation Samaritan demo at GDC 2011 required three GTX 580s in SLI; a year later, the demo ran at the same image quality on a single GTX 680 utilizing FXAA.

 


TXAA, similarly, is another anti-aliasing algorithm developed by NVIDIA which harnesses the GTX 680’s texture performance and is available in two modes. TXAA 1 offers better image quality than 8x MSAA with the performance hit of 2x MSAA, while TXAA 2 offers even better image quality than TXAA 1 at the performance hit of 4x MSAA. However, it is up to developers which games use the new TXAA algorithm. Expect the feature to be included in NVIDIA-optimized games from studios like Crytek and Epic, such as Crysis 3, Borderlands 2, The Secret World, MechWarrior Online, etc.

Adaptive V-Sync

NVIDIA has also developed a new V-Sync mode known as ‘Adaptive V-Sync’, which can dynamically turn V-Sync off if the frame rate falls below the monitor’s refresh rate. Typically, V-Sync is applied when users want to cap a game at 60 FPS.

Gamers with high-end video cards capable of producing more than 60 FPS apply it to get rid of the screen tearing which happens when the frame rate exceeds the monitor’s refresh rate. However, problems also occur when frame rates drop below the V-Sync cap. For example, in a game capped at 60 FPS with V-Sync, when frames start to dip below that limit, V-Sync falls back to the next cap, 30 FPS and then 20 FPS. This transition to a lower FPS cap causes visible stuttering.

 


What Adaptive V-Sync does is adjust dynamically when such a transition takes place. The new feature turns off V-Sync when frames dip below 60 FPS so that the game keeps running in a playable state without stuttering. When the frame rate recovers, the 60 FPS V-Sync cap is applied again. Adaptive V-Sync adjusts dynamically for displays with both 60 Hz and 120 Hz refresh rates.
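The per-frame decision Adaptive V-Sync makes can be sketched like this (illustrative only; the real logic lives inside the display driver):

```python
# Toy model of Adaptive V-Sync: sync to the display only while the GPU
# can sustain the refresh rate; otherwise run uncapped to avoid stutter.
REFRESH_HZ = 60

def vsync_enabled(current_fps: float) -> bool:
    """Enable V-Sync only when the frame rate meets the refresh rate."""
    return current_fps >= REFRESH_HZ

for fps in (75, 61, 59, 42):
    state = "V-Sync on, capped at 60" if vsync_enabled(fps) else "V-Sync off, uncapped"
    print(fps, "FPS ->", state)
```

Above 60 FPS the frame rate is capped to prevent tearing; below it, V-Sync is dropped rather than letting the rate collapse to the 30 FPS bucket.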

Single GPU 3D Vision Surround

With the GeForce GTX 680, gamers can now run up to three displays simultaneously in 3D Vision Surround, plus a fourth display for email and other applications, all through a single GPU. The GTX 680 comes with native support for NVIDIA’s 3D Vision Surround and supports HDMI 1.4a, 4K monitors (3840 x 2160) and multi-stream audio.

Improved PhysX and NVENC:

With the GeForce GTX 680 and its latest SMX unit, gamers can now run titles that incorporate PhysX effects at much higher frame rates than on the GeForce GTX 580. Games such as the recently released Borderlands 2 make use of the new PhysX effects on cloth and particles, enhancing the visual experience.

All GeForce Kepler GPUs come with NVIDIA’s latest hardware-based H.264 video encoder, known as NVENC, which provides a massive performance improvement compared to CPU encoding. The Kepler architecture does this job with much lower power consumption than GeForce Fermi. NVENC provides the following:

  • Can encode full HD (1080p) video up to 8x faster than real time. For example, in high performance mode, encoding a 16-minute 1080p, 30 fps video takes approximately 2 minutes.
  • Support for H.264 Base, Main, and High Profile Level 4.1 (same as Blu-ray standard)
  • Supports MVC (Multiview Video Coding) for stereoscopic video—an extension of H.264 which is used for Blu-ray 3D.
  • Up to 4096×4096 encode
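The 1080p example in the first bullet is easy to verify: at 8x real time, encode time is simply clip duration divided by the speedup.

```python
# Checking the quoted encode-time example: a 16-minute 1080p30 clip,
# encoded 8x faster than real time, should finish in about two minutes.
video_minutes = 16
speedup = 8
encode_minutes = video_minutes / speedup
print(encode_minutes)  # 2.0
```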

Finally, we get to the card itself. The ASUS GeForce GTX 680 DirectCU II video card is quite a mammoth, both in terms of size and performance.

The ASUS GeForce GTX 680 is built on the same GK104 chip, NVIDIA’s first 28nm chip, which we have been detailing over the last two pages in terms of tech and features. Now it’s time to learn what kind of specifications the ASUS GeForce GTX 680 holds.

The first thing to notice is that the ASUS GeForce GTX 680 is built on a non-reference PCB. Compared to the reference PCB, which features 4+2 VRM phases (GPU+memory), the ASUS GeForce GTX 680 gets a DIGI+ VRM with 10-phase Super Alloy Power technology. This provides greater stability and overclocking headroom. However, due to the higher number of VRM phases, the stacked 6-pin power connectors of the reference model are replaced with side-by-side 8-pin and 6-pin PCI-e connectors on the ASUS GeForce GTX 680. The looks and features of the card are detailed below, after the unboxing section.

The ASUS GeForce GTX 680 features 1536 cores, 32 ROPs, 128 TMUs and 3540 million transistors onboard the GK104. The core runs at a base clock of 1006 MHz and a GPU Boost clock of 1058 MHz at stock; these are the reference specs of the GTX 680, which ASUS’s model retains. In addition to the core, there is 2 GB of GDDR5 memory running at an effective frequency of 6 GHz (6008 MHz) along a 256-bit wide interface.

Specification Table

 

                     HD 6970   HD 7870   HD 7970   GTX 560 Ti   GTX 660   GTX 580   GTX 680
Stream Processors    1536      1280      2048      384          960       512       1536
Core Clock (MHz)     880       1000      925       950          980       772       1006
Boost Clock (MHz)    -         -         -         -            1033      -         1058
Memory Clock (MHz)   5500      4800      5500      4008         6008      4008      6008
VRAM Buffer (MB)     2048      2048      3072      1024         2048      1536      2048
Memory Interface     256-bit   256-bit   384-bit   256-bit      192-bit   384-bit   256-bit
Memory Type          GDDR5     GDDR5     GDDR5     GDDR5        GDDR5     GDDR5     GDDR5

 

Let’s have a look at the GPU, shall we?

The ASUS GeForce GTX 680 DirectCU II 2 GB ships in a large box with the ‘GeForce GTX 680’ and ‘DirectCU II’ labels on the front. Three red claw marks are embedded on the box, illustrating the design theme of the DirectCU II cooler. The bottom part lists features of the GeForce GTX 680 such as the DIGI+ VRM, VGA Hotwire, ASUS’s GPU Tweak and 2 GB of GDDR5 memory.

The back of the box provides detailed information about the DirectCU II cooler, which is said to be 20% cooler than the reference design, the DIGI+ VRM with Super Alloy Power, and real-time overvolting via VGA Hotwire. It also points out the input/output ports on the GeForce GTX 680 DirectCU II.

There’s a second cardboard box inside the packaging which holds the GeForce GTX 680 DirectCU II. This inner box has only a single ASUS logo etched at its center.

Opening the box reveals a foam cover which holds the GPU manual and driver disk. The GeForce GTX 680 DirectCU II is supplied with a manual, a setup disk, a flexible SLI cable and a PCI-e power adapter as accessories.

Beneath the foam cover is the card itself: ASUS’s GeForce GTX 680 DirectCU II. It is held in another foam package to protect it from structural damage and kept inside an anti-static bag.

A first look at the card is enough to determine the kind of power the ASUS GeForce GTX 680 DirectCU II holds.

From the front, we can see that the card uses ASUS’s famous DirectCU II cooler, which they have been using on their cards for a while now. The DirectCU II label is etched in the left corner, and the claw line (previously noted on the box) runs through the center of the card. Two 100mm noise-dampening fans push air over the central parts of the GPU.

The back of the GPU comes with a back-plate, a rather good move by ASUS. The back-plate is covered by a protective film which can easily be removed. It has several holes to dissipate heat from the back and selected key points kept bare for GPU hotwiring. Also note the ‘ASUS GeForce GTX 680’ logo imprinted on the plate, a nice addition to its looks.

From the side, we can see that the GeForce GTX 680 DirectCU II covers three slots, which could be a hassle for users with small cases or those planning SLI. In our setup, the DirectCU II fit easily without any issues. The PCI-Express connector is protected by a slot cover.

The other side carries yet another ‘ASUS’ logo and has a total of 7 cutouts to dissipate hot air out of the heatsink. A large metallic cover keeps the GPU and PCB in place, preventing the card from bending. We can also see the power connectors from this position: an 8-pin and a 6-pin PCI-e, to be exact. The reference model comes with two 6-pin connectors stacked on top of each other to save space.

The ASUS GeForce GTX 680 DirectCU II features four display outputs on the back panel: two dual-link DVI ports, HDMI and a full-size DisplayPort, allowing single-GPU 3D Vision Surround. The back panel also carries half-length and full-length exhaust vents to push heat out of the GPU shroud.

At the front side of the PCB, we can see two SLI gold fingers protected by slot covers. These allow for up to 4-way SLI functionality with the GeForce GTX 680 GPU.

We take a close look at the ASUS GeForce GTX 680 DirectCU II to see what’s kept under the GPU’s hood.

The DirectCU II cooler makes use of a large aluminum fin array through which five copper heatpipes run. These heatpipes make direct contact with the GPU’s core, dissipating heat into the heatsink block, from which it is blown away by the PWM-controlled fans.

Once again we see the power connectors, an 8-pin and a 6-pin, which feed the card. A green LED on the backside of the PCB indicates whether the power cables are properly inserted.

ASUS has added voltage measurement points on the GTX 680 DirectCU II, enabling overclockers to easily read GPU voltages in real time while overclocking.

In addition to the voltage measurement points, the backside of the PCB has an additional cut-out for VGA Hotwire, which can be used with ASUS ROG motherboards. This allows removing voltage limitations and is useful if you’re up for some serious overclocking sessions.


Processor: Intel Core i5-3570K @ 4.5 GHz
Motherboard: ASRock Z77 Extreme6
Power Supply: Xigmatek NRP-MC1002 1000 Watt
Storage: Seagate Barracuda 500GB 7200.12, Kingston HyperX 3K 90GB
Memory: 2 x 4096 MB G.Skill ARES 2133 MHz DDR3
Case: Cooler Master HAF 932
Video Cards: ASUS GTX 680, ASUS GTX 660, ASUS GTX 580, MSI GTX 560 Ti, MSI HD 7970, MSI HD 7870, MSI HD 6970
Video Drivers: NVIDIA ForceWare 310.90, AMD Catalyst 12.11
OS: Windows 7 Ultimate 64-bit
  1. All games were tested at 1920×1080 and 2560×1600 resolutions.
  2. Games with PhysX were benchmarked with the setting kept on Low or Off for a fair comparison.

Rebellion brings back the action to the Alien and Predator franchise with the 2010 launch of Aliens vs. Predator. The PC version was one of the first games to feature DirectX 11 and tessellation.

The second title in the Batman: Arkham series was also developed by Rocksteady Studios. Batman: Arkham City takes place in (isn’t it obvious from the name?) Arkham City, which is infested with all the super-villains and minions Batman has met throughout his journey.

The game was released on PC in November 2011 and runs on the latest Unreal Engine 3, featuring rich DirectX 11 detail, tessellation and PhysX support for NVIDIA cards.

The Battlefield series is a name familiar to any PC gamer. Developed by DICE and published by EA, Battlefield 3 brings back the action as one of the largest multiplayer launch titles of 2011. The game features both infantry and vehicular combat on some of the largest landscapes ever built in a game, with up to 64 players pitted against each other.

Powering the game is DICE’s own Frostbite 2.0 engine, the successor to the original Frostbite engine that powered Battlefield: Bad Company 2. Battlefield 3 makes use of a highly detailed DirectX 11 renderer, hardware-accelerated tessellation and new lighting effects which deliver some of the most amazing visuals ever seen in a game.

Borderlands 2, developed by Gearbox Software, is one of the hottest titles of 2012. The game runs on a highly modified version of Unreal Engine, making use of PhysX and rich DirectX 9 detail.

During our test, we set PhysX to Low for a fair comparison between the video cards.

The first thing to pop up on forums after Crysis’s launch was ‘Can my system run Crysis?’. Almost every forum in the world, gaming or tech related, was filled with the same question. This was not because of any bug but because of the technical and graphical achievement Crytek attained with Crysis.

In 2007, Crytek released Crysis, a sci-fi FPS set in a jungle. The first few scenes were enough to show the graphical leap the game took over its contemporaries, and it still remains one of the most gorgeous-looking titles to date. The game quickly became a benchmark for modern PC performance. Crysis is powered by CryEngine 2, which makes use of a highly modified DirectX 10 feature set with technologies such as ambient occlusion and parallax mapping detailing the rich jungle.

Crysis 2 is the second title released by Crytek under the Crysis franchise. The game is set in New York and follows the footsteps of Alcatraz, who has to take out the Ceph and CELL along his path.

The game makes use of CryEngine 3, but at launch it shipped with DirectX 9 only. DirectX 11 and high-resolution textures were later added through patches. We tested Crysis 2 with the latest DirectX 11 and high-res texture patches installed.

Deus Ex: Human Revolution, developed by Eidos Montreal, puts us back in Adam Jensen’s footsteps and is set 25 years before the events of the original Deus Ex. The game makes use of a modified version of the Crystal Engine, which features DirectX 11 capabilities.

F1 2012 brings back formula racing with an accurate representation of teams, drivers and cars. The game is built on Codemasters’ EGO 2.0 engine, which makes use of the DirectX 11 feature set.

Developed by Ubisoft Montreal, Far Cry 3 is one of 2012’s hit titles. It puts us in the role of Jason Brody, a tourist stranded with his friends on a tropical island filled with pirates and a madman known as ‘Vaas’.

The game runs on the Dunia Engine 2 and features DirectX 11 effects along with Havok physics. It is one of the most graphically intensive titles released.

Hitman: Absolution is the fifth entry in Agent 47’s Hitman franchise. Developed by IO Interactive and published by Square Enix, the game once again revolves around 47, betrayed by his former handler Diana, as he protects Victoria, a teenage girl. The mystery around the girl unravels as the game progresses.

The game makes use of a greatly improved Glacier 2 engine with DirectX 11 effects, tessellation, global illumination and depth of field. Hitman: Absolution is also one of the most demanding and visually impressive titles of 2012.

Metro 2033 is a post-apocalyptic FPS set under the streets of Moscow, Russia; the game takes place within the Metro system, which has become the last refuge for humans since the world above is now infested with various creatures and rogue human factions.

The game uses rich DirectX 11 tessellation and lighting effects along with high-quality textures. It is on par with the original Crysis as one of the most hardware-demanding titles ever released.

S.T.A.L.K.E.R.: Call of Pripyat is developed by the Ukrainian studio GSC Game World. The game takes place after the events of S.T.A.L.K.E.R.: Shadow of Chernobyl.

The game uses an updated X-Ray Engine 1.6, which features DirectX 11 effects such as tessellation and dynamic shadows.

The last game on our list is Sleeping Dogs. The game gives us the role of Wei Shen, a Chinese-American undercover cop who has to infiltrate the Sun On Yee triad organization. The game uses a powerful DX11 engine developed and tweaked by Square Enix that makes use of high-resolution textures.

Futuremark’s 3DMark 11 has been around for a while as a comprehensive benchmark application to evaluate overall GPU and PC performance. As the name suggests, 3DMark 11 uses the DirectX 11 API and every DX11 feature at hand, such as tessellation, depth of field, dynamic lighting, parallax occlusion mapping, etc.

  • For testing we ran 3DMark 11 in Extreme and Performance presets.

Based on the Unigine engine, Unigine Heaven was one of the first demos to feature DirectX 11 effects. We use the latest Unigine Heaven 3.0 to evaluate DirectX 11 performance with intensive features such as tessellation. The demo also supports DirectX 9, DirectX 10 and OpenGL.

It should be noted that the ASUS GeForce GTX 680 DirectCU II makes use of a three-slot non-reference cooler which provides much better cooling than the reference design.

According to ASUS, the DirectCU II delivers 20% better cooling and 14 dB less noise compared to the reference GeForce GTX 680.

We tested the card under different conditions: idle, load, and load while overclocked.

The temperatures are great considering the power those extra phases add. The ASUS GTX 680 runs at around the same temperatures as a reference 680, and even better while gaming.

Note – We tested load with Kombustor, which is known as a ‘power virus’ and can permanently damage hardware. Use the software at your own risk!

The overclocked settings we used were 1204 MHz on Core, 1257 MHz Boost and 1630 MHz memory clock. You can check out the overclocked results of the ASUS GeForce GTX 680 below.

The maximum stable overclock we could achieve with the ASUS GeForce GTX 680 is shown below in the GPU-z:
The overclock was achieved with default GPU voltage and fan settings. Please note that the ASUS GTX 680 DirectCU II makes use of a non-reference PCB design featuring a better VRM and more power phases, which allows better overclocking than reference GTX 680 models.

ASUS claims that users can reach up to 1.3 GHz with the proper configuration. We believe this to be possible, since the card offers various overclocking features such as VGA Hotwire and voltage measurement points.

After overclocking, we evaluated performance again in 3DMark 11; we also pushed our MSI Radeon HD 7970 to its limit for a better comparison between the two flagship cards.

We said it before and we will say it again: the ASUS GeForce GTX 680 DirectCU II is a beast, both in terms of performance and power.

The ASUS GeForce GTX 680 DirectCU II currently costs $499.99 US ($479.99 after rebate), which is $20 more than the reference GeForce GTX 680 and $70 more than the Radeon HD 7970. However, the card is worth its price: its three-slot cooler delivers tremendous cooling performance, and the non-reference PCB designed by ASUS allows for better overclocking stability. The cooler may become a burden for users with smaller cases or those looking to SLI multiple DCIIs. ASUS recently launched a dual-slot GeForce GTX 680 to address this issue, but it comes at added cost due to its 4 GB of memory.

Taking all this into account, the ASUS GTX 680 DirectCU II is the perfect choice for enthusiasts, gamers and overclockers. Performance-wise, the card trumps the Radeon HD 7970 in almost every benchmark (with a few exceptions). With continued driver support, the performance gains in newer titles keep increasing.

The only way Kepler disappointed us is that manual overclocking has become quite a hassle for the average user. Aside from this, the Kepler architecture has impressed us technologically and feature-wise. The ASUS GeForce GTX 680 delivers exceptional performance per watt thanks to the 28nm Kepler architecture, and features like GPU Boost, TXAA/FXAA and Adaptive V-Sync are much-needed additions that help bring PC gaming back into action.

Test and review: ASUS GeForce GTX 680 DirectCU II with 4 GB of memory (GTX680-DC2G-4GD5)

The GeForce GTX Titan has just been released, but it’s still too early to write off the GeForce GTX 680. The card has moved into second place in performance among NVIDIA’s single-GPU models, but it still delivers enough performance for today’s games. Shortly after the official release of the GTX 680 last year, ASUS sent the GeForce GTX 680 DirectCU II TOP to our test lab (Hardwareluxx test and review). Today we will consider its ‘older’ sibling, equipped with twice as much video memory: 4096 MB. Because this graphics card is not part of the TOP family, ASUS has left all other specifications unchanged, so it runs at standard clock speeds.

The NVIDIA GeForce GTX Titan is the new flagship single-GPU graphics card and in some tests even outperforms the dual-GPU GeForce GTX 690 based on the GK104 ‘Kepler’. Although the GeForce GTX 680 is no longer the fastest single-GPU model, it is still relevant, and with the GeForce GTX 700 family to be announced much later, it will remain so in the coming months. ASUS decided to equip the popular GeForce GTX 680 DirectCU II with twice the video memory a few weeks before the release of the GeForce GTX Titan. This step allows the video card to perform at its best at high resolutions and high quality settings compared to its predecessor with the standard 2 GB.

The rest of the specs are unchanged, since ASUS did not base this card on the factory-overclocked TOP model. Therefore, the 4 GB ASUS GeForce GTX 680 DirectCU II runs at the standard clock speeds announced by NVIDIA to its partners, but is equipped with ASUS’s proprietary DirectCU II cooler. It uses two 80mm fans, three heatpipes up to 8mm thick, and numerous aluminum fins. In our previous test, the 2GB card did very well, delivering cool temperatures with relatively low noise levels, so we approached the new card’s tests with interest: can it deliver equally good results?


Before we move on to reviewing the ASUS GeForce GTX 680 DirectCU II with 4096 MB of video memory, let me remind you of the architectural features of the ‘Kepler’ GPU.

Basic architecture information

The ASUS GeForce GTX 680 DirectCU II video card is based on the NVIDIA GK104 chip with the ‘Kepler’ architecture, manufactured on TSMC’s modern 28nm process. Until recently, the graphics card held the single-GPU performance crown with its 1,536 CUDA stream processors and a total of 3.54 billion transistors. As usual, they are organized into SMX clusters, each containing 192 ALUs (stream processors). The eight SMX clusters are combined in pairs to form Graphics Processing Clusters (GPCs), the largest elements of the GK104 GPU. Each SMX cluster uses 16 texture units; since the GeForce GTX 680 contains eight SMX clusters, we get a total of 128 TMUs. NVIDIA used four 64-bit controllers for the memory. The standard 2048 MB of GDDR5 was increased to 4096 MB in the case of the ASUS GeForce GTX 680 DirectCU II, connected via a 256-bit bus.

In terms of clock speeds, ASUS fully complied with NVIDIA’s specifications, so the chip operates at 1006 MHz and the memory at 1502 MHz. Depending on the load, the video card can raise the graphics chip’s frequency; in the case of the GeForce GTX 680, the boost is at least to 1058 MHz, provided the video card does not exceed its thermal envelope. For the 4 GB version of the GeForce GTX 680 DirectCU II, ASUS chose NVIDIA’s stock frequencies but doubled the amount of memory and installed a cooler of its own design.
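The 1502 MHz memory figure and the 6008 MHz effective figure quoted elsewhere are two views of the same clock: GDDR5 is quad-pumped, so the effective data rate is four times the command clock. A quick illustrative check, which also reproduces the 192.3 GB/s bandwidth in the spec table below:

```python
# GDDR5 effective data rate = 4 x memory command clock (quad-pumped signaling);
# bandwidth = effective rate x bus width in bytes. Illustrative arithmetic only.
memory_clock_mhz = 1502
effective_mhz = memory_clock_mhz * 4
bandwidth_gbs = effective_mhz * 1e6 * 256 / 8 / 1e9  # 256-bit bus = 32 bytes/transfer
print(effective_mhz, round(bandwidth_gbs, 1))  # 6008 192.3
```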

ASUS GeForce GTX 680 DirectCU II 4096 MB (GTX680-DC2G-4GD5)
Retail price about 500 euros in Europe
Manufacturer website Official page ASUS GeForce GTX 680 DirectCU II 4096 MB
Specifications
GPU GK104
Process 28 nm
Number of transistors 3.54 billion
GPU clock speed (base frequency) 1006 MHz
GPU clock speed (Boost frequency) 1059 MHz
Memory frequency 1502 MHz
Memory type GDDR5
Memory capacity 4096 MB
Memory bus width 256 bit
Memory bandwidth 192.3 GB/s
DirectX Version 11.0
Stream Processors 1536 (1D)
Texture blocks 128
Raster pipelines 32
Pixel Fill Rate 36.1 Gpixel/s.
SLI/CrossFire SLI

Before we get into the ASUS GeForce GTX 680 DirectCU II with 4096 MB of memory, let me introduce some performance measurements.


Review and testing of the video card ASUS GeForce GTX 680 DirectCU II TOP

GeForce GTX 680

Consider the specification of the new NVIDIA flagship:

The graphics core now contains a total of 1536 stream processors (the GTX 580 has 512). The GPU runs at 1006 MHz, but this is only the base value: during operation the chip can automatically accelerate and run at a higher frequency (up to 1110 MHz). An additional Boost Clock parameter has therefore been introduced, indicating an average boosted value. The amount of memory is 2 GB, accessed over a 256-bit bus. This aspect of the GTX 680 is perhaps the most controversial: the predecessor, as well as the competitor’s top solution, use a 384-bit bus. To get memory bandwidth similar to that of the GTX 580, NVIDIA had to increase the GDDR5 clock speed to 6 GHz. And the 2 GB of memory, against the 3 GB of the competitor and the previous NVIDIA flagship, does not look very impressive. For single-display configurations this is enough, but when connecting multiple high-resolution displays, questions may arise.
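The bandwidth-parity claim checks out with quick arithmetic (the helper below is ours, for illustration), and the same formula reproduces the Radeon HD 7970 figure in the comparison table:

```python
def bandwidth_gbs(effective_mhz: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

print(round(bandwidth_gbs(6008, 256), 1))  # GTX 680:  256-bit @ 6008 MHz -> 192.3 GB/s
print(round(bandwidth_gbs(4008, 384), 1))  # GTX 580:  384-bit @ 4008 MHz -> 192.4 GB/s
print(round(bandwidth_gbs(5500, 384), 1))  # HD 7970:  384-bit @ 5500 MHz -> 264.0 GB/s
```

So the narrower 256-bit bus at 6 GHz really does match the GTX 580’s 384-bit bus in raw bandwidth, while the HD 7970 retains a sizeable lead.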

TDP of 195 W is an excellent indicator for a video card of this level. The specified TDP allows you to get by with just two six-pin auxiliary power connectors. Previously, this configuration was available rather to mid-range adapters. We also note support for PCI-Express 3.0 and the presence of four digital video outputs — 2xDVI, HDMI and DisplayPort.

For comparison, let’s present the key characteristics of the predecessor and main competitor of the GeForce GTX 680.

GeForce GTX 680 GeForce GTX 580 Radeon HD 7970
Crystal name GK104 GF110 Tahiti XT
Manufacturing process, nm 28 40 28
Chip area, mm² 294 520 365
Number of transistors, billion 3.54 3 4.31
GPU clock frequency, MHz 1006/1058 772 925
Number of stream processors 1536 512 2048
Number of texture units 128 64 128
Number of ROPs 32 48 32
Memory (type, volume), MB GDDR5, 2048 GDDR5, 1536/3072 GDDR5, 3072
Memory bus bit 256 384 384
Effective memory clock frequency, MHz 6008 4008 5500
Memory bandwidth, GB/s 192 192 264
Power consumption, W 195 244 250
Recommended price $500 $400/500 $550

GTX 680 Overview

Externally, the graphics card differs little from its predecessor, the GTX 580, especially given the similar massive blower-style cooler and other features of the cooling system.

The vapor chamber was dropped from the cooler to save space. The radiator fins were given a more beveled shape to ease the passage of air. This, together with the special sound-absorbing material used to make the blower, leads to a noticeable reduction in cooler noise compared with the GTX 580 or the Radeon HD 7970.

Among other features of the model, it is worth noting:

  • the presence of full-size DisplayPort and HDMI ports, allowing you to do without adapters;
  • fewer memory chips per board compared to competitors;
  • lack of a heat spreader on the GPU to avoid chipping when installing the cooler.

Review and testing of the video card ASUS GeForce GTX 680 DirectCU II TOP

23.04.2012 19:00

Slayer Moon

Contents
Review and testing of the video card ASUS GeForce GTX 680 DirectCU II TOP
Test bench configuration
Game tests
Synthetic tests
Power consumption of ASUS GeForce GTX 680 DirectCU II TOP graphics card
Noise level
Overclocking ASUS GeForce GTX 680 DirectCU II TOP
Conclusions

Test bench configuration

Test bench
Processor: Intel Core i7-3770K @ 4.7 GHz (Ivy Bridge, 8192 KB cache)
Motherboard:

ASUS Maximus V Gene

Intel Z77

RAM: 2x 4096 MB Corsair Vengeance PC3-12800 DDR3 @ 1600 MHz 9-9-9-24
Storage: WD Caviar Blue WD5000AAKS 500 GB
Power supply: Antec HCP-1200 1200 watts
OS: Windows 7 64-bit SP1
Driver versions:

NVIDIA: 296.10

GTX 680: 301.24

ATI: Catalyst 12.3

Monitor: LG Flatron W3000H 30″ 2560×1600

All benchmarks were run at maximum graphics settings, with resolution and anti-aliasing set as follows:

  • 1280 x 1024, 2x AA. Standard graphics settings for monitors with a diagonal of 17″ — 19″
  • 1680×1050, 4x AA. Standard graphics settings for monitors with a diagonal of 19″ — 22″
  • 1920×1200, 4x AA. Standard graphics settings for monitors with a diagonal of 22″ — 26″
  • 2560×1600, 4x AA. Standard graphics settings for monitors with a diagonal of 30″ and above
  • 5760 x 1080, 4x AA. Typical resolution in a multi-monitor configuration.



How to overclock the Nvidia Geforce GTX 680

By overclocking the GTX 680 you can try to squeeze out extra performance in games, or even use the card for cryptocurrency mining. Beyond the built-in GPU Boost technology, this is done with one of the tuning utilities suitable for Nvidia GPUs, such as MSI Afterburner or RivaTuner.

By raising the video adapter's voltage by 10%, you can reach a core frequency of 1100 MHz (instead of 1006) and an effective memory frequency of 6912 MHz (instead of 6008). The price of this overclock was extra load on the cooling system, whose fans began running at maximum speed.

When using the graphics adapter in games, performance increased by approximately 7-13%, depending on the settings.

This level is not too high and roughly corresponds to the GTX 1050 graphics cards, although the newer model from Nvidia has lower power consumption, which means it is more profitable to mine.

With relatively cheap electricity, mining on the GTX 680 will bring a home user some profit, though buying this card specifically for that purpose is not advisable: the market offers options with a better ratio of cost to mining speed.
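
As a back-of-the-envelope illustration of why power draw matters here, a sketch with purely hypothetical numbers (the daily revenue figure is a placeholder, not a measurement):

```python
# Daily mining profit = revenue - electricity cost.
# 'revenue' below is a placeholder, set identical for both cards, since the
# text notes their mining performance is roughly comparable.

def daily_profit(revenue_usd: float, power_w: float, price_per_kwh: float) -> float:
    energy_cost = power_w / 1000 * 24 * price_per_kwh  # kWh per day * tariff
    return revenue_usd - energy_cost

revenue = 0.50  # hypothetical USD per day
gtx_680 = daily_profit(revenue, 195, 0.10)   # higher power draw eats the margin
gtx_1050 = daily_profit(revenue, 75, 0.10)   # same revenue, far lower cost
```

At these placeholder rates the 195 W card keeps only a few cents per day, while the 75 W card keeps most of the revenue, which is the sense in which the newer GTX 1050 is "more profitable" for mining.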

Overclocking

Although ASUS already raised the graphics chip's frequency considerably at the factory, there is still headroom for fans of manual tuning. Without compromising stability, the core could be pushed another 70 MHz, up to 1207 MHz, and the dynamic auto-overclocking ceiling rose by the same amount: thanks to GPU Boost, the chip "shoots" up to almost 1.3 GHz. Recall that this is a fully enabled GK104 with all CUDA cores active.

The operating frequency of the memory was increased by almost 900 MHz — from 6008 to 6896 MHz. A noticeable increase in the throughput of this subsystem should bring a good performance boost. It remains to verify this in practice.
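
Expressed as bandwidth, the memory overclock above works out as follows (a sketch using the same effective-rate formula as the spec sheet):

```python
# Memory bandwidth in GB/s: effective rate (MT/s) * bus width (bits) / 8.
def bandwidth_gb_s(effective_mhz: float, bus_bits: int = 256) -> float:
    return effective_mhz * bus_bits / 8 / 1000

stock = bandwidth_gb_s(6008)        # ~192.3 GB/s
overclocked = bandwidth_gb_s(6896)  # ~220.7 GB/s
gain_pct = (overclocked / stock - 1) * 100  # ~14.8% more throughput
```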

In progress

The latest version of the GPU-Z diagnostic utility recognizes the GeForce GTX 680 without problems, confirming the specified specifications.

In idle mode the graphics chip warms up to just 34 degrees, with the fan spinning at little more than 1000 rpm. After a fairly long gaming session the GPU temperature rose to 79°C and the fan speed to 2200–2300 rpm. Curiously, Furmark produced almost identical results, and the reason is GPU Boost. During a real game the chip's clock periodically climbed from the base 1006 MHz to 1110 MHz, with the supply voltage rising to 1.150–1.175 V. In the stress-test utility the adapter worked in a different mode: the GPU frequency did not exceed 1045 MHz and the maximum supply voltage was 1.087–1.1 V. These parameters kept the card within its TDP and prevented overheating.

It is worth clarifying that, despite similar fan speeds, the stock GeForce GTX 680 cooler is noticeably quieter than those of the Radeon HD 7970 and GeForce GTX 580. It is far from silent under load, of course, but all else being equal it is preferable to the coolers of its predecessor and competitor.

Test bench configuration

Processor Intel Core 2 GHz Intel, www.intel.ua
Cooler Thermalright Archon Rev.A «1-Incom», www.1-incom.com.ua
Motherboard MSI Z68A-GD65 (G3) (Intel Z68 Express) MSI, www.msi.com
Video cards ASUS ENGTX580/2DI/1536MD5 (GeForce GTX 580), ASUS HD7970-3GD5 (Radeon HD 7970) ASUS, www.asus.ua
RAM Team Xtreem TXD38192M2133HC9KDC-L (2x4GB DDR3-2133) DC-Link www.dclink.com.ua
Storage WD WD1001FALS (1 TB, 7200 rpm) WD, www.wdc.com
Power supply Thermaltake Toughpower Grand TPG-1200M (1200 W) Thermaltake, www.thermaltakeusa.com

Packing and contents

Picking up the ASUS GeForce GTX670 DirectCU Mini box, your first thought might be that you are holding a mid-range or budget video card: the box is that small compared with the packaging of the regular ASUS GeForce GTX670. The front of the package schematically shows off the proprietary DirectCU Mini cooling system. Lately practically every manufacturer advertises Windows 8 support on its packaging, and ASUS, not wanting to fall behind the trend, places this information right in the center of the front of the box.

The back of the box contains much more useful information. The photo of the video card in front, information on the connectors and the main technical advantages are presented.

The package contains only the necessary components:

  • ASUS GeForce GTX670 DirectCU Mini graphics accelerator;
  • adapter 2*6pin > 8pin;
  • CD with drivers and software;
  • user manual.

What are tensor cores?

While ray tracing is the top-selling feature of the RTX 20 and 30 series GPUs, the Turing architecture also introduced another major new feature to the main GeForce lineup, advanced deep learning capabilities made possible by dedicated tensor cores.

These cores were first introduced in 2017 in Nvidia's Volta GPUs; however, no gaming graphics cards were based on that architecture, so the tensor cores present in the Turing models are actually second-generation tensor cores. For games, deep learning has one main application: Deep Learning Super Sampling, DLSS for short, a completely new anti-aliasing technique. So how exactly does DLSS work, and is it better than conventional anti-aliasing methods?

DLSS uses deep learning models to generate detail and upscale the image to a higher resolution, making it sharper and reducing distortion. These models are trained on Nvidia supercomputers and then executed by the graphics card's tensor cores.

DLSS delivers sharper images while demanding less GPU power than most other anti-aliasing methods. What's more, the technology can noticeably improve performance when ray tracing is enabled, which is welcome considering how heavy that feature's performance cost is.

As with ray tracing, however, the list of games that currently support DLSS is unfortunately rather small, though this is likely to change in the future.

Game Testing

Testing the GTX 680's FPS in games gives an idea of its capabilities and helps decide whether the card is worth buying. The most revealing approach is to test the most demanding games both from the time of the model's release and from today, pairing the card with an appropriate CPU (Core i5 or i7) and memory (at least 6–8 GB).

When running modern games using the video adapter, the following results were obtained:

  • in Lost Planet 2, a game that leans heavily on 3D graphics and constantly overlays displacement maps on geometric objects, the GTX 680 delivers 52 frames per second;
  • in the even more demanding Metro 2033 at Full HD resolution, it manages about 25 FPS, on the verge of comfortable gameplay;
  • The GTX 680 is running at around 74fps in DiRT 3, although that game can no longer showcase its full graphics capabilities;
  • for a Mafia II FPS card reaches 50 even without overclocking, for an overclocked card the value increases by about 12%;
  • at maximum settings in Aliens vs. Predator video adapter shows 32 FPS;
  • for the technological shooter Hard Reset, the FPS values ​​turned out to be the most impressive — when you start the game at high settings, the frequency reaches 83 fps;
  • when trying to play one of the last parts of the Total War strategy, Shogun 2, the video card again gets a decent frequency value — 106 fps;
  • in the game Batman: Arkham City at a resolution of 1650×1080, the video adapter shows 27 FPS.

Modern games push the GTX 680 hard and performance suffers; at medium and minimum settings, however, almost everything still runs.

For example, Assassin's Creed: Origins, launched with a Core i7 processor and 6 GB of RAM, will deliver a smooth experience at 1280×720. Kingdom Come: Deliverance, a 2018 game, will also run on low settings, although it needs 8 GB of RAM to run properly.

It is not recommended to run Final Fantasy XV on a computer with a GTX 680 — it does not even meet the minimum requirements for this action role-playing game.

What is an RT core?

As mentioned above, RT cores are GPU cores dedicated exclusively to real-time ray tracing.

So what does ray tracing do for video game graphics? The technology enables more realistic lighting and reflections. This is achieved by tracing rays of light backward along their path, which lets the GPU simulate the interaction of light with the environment far more realistically. Ray tracing is possible even on GPUs without RT cores, but the performance is dismal, even on flagship models like the GTX 1080 Ti.

Speaking of performance, real-time ray tracing actually has a big impact on performance even when used with RTX GPUs, which inevitably leads to the question — is it worth using this technology at all?

As of 2020, few games support ray tracing.

The video above shows what ray tracing looks like in Control (2019): the graphical improvements provided by ray tracing are significant. However, the feature cuts the FPS by half, from a stable 60 to 30, and this is with a high-performance RTX 2070 Super graphics card!

Real-time ray tracing is a major advancement in gaming that will greatly improve video game graphics in the coming years. However, right now the hardware is not powerful enough and the developers are not yet fully exploiting the potential of the feature.

Video cards

We studied the capabilities of the GeForce GTX 680 using two adapters as an example:

ZOTAC ZT-60101-10P

Both models follow the reference design and differ only minimally in appearance and bundle. The bundle deserves a separate mention. The ASUS package is spartan: besides the disc with drivers and utilities, it includes a single molex-to-six-pin power adapter. ZOTAC ships its GTX 680 with a pair of such power adapters and a DVI-VGA adapter, and as a bonus offers a set of three Assassin's Creed games, including the latest, Revelations, plus a three-day trial coupon for TrackMania2 Canyon.

The operational nuances discussed here apply to both adapters, as well as to any other identical devices based on the reference PCB and cooling system.

Like its predecessor, the video card looks like a heavy bar, which, after being installed on the motherboard, will take up two expansion slots. The adapter’s cooling system is securely covered with a plastic cover; only the centrifugal fan on the top panel is visible.

The mounting bracket carries four interface connectors: a pair of DVI (one DVI-D, one DVI-I), HDMI version 1.4a and DisplayPort 1.2. Part of the bracket is taken up by a grille for exhausting heated air; to reduce airflow resistance, there are ventilation slots even between the HDMI and DVI ports.

The “GeForce GTX” logo adorns the top of the adapter, and it is not a mere sticker: the inscription was cast into the top cover during manufacture. At the edge of the PCB sit two six-pin auxiliary power connectors, stacked one above the other with a slight offset; this design approach had not been used before. Curiously, the locking latches face each other when the connectors are plugged in, but the difference in levels does not make connecting the cables difficult.

Traditionally, MIO connectors for connecting SLI bridges are located closer to the mounting plate, allowing you to create a configuration with several video cards. The presence of a pair of connectors indicates that there can be two, three or four such adapters.

The printed circuit board is made of black textolite and measures 254 mm in length. The GPU sits in the central part, with 8 memory chips soldered on three of its sides. Unlike the GF110, the new chip has no heat-spreading cover; a protective frame guards the die against chipping during cooler installation. The power subsystem and the vast majority of surface-mount components are concentrated on the right side of the board, with the voltage-regulator elements in an unusual horizontal orientation. A six-phase «4 + 2» scheme is used, and curiously, the board has pads for the components of one more phase. In addition, at the top edge there is an empty spot for a third six-pin power connector.

The general concept of the reference cooling system is typical for adapters of this level. The cooler includes a heat sink with a cassette consisting of aluminum plates, which is blown through with a radial fan. The overall metal frame, in addition to serving as the general frame of the CO, contacts with the memory chips and power elements of the stabilizer through thermal pads.

The radiator module is not as simple as it seems at first glance. Although there is no evaporation chamber, instead of it, three heat pipes are pressed into the base, which allow you to accelerate heat removal, evenly distributing it over the entire surface of the heat sink.

The efficiency and noise levels of such cooling systems are implementation dependent. The indisputable advantage of this type of CO is the removal of the bulk of the heated air outside the system unit.

Bases

All of Nvidia's gaming GPUs belong to its GeForce brand, launched in 1999 with the release of the GeForce 256. The GeForce 16 series was released in 2019, and the GeForce 30 series in 2020.

Today the GeForce 20 and GeForce 30 series consist exclusively of RTX GPUs, while the GeForce 16 series consists of GTX cards. So what do all these letters mean? In fact, neither GTX nor RTX is an acronym with a specific meaning; they exist purely for marketing purposes.

Nvidia has used several similar two- and three-letter designations to give users a general idea of ​​what performance each GPU can offer. For example, manufacturers have used designations such as GT, GTS, GTX, and many others over the years, but only GTX and the new RTX have survived to this day.

Testing

Nvidia GeForce RTX 3050Ti TGP 60W: 5246
Nvidia GeForce GTX 1650Ti TGP 60W: 3842

Shadow of the Tomb Raider. Shadow of the Tomb Raider takes place after the events of Rise of the Tomb Raider. This time Miss Croft travels to Latin America, where she explores the ruins of ancient civilizations (including Mayan and Aztec pyramids), uncovers the secrets of the Order of Trinity, and learns more about her father's research. The story is more mature than in previous games: to portray Lara as a more seasoned adventurer, the developers wanted her to face the consequences of her decisions. Shadow of the Tomb Raider uses a new game engine, the Foundation Engine, and adds improved subsurface scattering on Lara, volumetric lighting, and shafts of light through foliage. The new game also switches to a more advanced anti-aliasing method, TAA, instead of SMAA. Water effects are improved as well: waves from movement now have a three-dimensional shape, and interaction with water is generally better. Tessellation is present in both games but is used for different purposes.

Settings: Display 1920×1080 pixels, Detail: High Preset (test level: 60 FPS)

  • NVIDIA GeForce RTX 3050Ti TGP 75W Laptop: 82 FPS (100%)
  • NVIDIA GeForce RTX 3050Ti TGP 60W Laptop: 76 FPS (100%)
  • NVIDIA GeForce GTX 1650Ti TGP 60W Laptop: 57 FPS (87%)

Settings: Display 1920×1080 pixels, Detail: Highest Preset (test level: 60 FPS)

  • NVIDIA GeForce RTX 3050Ti TGP 75W Laptop: 58 FPS (98%)
  • NVIDIA GeForce RTX 3050Ti TGP 60W Laptop: 53 FPS (88%)
  • NVIDIA GeForce GTX 1650Ti TGP 60W Laptop: 48 FPS (80%)

Cooling Efficiency Test

DirectCU Mini’s proprietary cooling system performed well at idle, keeping the GPU temperature from rising above 38 degrees Celsius. This result was achieved with a relatively quiet fan at 1440 rpm.

After loading the card to 100% with the MSI Kombustor 2.5 warm-up utility, the GPU of the GTX670 reached 82 degrees Celsius. The noise level, however, could no longer be called comfortable: the DirectCU Mini fan, spinning at 2750 rpm, made itself heard even in the next room.

Appearance and cooling system

First of all, it is worth saying that on first impression the card genuinely surprises with its size. Holding this baby, you would think you were holding a GeForce GT630-class card, even though the power of a GeForce GTX670 hides under its shroud.

The video card is made in a very strict style, black textolite and black casing, with a red insert with the manufacturer’s logo.

On the rear wall there are DVI-I, DVI-D, HDMI, DisplayPort connectors, as well as a technological grill for blowing hot air from under the cooling system casing.

The size of the video card is two-slot. Such dimensions are due to its cooling system.

A rather rare configuration, a single 8-pin connector, is used for the card's auxiliary power. For compatibility with some power supplies, a 2x6pin > 8pin adapter is included. The PSU requirements of the ASUS GeForce GTX670 DirectCU Mini are modest: a 450–500 W unit is sufficient.

The cooling fan attaches via a 4-pin connector, which allows both controlling the fan speed and monitoring it.

Having removed the cooling system, you immediately notice four empty pads for memory chips on both the front and the back of the PCB. Presumably the card was designed in two versions, 2 GB and 4 GB, but only the 2 GB version ever saw the light of day.

An interesting feature of the ASUS GeForce GTX670 DirectCU Mini is a plate labeled Direct Power, which feeds power to the GPU pins located on the side opposite the power-delivery components. This engineering solution let ASUS avoid routing power traces around the GPU, thereby shrinking the PCB and, accordingly, the dimensions of the card as a whole.

The ASUS GeForce GTX670 DirectCU Mini is by no means a cut-down model compared to the regular GeForce GTX670. This is also proved by the presence of two SLI bridge connectors, with whose help you can build up to a 4-Way SLI configuration based on the card.

The GPU and memory power subsystem are made according to the 4+1 scheme. All four GPU power phases are in close proximity to each other, and the memory power phase is a little further away. The power cells are in contact with the DirectCU Mini cooling system through thermal pads.

The 28nm GK104 GPU is equipped with a protective metal frame around the perimeter.

Memory on the ASUS GeForce GTX670 DirectCU Mini has a capacity of 2 GB.

It is soldered with eight Hynix H5GQ2h34AFR chips, four chips on the front side, four on the back.

The DirectCU Mini cooling fan connector is located on the lower right side of the PCB.

The cooling system on the ASUS GeForce GTX670 DirectCU Mini differs strikingly from typical coolers in its lack of heat pipes. Instead, cooling is based on a copper base with a vapor chamber that transfers heat from the GPU to a massive round metal heatsink.

In addition to the vapor chamber, the DirectCU Mini also has a metal plate that removes heat from the power elements of the board and memory chips through thermal pads.

From above, the entire structure of the cooling system is covered with a plastic casing. A 97mm EVERFLOW fan is responsible for cooling the radiator of the cooling system and removing heat from the casing.

Conclusion

Small dimensions and modest power supply requirements let the ASUS GeForce GTX670 DirectCU Mini find a place even in the budget builds of novice gamers. The fly in the ointment is the high noise level of the proprietary DirectCU Mini cooling system. But something has to be sacrificed, either noise or performance, and with this unique product ASUS puts performance first. With performance sufficient for gaming at maximum graphics settings even in the most modern titles, the ASUS GeForce GTX670 DirectCU Mini lets owners of compact cases assemble truly powerful gaming builds.

Pros:

  • small;
  • 3 year warranty;
  • high performance;
  • low power supply requirements.

Cons:

high noise level under load.

Having appreciated the ASUS GeForce GTX670 DirectCU Mini video card, the editors of i2hard.ru award it with a fair assessment — a silver medal.

Conclusion

Well, it's time to sum up: the RTX designation was introduced by Nvidia mainly for marketing purposes, making the Turing 20-series GPUs look like a bigger upgrade than they actually are.

Of course, the RTX models come with some genuinely new features that will reach their full potential in the foreseeable future, and in terms of sheer performance the latest Ampere-based graphics cards are well ahead of the old Pascal-based GTX GPUs that sold for roughly the same price.

All things considered, we wouldn’t say that RTX GPUs are worth buying just for ray tracing and DLSS, as performance should always come first, especially if you want to get the most bang for your buck. On the other hand, these technologies will develop in the near future, and in a couple of years, GTX graphics chips will be frankly outdated.

If you’re looking to buy a new graphics card, it might be worth checking out this article where we’ve listed the best graphics cards available on the market right now.

Results

The ASUS GTX 680 DirectCU II TOP is the fastest GeForce GTX 680 model, with the highest factory GPU frequency and an efficient cooling system. A great option for a discerning connoisseur of gaming with a well-padded wallet: the newcomer's price is no less impressive than its capabilities. Top-end video cards are never a cheap pleasure, but in this case it is clear what you are paying extra for. So if you have an irresistible desire to own the most powerful single-GPU card on the market, this will be a worthy gift for... yourself.

+ Efficient and fairly quiet cooling system

+ Maximum performance

+ Significantly increased GPU frequency

+ Good overclocking potential

+ Reinforced power subsystem

— High price

Device provided for testing by ASUS, www.asus.ua

Test bench configuration

Processor Intel Core i7-3770K @ 4.5GHz Intel
Cooler Thermalright Archon Rev.A «1-Incom», www.1-incom.com.ua
Motherboard ASUS P8Z77-V LE (Intel Z77 Express) ASUS, www.asus.ua
RAM Team Xtreem TXD38192M2133HC9KDC-L (2×4 GB DDR3-2133) DC-Link www.dclink.com.ua
Storage WD WD1001FALS (1 TB, 7200 rpm) WD, www.wdc.com
Power supply Thermaltake Toughpower Grand TPG-1200M (1200 W) Thermaltake, www.thermaltakeusa.com
ASUS GTX680-DC2T-2GD5
Memory size, MB 2048
Memory type GDDR5
Interface PCI Express 3.0 x16
Cooling system active
GPU operating frequencies, MHz 1137-1201
Memory frequencies, MHz 6008
Memory bus bit 256
Output connectors 2xDVI (1xDual Link DVI-I and 1xDual Link DVI-D), 1xHDMI, 1xDisplay Port (Regular DP)
Dimensions, mm 300x130x58
Additional power supply 1x6pin, 1x8pin
DirectX support + (DirectX 11.1)
Miscellaneous DirectCU II Cooling System, VGA Hotwire, 10-Phase DIGI+ Power System and Factory Overclocked to 1201MHz

Asus GeForce GTX 680 DirectCU II

42 points

Asus GeForce GTX 680 DirectCU II

Asus GeForce GTX 680 DirectCU II

Why is the Asus GeForce GTX 680 DirectCU II better than others?

  • PassMark result (G3D)?
    5683 vs 5024.12
  • 3DMark Vantage Texture Fill result?
    102.04GTexels/s vs 90.02GTexels/s
  • 3DMark Vantage Pixel Fill result?
    13.03GPixel/s vs 10.98GPixel/s
  • PassMark result (DirectCompute)?
    2910 vs 2102.76
  • DVI outputs?
    2 vs 0.79
  • Multi-GPU?
    3 vs 2.75

Which comparisons are the most popular?

Asus GeForce GTX 680 DirectCU II

vs

Nvidia Quadro K4000

Asus GeForce GTX 680 DirectCU II

vs

Asus ROG Strix GeForce GTX 1080 Ti OC

Asus GeForce GTX 680 DirectCU II

vs

Asus ROG Strix LC GeForce RTX 3090 Ti OC

Asus GeForce GTX 680 DirectCU II

vs

Asus TUF GeForce RTX 30

2. turbo GPU

1058MHz

When the GPU is running below its limits, it can jump to a higher clock speed to increase performance.

3.pixel rate

32.2 GPixel/s

The number of pixels that can be displayed on the screen every second.

4. FLOPS

3.09 TFLOPS

FLOPS is a measurement of GPU processing power.

5. Texture rate

129 GTexels/s

The number of textured pixels that can be displayed on the screen every second.

6.GPU memory speed

1502MHz

Memory speed is one aspect that determines memory bandwidth.

7. Shading units

Shading units (or stream processors) are small processors in a graphics card that are responsible for processing various aspects of an image.

8. Texture mapping units (TMUs)

TMUs take texture data and map it onto the geometry of the 3D scene. More TMUs generally means texture information is processed faster.

9. ROPs

ROPs are responsible for some of the final steps of the rendering process, such as writing the final pixel data to memory and for performing other tasks such as anti-aliasing to improve the appearance of graphics.

Memory

1. Effective memory speed

6008MHz

The effective memory clock is calculated from the memory's real clock and the number of data transfers per cycle. A higher effective clock can give better performance in games and other applications.
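
The quoted 6008 MHz figure follows from GDDR5's four data transfers per clock cycle, a one-line sketch:

```python
# GDDR5 is quad-pumped: four data transfers per real clock cycle, so spec
# sheets quote an "effective" clock of 4x the real memory clock.
real_clock_mhz = 1502
effective_mhz = real_clock_mhz * 4  # 6008 MHz, as quoted for the GTX 680
```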

2.max memory bandwidth

192GB/s

This is the maximum rate at which data can be read from or stored in memory.

3.VRAM

VRAM (video RAM) is the dedicated memory of the graphics card. More VRAM usually allows you to run games at higher settings, especially for things like texture resolution.

4. memory bus width

256bit

Wider memory bus means it can carry more data per cycle. This is an important factor in memory performance, and therefore the overall performance of the graphics card.

5. GDDR memory versions

Later versions of GDDR memory offer improvements such as higher data transfer rates, which improve performance.

6. Supports error-correcting code (ECC) memory

✖Asus GeForce GTX 680 DirectCU II

ECC memory can detect and fix data corruption. It is used where data integrity matters, such as in scientific computing or on servers.

Features

1.DirectX version

DirectX is used in games; a newer version means better graphics support.

2. OpenGL version

The newer the OpenGL version, the better graphics quality in games.

3. OpenCL version

Some applications use OpenCL to use the power of the graphics processing unit (GPU) for non-graphical computing. Newer versions are more functional and better quality.

4.Supports multi-monitor technology

✔Asus GeForce GTX 680 DirectCU II

The video card is capable of connecting multiple displays. This allows you to set up multiple monitors at the same time to create a more immersive gaming experience, such as a wider field of view.

5. GPU temperature under load

A lower load temperature means the card generates less heat and its cooling system performs better.

6.supports ray tracing

✖Asus GeForce GTX 680 DirectCU II

Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows and reflections in games.

7. Supports 3D

✔Asus GeForce GTX 680 DirectCU II

Allows you to view in 3D (if you have a 3D screen and glasses).

8.supports DLSS

✖Asus GeForce GTX 680 DirectCU II

DLSS (Deep Learning Super Sampling) is an AI based scaling technology. This allows the graphics card to render games at lower resolutions and upscale them to higher resolutions with near-native visual quality and improved performance. DLSS is only available in some games.

9. PassMark result (G3D)

This test measures the graphics performance of a graphics card. Source: Pass Mark.

Ports

1.has HDMI output

✔Asus GeForce GTX 680 DirectCU II

Devices with HDMI or mini HDMI ports can stream HD video and audio to the connected display.

2.HDMI connectors

Unknown. Help us by suggesting a value.

More HDMI connections allow you to connect multiple devices at the same time, such as game consoles and TVs.