GeForce GTX 580 vs GeForce GTX 480 [in 5 benchmarks]

Nvidia GeForce GTX 480 vs Nvidia GeForce GTX 580: What is the difference?

Nvidia GeForce GTX 480: 28 points
Nvidia GeForce GTX 580: 34 points

54 facts in comparison

Why is Nvidia GeForce GTX 480 better than Nvidia GeForce GTX 580?

Why is Nvidia GeForce GTX 580 better than Nvidia GeForce GTX 480?

  • 71 MHz faster GPU clock speed: 772 MHz vs 701 MHz
  • 0.24 TFLOPS higher floating-point performance: 1.58 TFLOPS vs 1.34 TFLOPS
  • 3.7 GPixel/s higher pixel rate: 24.7 GPixel/s vs 21 GPixel/s
  • 78 MHz faster memory clock speed: 1002 MHz vs 924 MHz
  • 312 MHz higher effective memory clock speed: 4008 MHz vs 3696 MHz
  • 7.3 GTexels/s higher texture rate: 49.4 GTexels/s vs 42.1 GTexels/s
  • 15 GB/s more memory bandwidth: 192.4 GB/s vs 177.4 GB/s
  • 32 more shading units: 512 vs 480

Which are the most popular comparisons?

  • Nvidia GeForce GTX 480 vs Nvidia GeForce GTX 1050
  • Nvidia GeForce GTX 580 vs Nvidia GeForce RTX 2070
  • Nvidia GeForce GTX 480 vs Gainward GeForce GTX 660 Ti
  • Nvidia GeForce GTX 580 vs AMD Radeon HD 6950
  • Nvidia GeForce GTX 480 vs AMD Radeon HD 6950
  • Nvidia GeForce GTX 580 vs Nvidia GeForce GTX 1050 Ti
  • Nvidia GeForce GTX 480 vs ATI Radeon HD 5970
  • Nvidia GeForce GTX 580 vs Nvidia GeForce GTX 1050
  • Nvidia GeForce GTX 480 vs Nvidia GeForce GTX 560 Ti
  • Nvidia GeForce GTX 580 vs Nvidia GeForce GTX 650 Ti
  • Nvidia GeForce GTX 480 vs Nvidia GeForce GT 1030 DDR4
  • Nvidia GeForce GTX 580 vs Nvidia GeForce GTX 980
  • Nvidia GeForce GTX 480 vs Nvidia GeForce MX110
  • Nvidia GeForce GTX 580 vs AMD Radeon 535
  • Nvidia GeForce GTX 480 vs AMD Radeon RX 480
  • Nvidia GeForce GTX 580 vs Nvidia GeForce GT 1030 DDR4
  • Nvidia GeForce GTX 480 vs Nvidia GeForce GTX 750 Ti
  • Nvidia GeForce GTX 580 vs AMD Radeon RX 550
  • Nvidia GeForce GTX 480 vs Zotac GeForce GT 240 AMP! Edition
  • Nvidia GeForce GTX 580 vs Nvidia GeForce GTX 660 Ti


Performance

1. GPU clock speed

701MHz

772MHz

The graphics processing unit (GPU) has a higher clock speed.

2. GPU turbo

Unknown for both cards.

When the GPU is running below its limits, it can boost to a higher clock speed to give increased performance.

3. Pixel rate

21 GPixel/s

24.7 GPixel/s

The number of pixels that can be rendered to the screen every second.

4. Floating-point performance

1.34 TFLOPS

1.58 TFLOPS

Floating-point performance is a measurement of the raw processing power of the GPU.
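As a sanity check, the TFLOPS figures quoted here can be reproduced from the shader count and the shader ("hot") clock: each Fermi CUDA core executes one fused multiply-add (2 FLOPs) per shader clock. The shader clocks used below (1401 MHz for the GTX 480, 1544 MHz for the GTX 580, i.e. twice the core clock) are not listed on this page and are an assumption; a rough sketch:

```python
def fermi_gflops(shaders, shader_clock_mhz):
    # Peak single-precision GFLOPS: one FMA (2 FLOPs) per shader per shader clock.
    return shaders * shader_clock_mhz * 2 / 1000.0

# Assumed shader (hot) clocks: GTX 480 ~1401 MHz, GTX 580 ~1544 MHz.
print(round(fermi_gflops(480, 1401), 1))  # GTX 480: ~1345.0 GFLOPS (~1.34 TFLOPS)
print(round(fermi_gflops(512, 1544), 1))  # GTX 580: ~1581.1 GFLOPS (~1.58 TFLOPS)
```

These match the 1.34 vs 1.58 TFLOPS figures above and the GFLOPS values in the Full Specs table.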

5. Texture rate

42.1 GTexels/s

49.4 GTexels/s

The number of textured pixels that can be rendered to the screen every second.
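The texture rates above follow directly from TMU count times core clock (one texel filtered per TMU per clock). The TMU counts used below (60 for the GTX 480, 64 for the GTX 580) do not appear on this page and are an assumption; a minimal sketch:

```python
def texture_rate_gtexels(tmus, core_clock_mhz):
    # Each TMU filters one texel per core clock cycle.
    return tmus * core_clock_mhz / 1000.0

# Assumed TMU counts: GTX 480 = 60, GTX 580 = 64.
print(round(texture_rate_gtexels(60, 701), 1))  # GTX 480: ~42.1 GTexels/s
print(round(texture_rate_gtexels(64, 772), 1))  # GTX 580: ~49.4 GTexels/s
```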

6. GPU memory speed

924MHz

1002MHz

The memory clock speed is one aspect that determines the memory bandwidth.

7. Shading units

480

512

Shading units (or stream processors) are small processors within the graphics card that are responsible for processing different aspects of the image.

8. Texture mapping units (TMUs)

TMUs take textures and map them to the geometry of a 3D scene. More TMUs will typically mean that texture information is processed faster.

9. Render output units (ROPs)

The ROPs are responsible for some of the final steps of the rendering process, writing the final pixel data to memory and carrying out other tasks such as anti-aliasing to improve the look of graphics.

Memory

1. Effective memory speed

3696MHz

4008MHz

The effective memory clock speed is calculated from the size and data rate of the memory. Higher clock speeds can give increased performance in games and other apps.

2. Maximum memory bandwidth

177.4GB/s

192.4GB/s

This is the maximum rate that data can be read from or stored into memory.
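Both figures above can be derived from the memory numbers on this page. GDDR5 is quad-pumped, so the 924/1002 MHz memory clocks give the 3696/4008 MHz effective speeds, and multiplying the effective rate by the 384-bit bus width (48 bytes per transfer) yields the bandwidth. A minimal sketch:

```python
def gddr5_effective_mhz(command_clock_mhz):
    # GDDR5 performs four data transfers per command-clock cycle (quad data rate).
    return command_clock_mhz * 4

def bandwidth_gb_s(effective_mhz, bus_width_bits):
    # transfers per second * bytes per transfer, reported in GB/s (10^9 bytes)
    return effective_mhz * 1e6 * (bus_width_bits // 8) / 1e9

print(gddr5_effective_mhz(924), gddr5_effective_mhz(1002))  # 3696 4008
print(round(bandwidth_gb_s(3696, 384), 1))  # GTX 480: ~177.4 GB/s
print(round(bandwidth_gb_s(4008, 384), 1))  # GTX 580: ~192.4 GB/s
```

Note that the Full Specs table further down quotes the same memory as 1848/2004 MHz, i.e. the double-pumped data clock; both conventions describe identical hardware.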

3. VRAM

1536 MB

1536 MB

VRAM (video RAM) is the dedicated memory of a graphics card. More VRAM generally allows you to run games at higher settings, especially for things like texture resolution.

4. Memory bus width

384bit

384bit

A wider bus width means that it can carry more data per cycle. It is an important factor of memory performance, and therefore the general performance of the graphics card.

5. Version of GDDR memory

GDDR5

GDDR5

Newer versions of GDDR memory offer improvements such as higher transfer rates that give increased performance.

6. Supports ECC memory

✖Nvidia GeForce GTX 480

✖Nvidia GeForce GTX 580

Error-correcting code memory can detect and correct data corruption. It is used when it is essential to avoid corruption, such as in scientific computing or when running a server.

Features

1. DirectX version

DirectX is used in games, with newer versions supporting better graphics.

2. OpenGL version

OpenGL is used in games, with newer versions supporting better graphics.

3. OpenCL version

Some apps use OpenCL to apply the power of the graphics processing unit (GPU) for non-graphical computing. Newer versions introduce more functionality and better performance.

4. Supports multi-display technology

✔Nvidia GeForce GTX 480

✔Nvidia GeForce GTX 580

The graphics card supports multi-display technology. This allows you to configure multiple monitors in order to create a more immersive gaming experience, such as having a wider field of view.

5. Load GPU temperature

Unknown for both cards.

A lower load temperature means that the card produces less heat and its cooling system performs better.

6. Supports ray tracing

✖Nvidia GeForce GTX 480

✖Nvidia GeForce GTX 580

Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows, and reflections in games.

7. Supports 3D

✔Nvidia GeForce GTX 480

✔Nvidia GeForce GTX 580

Allows you to view in 3D (if you have a 3D display and glasses).

8. Supports DLSS

✖Nvidia GeForce GTX 480

✖Nvidia GeForce GTX 580

DLSS (Deep Learning Super Sampling) is an upscaling technology powered by AI. It allows the graphics card to render games at a lower resolution and upscale them to a higher resolution with near-native visual quality and increased performance. DLSS is only available on select games.

9. PassMark (G3D) result

This benchmark measures the graphics performance of a video card. Source: PassMark.

Ports

1. Has an HDMI output

✔Nvidia GeForce GTX 480

✔Nvidia GeForce GTX 580

Devices with an HDMI or mini HDMI port can transfer high-definition video and audio to a display.

2. HDMI ports

Unknown for both cards.

More HDMI ports mean that you can connect multiple devices, such as video game consoles and set-top boxes, simultaneously.

3. HDMI version

Unknown for both cards.

Newer versions of HDMI support higher bandwidth, which allows for higher resolutions and frame rates.

4. DisplayPort outputs

Allows you to connect to a display using DisplayPort.

5. DVI outputs

Allows you to connect to a display using DVI.

6. Mini DisplayPort outputs

Allows you to connect to a display using mini-DisplayPort.


GeForce GTX 480 vs GeForce GTX 580: Graphics Card Comparison

When choosing between the GeForce GTX 480 and the GeForce GTX 580, it is worth examining the specifications of both models in detail. Do they meet the recommended requirements of modern games and software? Storage capacity, form factor, TDP, available ports, warranty and manufacturer support are all important. For example, the size of a PC case can limit the maximum thickness and length of the card. Often, instead of a factory-overclocked card with RGB lighting, it is better to choose a reference model with a more efficient GPU. Also make sure that your current power supply unit has the correct connection pins (using adapters is not recommended). This GPU comparison tool is meant to help you choose the best graphics card for your build. Let's find out the difference between the GeForce GTX 480 and the GeForce GTX 580.


Main Specs

  GeForce GTX 480 GeForce GTX 580
Power consumption (TDP) 250 Watt 244 Watt
Interface PCIe 2.0 x16 PCIe 2.0 x16
Supplementary power connectors One 6-pin and one 8-pin One 6-pin and one 8-pin
Memory type GDDR5 GDDR5
Maximum RAM amount 1536 MB 1536 MB
Display Connectors 2x DVI, 1x mini-HDMI 2x DVI, 1x mini-HDMI
 


  • GeForce GTX 480 has 2% higher power consumption (TDP) than GeForce GTX 580.
  • Both video cards use a PCIe 2.0 x16 interface to connect to the motherboard.
  • GeForce GTX 480 and GeForce GTX 580 both have a maximum of 1536 MB of RAM.
  • Both cards are used in desktops.
  • GeForce GTX 480 and GeForce GTX 580 are built on the Fermi architecture.
  • Core clock speed of GeForce GTX 580 is 71 MHz higher than that of GeForce GTX 480.
  • GeForce GTX 480 and GeForce GTX 580 are manufactured on a 40 nm process.
  • Both graphics cards are the same length: 267 mm (10.5 inches).
  • Memory clock speed of GeForce GTX 580 is 156 MHz higher than that of GeForce GTX 480.
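The "2% higher power consumption" figure in the first bullet is just the relative TDP difference between the two cards; a one-liner you can reuse to check it (or any of the other deltas) from the Main Specs table:

```python
def pct_more(a, b):
    # How much larger a is than b, as a percentage of b.
    return (a - b) / b * 100

# TDP: GTX 480 at 250 W vs GTX 580 at 244 W
print(round(pct_more(250, 244)))  # ~2 (percent)
```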

Game benchmarks

Games tested: Assassin’s Creed Odyssey, Battlefield 5, Call of Duty: Warzone, Counter-Strike: Global Offensive, Cyberpunk 2077, Dota 2, Far Cry 5, Fortnite, Forza Horizon 4, Grand Theft Auto V, Metro Exodus, Minecraft, PLAYERUNKNOWN’S BATTLEGROUNDS, Red Dead Redemption 2, The Witcher 3: Wild Hunt, World of Tanks.

In each block below, the FPS ranges are listed as GeForce GTX 480 then GeForce GTX 580 at the given quality preset and resolution; the closing line of each block names the game.
high / 1080p 21−24 21−24
ultra / 1080p 12−14 14−16
QHD / 1440p 6−7 8−9
4K / 2160p 5−6 6−7
low / 720p 40−45 40−45
medium / 1080p 24−27 27−30
The average gaming FPS of GeForce GTX 580 in Assassin’s Creed Odyssey is 5% more, than GeForce GTX 480.
high / 1080p 30−35 35−40
ultra / 1080p 27−30 30−35
QHD / 1440p 12−14 16−18
4K / 2160p 8−9 10−11
low / 720p 65−70 75−80
medium / 1080p 35−40 40−45
The average gaming FPS of GeForce GTX 580 in Battlefield 5 is 16% more, than GeForce GTX 480.
low / 768p 50−55 50−55
QHD / 1440p 0−1 0−1
GeForce GTX 480 and GeForce GTX 580 have the same average FPS in Call of Duty: Warzone.
low / 768p 230−240 240−250
medium / 768p 200−210 210−220
ultra / 1080p 120−130 130−140
QHD / 1440p 90−95 100−110
4K / 2160p 50−55 55−60
high / 768p 170−180 180−190
The average gaming FPS of GeForce GTX 580 in Counter-Strike: Global Offensive is 6% more, than GeForce GTX 480.
low / 768p 60−65 60−65
ultra / 1080p 50−55 55−60
medium / 1080p 55−60 55−60
The average gaming FPS of GeForce GTX 580 in Cyberpunk 2077 is 3% more, than GeForce GTX 480.
low / 768p 120−130 120−130
medium / 768p 100−110 110−120
ultra / 1080p 80−85 85−90
The average gaming FPS of GeForce GTX 580 in Dota 2 is 4% more, than GeForce GTX 480.
high / 1080p 24−27 27−30
ultra / 1080p 21−24 24−27
QHD / 1440p 18−20 21−24
4K / 2160p 8−9 9−10
low / 720p 50−55 55−60
medium / 1080p 27−30 30−35
The average gaming FPS of GeForce GTX 580 in Far Cry 5 is 11% more, than GeForce GTX 480.
high / 1080p 30−35 35−40
ultra / 1080p 24−27 27−30
QHD / 1440p 16−18 18−20
low / 720p 120−130 130−140
medium / 1080p 70−75 80−85
The average gaming FPS of GeForce GTX 580 in Fortnite is 11% more, than GeForce GTX 480.
high / 1080p 35−40 40−45
ultra / 1080p 27−30 30−33
QHD / 1440p 14−16 16−18
4K / 2160p 12−14 14−16
low / 720p 70−75 75−80
medium / 1080p 40−45 40−45
The average gaming FPS of GeForce GTX 580 in Forza Horizon 4 is 8% more, than GeForce GTX 480.
low / 768p 100−110 110−120
medium / 768p 95−100 100−110
high / 1080p 40−45 45−50
ultra / 1080p 16−18 18−20
QHD / 1440p 7−8 10−11
The average gaming FPS of GeForce GTX 580 in Grand Theft Auto V is 11% more, than GeForce GTX 480.
high / 1080p 12−14 14−16
ultra / 1080p 10−11 12−14
QHD / 1440p 10−11 10−12
4K / 2160p 3−4 4−5
low / 720p 40−45 45−50
medium / 1080p 18−20 21−24
The average gaming FPS of GeForce GTX 580 in Metro Exodus is 12% more, than GeForce GTX 480.
low / 768p 120−130 120−130
medium / 1080p 110−120 110−120
GeForce GTX 480 and GeForce GTX 580 have the same average FPS in Minecraft.
ultra / 1080p 14−16 14−16
low / 720p 70−75 75−80
medium / 1080p 18−20 18−20
The average gaming FPS of GeForce GTX 580 in PLAYERUNKNOWN’S BATTLEGROUNDS is 5% more, than GeForce GTX 480.
high / 1080p 16−18 16−18
ultra / 1080p 10−11 10−12
QHD / 1440p 2−3 3−4
4K / 2160p 1−2 2−3
low / 720p 40−45 45−50
medium / 1080p 21−24 21−24
The average gaming FPS of GeForce GTX 580 in Red Dead Redemption 2 is 6% more, than GeForce GTX 480.
low / 768p 75−80 85−90
medium / 768p 45−50 50−55
high / 1080p 24−27 27−30
ultra / 1080p 14−16 16−18
4K / 2160p 8−9 9−10
The average gaming FPS of GeForce GTX 580 in The Witcher 3: Wild Hunt is 14% more, than GeForce GTX 480.
low / 768p 90−95 90−95
medium / 768p 60−65 60−65
ultra / 1080p 40−45 45−50
high / 768p 55−60 55−60
The average gaming FPS of GeForce GTX 580 in World of Tanks is 3% more, than GeForce GTX 480.

Full Specs

  GeForce GTX 480 GeForce GTX 580
Architecture Fermi Fermi
Code name GF100 GF110
Type Desktop Desktop
Release date 26 March 2010 9 November 2010
Pipelines 480 512
Core clock speed 701 MHz 772 MHz
Transistor count 3,100 million 3,000 million
Manufacturing process technology 40 nm 40 nm
Texture fill rate 42.1 billion/sec 49.4 billion/sec
Floating-point performance 1,345.0 GFLOPS 1,581.1 GFLOPS
Length 10.5″ (267 mm) (26.7 cm) 10.5″ (267 mm) (26.7 cm)
Memory bus width 384 Bit 384 Bit
Memory clock speed 1848 MHz (3696 data rate) 2004 MHz (4008 data rate)
Memory bandwidth 177.4 GB/s 192.4 GB/s
Shared memory
DirectX 12 (11_0) 12 (11_0)
Shader Model 5.1 5.1
OpenGL 4.2 4.2
OpenCL 1.1 1.1
Vulkan N/A N/A
CUDA + +
CUDA cores 480 512
Bus support PCI-E 2.0 x16 PCI-E 2.0 x16
Height 4.376″ (111 mm) (11.1 cm) 4.376″ (111 mm) (11.1 cm)
SLI options + +
Multi monitor support + +
HDMI + +
HDCP + +
Maximum VGA resolution 2048×1536 2048×1536
Audio input for HDMI Internal Internal
Bitcoin / BTC (SHA256) 117 Mh/s 142 Mh/s
 

Check Price

Check Price

Similar compares

  • GeForce GTX 480 vs Radeon RX 460
  • GeForce GTX 480 vs Quadro M2000
  • GeForce GTX 580 vs Radeon RX 460
  • GeForce GTX 580 vs Quadro M2000
  • GeForce GTX 480 vs Radeon HD 7870
  • GeForce GTX 480 vs Quadro M2200
  • GeForce GTX 580 vs Radeon HD 7870
  • GeForce GTX 580 vs Quadro M2200

(PhysX Test) GTX 680 vs GTX 580 vs GTX 480 in FluidMark

JeGX


Some readers asked me how much better the GeForce GTX 680 is in PhysX compared to the previous GTX 580 and GTX 480. To provide a partial answer, I quickly tested the GTX 480, GTX 580 and GTX 680 with FluidMark 1.5.0 at different settings (on the H67 testbed). Graphics drivers: R301.24.

The GeForce GTX 480 is the reference card (score is 100%).

Test 1 – Preset:720 (30000 SPH particles, 1280×720 fullscreen)

EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
4699 points, 77 FPS (100%)
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
5590 points, 92 FPS (118%)
EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
8395 points, 137 FPS (178%)

Test 2 – Preset:1080 (60000 SPH particles, 1920×1080 fullscreen)

EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
1509 points, 25 FPS (100%)
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
1807 points, 30 FPS (119%)
EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
3576 points, 59 FPS (236%)

Test 3 – Custom settings: 120000 SPH particles, 1920×1080 fullscreen

EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
253 points, 21 FPS (100%)
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
300 points, 25 FPS (118%)
EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
530 points, 44 FPS (209%)

If we take the GTX 480 as the reference card (100%), the GTX 580 is around 19% faster, while the GTX 680 is 78% to 136% faster in these PhysX fluid tests. The performance boost of the GTX 680 is rather impressive…
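The relative scores quoted in each test are simply each card's points divided by the GTX 480's points. Reproducing them from the figures above (percentages truncated, matching the article):

```python
# FluidMark points from the three tests above: (GTX 480, GTX 580, GTX 680)
tests = {
    "Preset:720":  (4699, 5590, 8395),
    "Preset:1080": (1509, 1807, 3576),
    "Custom":      (253, 300, 530),
}

for name, (p480, p580, p680) in tests.items():
    # Score relative to the GTX 480 reference, truncated to whole percent
    print(f"{name}: GTX 580 {int(p580 / p480 * 100)}%, GTX 680 {int(p680 / p480 * 100)}%")
```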



8 Years Later: Does the GeForce GTX 580 Still Have Game in 2018?

Today we’re turning our clocks all the way back to November 2010 to revisit the once mighty GeForce GTX 580, a card that marked the transition from 55nm to 40nm for Nvidia’s high-end GPUs and helped the company sweep its Fermi-based GeForce GTX 480 under the rug.

If you recall, the GTX 480 and 470 (GF100) launched just eight months before the GTX 580, in March 2010, giving gamers the opportunity to buy a $500 graphics card that could double as a George Foreman grill. With a TDP of 250 watts, the GTX 480 didn't raise too many eyebrows on paper: around that time, AMD's much smaller Radeon HD 5870 was rated at 228 watts, while its dual-GPU 5970 smoked most power supplies with its 294-watt rating.

Although both the Radeon HD 5870 and GeForce GTX 480 were built on the same 40nm manufacturing process, the GeForce was massive. Its die was over 50% larger. Nvidia aimed to create a GPU that packed both strong gaming and strong compute performance, resulting in the GF100 die measuring an insane 529mm2.

This behemoth of a die caused problems for TSMC's 40nm process, which was still maturing at the time, resulting in poor yields and leaky transistors. For gamers, the GTX 480 was simply too hot to handle — you couldn't touch it even 10 minutes after shutting down — and it was far from quiet as well.

Fermi went down as one of Nvidia's worst creations, and at the time the company knew it had a real stinker on its hands, so it went back to the drawing board and returned later that year to relaunch Fermi as the GTX 580. By November, TSMC had ironed out the kinks in its 40nm process, and we got 7% more cores in a slightly smaller die. Of course, it wasn't just TSMC that had time to work things out: Nvidia also fixed what was wrong with GF100 when it created GF110 (the Fermi refresh).

Based on our own testing from 2010, power consumption was reduced by at least 5% and reference card load temperatures dropped 15%. Making those results more impressive, the GTX 580 was ~25% faster than the 480.

That's a significant performance boost in the same release year and on the same process, while also saving power. The GTX 480 truly sucked. I didn't like it on arrival, but it wasn't until the 580 came along eight months later that I understood just how rubbish it really was. I concluded my GTX 580 coverage by saying the following:

"The GeForce GTX 580 is unquestionably the fastest single-GPU graphics card available today, representing a better value than any other high-end video card. However, it's not the only option for those looking to spend $500. Dual Radeon HD 6870s remain very attractive at just under $500 and deliver more performance than a single GeForce GTX 580 in most titles. However, multi-GPU technology isn't without pitfalls, and given the choice we would always opt for a single high-end graphics card.

Nvidia's GeForce GTX 580 may be the current king of the hill, but this could all change next month when AMD launches their new Radeon HD 6900 series. AMD was originally expected to deliver its Cayman XT and Pro-based Radeon HD 6970 and 6950 graphics cards sometime during November, but they have postponed their arrival until mid-December for undisclosed reasons. If you don't mind holding off a few short weeks, the wait could be worth some savings or potentially more performance for the same dollars, depending on what AMD has reserved for us."

For those of you wondering, the Radeon HD 6970 turned out to be a bit of a disappointment, falling well short of the GTX 580 and as a result was priced to compete with the GTX 570, allowing Nvidia to hold the performance crown without dispute for the next few years. It would be interesting to see how those two stack up today, perhaps that’s something we can look at in the future.

So, we’ve established that the GTX 580 was able to save face for Nvidia in 2010 and retain the performance crown for a few years before 28nm GPUs arrived in 2012, reigniting the graphics war once more. What we’re looking to see now is just how well the tubby 520mm2 die performs today.

Can the GTX 580 and its 512 CUDA cores clocked at 772MHz handle modern games? Sure, it has GDDR5 memory on a rather wide 384-bit bus pumping out an impressive 192GB/s, but there's only 1.5GB of it, at least on most models. There were 3GB variants floating around, but most people picked up the original 1.5GB version.

To see how it handles, I'm going to compare it with the Radeon HD 7950 along with the more recently released GeForce GTX 1050, 1050 Ti and Radeon RX 560. Let's check out those results before taking a gander at some actual gameplay.

Battlefield 1, Dawn of War III, DiRT 4, For Honor

First up we have the Battlefield 1 results using the ultra quality preset, which as it turns out is a bad choice for the GTX 580 and its measly 1.5GB frame buffer. Don't worry though; after we go over all the graphs I'll show some gameplay performance using more agreeable settings. For now, we can see that the 580 simply isn't cutting it here, failing to deliver what I would call playable performance while coming in a little over 30% slower than the Radeon HD 7950.

Next up we have Dawn of War III, which was tested using the more appropriate medium quality preset. Here we see a reasonable 41fps on average, but again, the GTX 580 is haunted by that limited VRAM buffer as the minimum frame rate drops down to just 23fps. This meant stuttering was an issue.

Dirt 4 was also tested using medium quality settings, and here the experience with the GTX 580 was quite good: certainly playable and, for the most part, very smooth at 1080p.

Moving on, we have For Honor, where we found a mostly playable experience, if that makes sense. At times we dropped below 30fps, which made gameplay noticeably more choppy than on, say, the RX 560 or HD 7950. It's also interesting to note that the GTX 1050 absolutely obliterates 2010's flagship part.

Testing GeForce GTX 480 vs GTX 580 for Gaming in Stereo 3D Mode



A few days ago I told you that I finally managed to get my hands on the GeForce GTX 580, the new top-model GPU from Nvidia, and I was quite happy to find that the full-cover water-cooling blocks from the older GTX 480 fit nicely on the GTX 580. This was enough to convince me to upgrade to 580s, so I used the opportunity to do some testing and compare the performance of the GTX 480 and GTX 580 in single and dual (SLI) configurations. There are plenty of reviews out there that have already done that, but not quite the way I wanted to test: in stereoscopic 3D mode, in some of the more recent, popular and demanding games. So I chose 5 different games and started testing. I was eager to try the GTX 580, which is why I didn't go for a lot of games; I was also having trouble finding games that do not max out at 60 frames per eye with SLI in stereo 3D mode.

Below you will find the results from the five games I tested with, including Metro 2033, which is still one of the heaviest games, especially if you want to play it with maximum details in stereo 3D mode, as you will see from the charts. There are four test scenarios in the charts: single GTX 480, single GTX 580, dual GTX 480 in SLI, and dual GTX 580 in SLI, so you can compare not only single-card performance but also how each card scales in a dual-GPU configuration.

The frame rates listed in the charts are per eye, with 60 fps being the maximum per eye because of the V-sync that must be forced in stereo 3D mode. So if you see 60 fps in a chart, that actually means a 120 fps average framerate across both eyes, and since you cannot go any further, an average that maxes out at 60 fps per eye simply means the system could supply even higher fps. All tests were done at 1920×1080 with maximum detail levels and some AA/AF enabled where the specific game supports it; these settings are also noted in the charts.

You may notice that the gap between single-card and dual-card configurations varies considerably, with the single cards at times showing a much bigger difference. The reason is simple: because of V-sync in stereo 3D mode the top framerate is capped, so performance differences become less apparent in games where the average is close to 60 fps per eye. That is clearly visible in the SLI results for the first four games, so you should use the Metro 2033 results to judge SLI performance and how well dual GTX 480s scale compared to dual GTX 580s. Also remember that a 5-frame difference in a chart is actually a 10-frame difference, because the chart lists only the framerate achieved per eye and the actual fps is doubled, since in stereo 3D mode both eyes see different frames.
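Because both eyes receive distinct frames and V-sync caps each eye at 60 fps, the per-eye numbers in the charts understate total rendering work. A small helper sketching the article's convention:

```python
STEREO_CAP_PER_EYE = 60  # forced V-sync limit per eye in stereo 3D mode

def total_fps(per_eye_fps):
    # Each eye gets its own frame, so the GPU renders twice the per-eye rate.
    return per_eye_fps * 2

def is_capped(per_eye_fps):
    # At the cap, the system could likely supply even more frames than shown.
    return per_eye_fps >= STEREO_CAP_PER_EYE

print(total_fps(60), is_capped(60))  # 120 True  -> real throughput is at least 120 fps
print(total_fps(20), is_capped(20))  # 40 False  -> the Metro 2033-style worst case
```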



I'm starting with Battlefield: Bad Company 2, a game that is still a bit challenging for single-card configurations in stereo 3D mode, as you can see from the results; with dual-GPU configurations, however, the average almost hits 60 fps per eye. As I already mentioned, this is not good for comparing the scaling of the dual-GPU configurations, although you can still compare the single vs dual results. The difference between the GTX 480 and GTX 580 is about 18%, and the gain from a single GTX 480 to dual GTX 480s almost hits 50%.



Moving to Formula 1 2010, a quite demanding game for single-GPU configurations in stereo 3D mode, but not a challenge for an SLI setup even with the GTX 480, as the game hits a 60 fps per eye average in both SLI configurations. This pretty much means that with two GTX 480s or two GTX 580s in SLI the minimum framerate barely drops below 60 fps per eye, so comparing the two SLI setups in F1 is pointless, as the difference cannot be measured in stereo 3D mode. In single-GPU configurations, the GTX 480 is just about 7% slower than the GTX 580.



The next game in line, Fallout: New Vegas, behaves much like F1 2010: just about an 8% difference between the single-GPU configurations, in favor of the GTX 580 of course, and an almost 60 fps per eye average for both SLI configurations.



Mafia II is another quite demanding game that can be a tough nut to crack for a single GPU in stereo 3D mode; in SLI with dual GPUs, however, it again almost reaches the 60 fps per eye average, which makes it hard to compare the 480 and 580 SLI setups. There is around a 14% difference between the GTX 480 and GTX 580, in favor of the latter of course.



And finally, the Metro 2033 results. As I've already mentioned, the game can stress even two GTX 580s in an SLI setup well enough, and it is even more demanding for single-GPU configurations in stereo 3D mode. The average fps of a single GTX 480 can make the game uncomfortable to play in stereo 3D at times, with the framerate dropping to about 20 fps per eye at maximum detail levels, and although the GTX 580 performs better, you should go for an SLI setup if you want to play this game in stereo 3D mode with everything maxed out. The single GTX 580 is about 18% faster than the GTX 480, but going from a single GTX 480 to dual GTX 480s in SLI gains you not only a more comfortable framerate but about 70% scaling. The situation going from a single to dual GTX 580s is similar, with about a 59% improvement in framerate with two cards in SLI, while GTX 480 SLI versus GTX 580 SLI shows about a 10% difference in performance.

So, depending on the game, when playing in stereo 3D mode you can get between 7% and 18% faster performance, or an average of about 13% across the five tested games, from the GTX 580 over the GTX 480, so there is actually a point in upgrading a single GTX 480 to a GTX 580. The scaling from a single to dual GTX 480s or GTX 580s is also quite nice, so on a more limited upgrade budget you can also consider adding a second GTX 480 for an SLI setup instead of replacing the single GTX 480 with a single GTX 580. Of course, if your upgrade budget is not limited, then going for two GeForce GTX 580s in SLI and slightly overclocking them should solve your problem even with Metro 2033 in stereo 3D mode, and will make sure you are absolutely ready for the upcoming Crysis 2. And now I’m going to overclock the dual GTX 580s to see how far I can push the framerate in Metro 2033 beyond the above results, which were achieved with all of the video cards running at their stock, non-overclocked parameters.
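The speedup and scaling figures above are simple framerate ratios. As a quick sketch of the arithmetic in Python: the roughly 20 fps per eye for a single GTX 480 comes from the Metro 2033 results above, while the other framerates are hypothetical values chosen to match the quoted percentages:

```python
def speedup_pct(faster_fps: float, slower_fps: float) -> float:
    """Percentage by which one card outpaces another."""
    return (faster_fps / slower_fps - 1.0) * 100.0

def sli_scaling_pct(single_fps: float, dual_fps: float) -> float:
    """Framerate gained by adding a second card in SLI, as a percentage."""
    return (dual_fps / single_fps - 1.0) * 100.0

# Metro 2033-style per-eye framerates (hypothetical except the ~20 fps single GTX 480)
single_480 = 20.0
single_580 = 23.6                    # single GTX 580 ~18% faster
dual_480 = single_480 * 1.70         # ~70% SLI scaling
dual_580 = single_580 * 1.59         # ~59% SLI scaling

print(round(speedup_pct(single_580, single_480)))    # 18
print(round(sli_scaling_pct(single_480, dual_480)))  # 70
print(round(speedup_pct(dual_580, dual_480)))        # 10, the SLI-vs-SLI gap
```

Even at 70% scaling, dual GTX 480s only reach about 34 fps per eye in this sketch, which is why overclocked GTX 580s in SLI remain the safer bet for maxed-out Metro 2033 in stereo 3D.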

Tags: 3d vision·Gaming in Stereo 3D Mode·geforce gtx 480·GeForce GTX 580·GTX 480 Benchmark·GTX 480 vs GTX 580·GTX 580 Benchmark·SLI in Stereo 3D·Stereo 3D Benchmark·Stereo 3D Game Benchmarks

The Shader Difference — GeForce GTX 580 at GTX 480 Clocks

What would’ve happened if the GTX 480 had launched at the same clocks, but with the 512 Shaders we thought it would have?

Published Dec 9, 2010 5:22 AM CST   |   Updated Tue, Nov 3 2020 7:02 PM CST

Introduction & Underclocking

Introduction

Testing the difference that Shaders make isn’t something we normally do or get the chance to do very often. The launch of the GTX 480, though, was full of so much drama in relation to yields and its performance.

The biggest change that seemed to happen right in front of our eyes was the shift from the 512 Shaders we thought the card would ship with to 480 Shaders. Sure, we knew it was going to cause a performance hit, but how much of one? The decreased number of Shaders was only one of the issues with the GTX 480, though, the others being heat, noise and power draw.

While the latter didn’t bother me, the first two were real issues; first we saw Galaxy attack the heat and noise with an awesome looking triple fan, triple slot cooler. We then saw GIGABYTE and MSI attack the model; in the end the GTX 480 looked like a great product and to be honest it still is, especially with some of the bargain prices it can be grabbed for now from some places.

We wonder, though, what would’ve happened if the GTX 580 we looked at today launched as the GTX 480 in March; the same clocks, but the new cooler and more importantly the 512 Shaders that we had hoped to have initially.

Underclocking

Something a bit different this time is the fact that we need to underclock our card here. Before we talk about that, let’s cover the similarities. Both cards carry 48 ROPs and 1536MB of GDDR5 on a 384Bit bus, and both are built on a 40nm core.

The two big differences are the Shaders and the clocks. The GTX 480 offers 480 Shaders while the GTX 580 offers 512; a number we had always hoped the GTX 480 would carry.

The other difference is the clock speeds. The core clock on the GTX 580 is 772MHz versus 701MHz, which makes the Shader clocks 1544MHz and 1401MHz. As for the memory, the difference there is 4008MHz QDR versus 3696MHz QDR.

So, with Afterburner we’ve pushed our GTX 580 clocks down to come in line with the GTX 480. This will let us know just what those extra 32 Shaders do for performance without any other factors coming into play.
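The gap those extra Shaders open up can also be estimated on paper. A minimal sketch, assuming the usual two floating-point operations per shader per clock (multiply-add) and the 384Bit bus both cards share:

```python
def tflops(shaders: int, shader_clock_mhz: float) -> float:
    """Theoretical single-precision throughput: 2 FLOPs per shader per clock."""
    return shaders * shader_clock_mhz * 2 / 1e6

def bandwidth_gbs(bus_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth: bus width in bytes times effective data rate."""
    return bus_bits / 8 * effective_clock_mhz / 1e3

print(round(tflops(480, 1401), 2))         # GTX 480: 1.34 TFLOPS
print(round(tflops(512, 1544), 2))         # GTX 580: 1.58 TFLOPS
print(round(tflops(512, 1401), 2))         # GTX 580 at GTX 480 clocks: 1.43 TFLOPS
print(round(bandwidth_gbs(384, 3696), 1))  # GTX 480: 177.4 GB/s
print(round(bandwidth_gbs(384, 4008), 1))  # GTX 580: 192.4 GB/s
```

At matched clocks the 512-Shader part is about 7% ahead on paper (1.43 versus 1.34 TFLOPS), so that is roughly the ceiling the extra Shaders alone can deliver in shader-bound tests.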

Test System Setup and 3DMark Vantage

We would like to thank the following companies for supplying and supporting us with our test system hardware and equipment: Intel, ASRock, Kingston, Mittoni, Noctua and Corsair.

Because we just wanted to keep it simple today, we’ve just got the GIGABYTE GTX 480 and MSI GTX 580 included in our results. The GTX 580 is featured twice; once at default clocks and the other clocked down to GTX 480 speeds. Of course, we’ve then also got the GTX 480 at its stock speeds.

Let’s get started!

3DMark Vantage

Version and / or Patch Used: 1.0.1
Developer Homepage: http://www.futuremark.com
Product Homepage: http://www.futuremark.com/products/3dmarkvantage/

3DMark Vantage is the new industry standard PC gaming performance benchmark from Futuremark, newly designed for Windows Vista and DirectX10. It includes two new graphics tests, two new CPU tests, several new feature tests, and support for the latest hardware.

3DMark Vantage is based on a completely new rendering engine, developed specifically to take full advantage of DirectX10, the new graphics API from Microsoft.

Straight away under Vantage you can see the kind of performance boost we get with those extra Shaders, even though the cards are clocked at exactly the same speed.

Unigine Heaven Benchmark

Version and / or Patch Used: 2
Developer Homepage: http://www.unigine.com
Product Homepage: http://unigine.com/press-releases/091022-heaven_benchmark//

This benchmark unleashes the DirectX 11 potential wrapped in impressively towering graphics. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and with the interactive mode, an immersive experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extent and exhibiting the possibilities of enriching 3D gaming.

You can see here that at the higher resolution the GTX 580 clocked down sits almost straight in the middle of the two cards.

Benchmarks — Resident Evil 5

Resident Evil 5

Version and / or Patch Used: Demo Benchmark
Developer Homepage: www.residentevil.com/
Product Homepage: http://www.residentevil.com/

Resident Evil 5 is a survival horror video game developed and published by Capcom. The game is the seventh installment in the Resident Evil survival horror series, and the PC version was released on September 18, 2009. Resident Evil 5 revolves around Chris Redfield and Sheva Alomar as they investigate a terrorist threat in Kijuju, a fictional town in Africa.

When we get into the real world gains you can see that the extra Shaders indeed help boost overall performance, but that extra clock really helps take the GTX 580 to the next level.

Benchmarks — Tom Clancy’s H.A.W.X.

Tom Clancy’s H.A.W.X.

Version and / or Patch Used: Benchmark Demo
Timedemo or Level Used: Built-in Test
Developer Homepage: http://www.ubi.com/UK/default.aspx
Product Homepage: http://www.hawxgame.com/

Tom Clancy’s H.A.W.X is an arcade-style flight simulator video game developed by Ubisoft Romania and published by Ubisoft for Microsoft Windows, Xbox 360, PlayStation 3, and iPhone OS.

The fundamental gameplay mechanics are similar to those of other console-based flight series. Players take on enemies with over 50 aircraft available. Each mission is at real world locations in environments created with commercial satellite data. A cockpit, first person, and third person view are selectable. The third person view gives the player an external view of both their plane and the target.

The game is set in the skies of a near-future world increasingly dependent on private military companies and their elite mercenaries, who have a relaxed view of the law. As these non-governmental organizations gain power, global conflict erupts, with one powerful PMC attacking the United States.

We again see a nice little boost from those Shaders. Under a game like this, though, we’ve already got good numbers and while we see an even larger boost when the stock GTX 580 clocks come into it, the overall gaming experience doesn’t change.

Benchmarks — Mafia II

Mafia II

Version and / or Patch Used: Latest Steam Update
Timedemo or Level Used: Built in Benchmark
Developer Homepage: http://www.2kczech.com/
Product Homepage: http://www.mafia2game.com/

Mafia II is a third-person action-adventure video game, the sequel to Mafia: The City of Lost Heaven. It is developed by 2K Czech, previously known as Illusion Softworks, and is published by 2K Games. The game is set from 1943 to 1951 in Empire Bay (the name is a reference to New York’s state nickname, "The Empire State"), a fictional city based on San Francisco and New York City, with influences from Chicago and Detroit. The game features a completely open-ended game map of 10 square miles, with no restrictions from the start of the game. There are around 50 vehicles in the game, as well as licensed music from the era.

This is a good test here; you can see that even with the extra Shaders we’re just falling short of that 60 FPS mark we want at the highest resolution. Once we add some extra MHz onto those 512 Shaders, though, we can see the GTX 580 is able to break the 60 FPS mark at 2560 x 1600 with no issue.

Benchmarks — Lost Planet 2

Lost Planet 2

Version and / or Patch Used: Benchmark Demo
Timedemo or Level Used: Built in Benchmark — Test A Scene 1
Developer Homepage: http://www.capcom.com/
Product Homepage: http://www.lostplanet2game.com/

Lost Planet 2 is a third-person shooter video game developed and published by Capcom. The game is the sequel to Lost Planet: Extreme Condition, also made by Capcom, and takes place ten years after the events of the first game, on the same fictional planet. The snow has melted to reveal jungles and more tropical areas that have taken the place of the frozen regions. The plot begins with mercenaries fighting against jungle pirates. After destroying a mine, the mercenaries continue on to evacuate the area, at which point a Category-G Akrid appears and attacks them. After being rescued, they find out their evacuation point (where the Category-G appeared) was a set-up and no pick-up team awaited them. The last words imply possible DLC additions to the game: "There’s nothing to be gained by wiping out snow pirates… unless you had some kind of grudge."

The extra Shaders manage to give us a nice boost at 2560 x 1600 which equates to 7 FPS. When you throw those extra MHz into the mix, that becomes a 12 FPS gain over the stock GTX 480 which is very impressive. Unfortunately we’re still below that 60 FPS number we want to see.

Benchmarks — Aliens vs. Predator

Aliens vs. Predator

Version and / or Patch Used: Standalone Benchmark
Timedemo or Level Used: Built in Benchmark
Developer Homepage: http://www.rebellion.co.uk/
Product Homepage: http://www.sega.com/games/aliens-vs-predator/

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion Developments, the team behind the 1999 original PC game, and published by Sega for Microsoft Windows, the PlayStation 3 and the Xbox 360. The game is based on the Alien vs. Predator franchise, a combination of the characters and creatures of the Alien franchise and the Predator franchise. There are three campaigns in the game, one for each race/faction (the Predators, the Aliens and the Colonial Marines), that, while separate in terms of individual plot and gameplay, form one overarching storyline.

Following the storyline of the campaign modes comes the multiplayer aspect of the game. In this multiplayer section, players face off in a variety of different game types.

Like Mafia II this is another interesting test; at 1920 x 1200 you can see the GTX 480 just misses out on that 60 FPS mark we aim for. Add in the extra Shaders and the 5 FPS boost takes us past that number. Of course, we add more MHz into the mix and we’ve all of a sudden gone from under 60 FPS to a nice rounded 70 FPS.

Benchmarks — Final Fantasy XIV

Final Fantasy XIV

Version and / or Patch Used: Standalone Benchmark
Timedemo or Level Used: Built in Benchmark — Elezen (Male)
Developer Homepage: http://www.square-enix.com/
Product Homepage: http://www.finalfantasyxiv.com/

Final Fantasy XIV, also known as Final Fantasy XIV Online, is the fourteenth installment in the Final Fantasy series. The game is a massively multiplayer online role-playing game developed and published by Square Enix. The game takes place in a land called Hydaelyn, mainly in a region named Eorzea, which has a contemporaneous aesthetic blend of science fiction and classic fantasy elements.

The battle and job systems are different from those previously used in Final Fantasy XI, which utilized experience points and level-based progression. Final Fantasy XIV is designed around a skill-based progression system similar to that of Final Fantasy II. Character races resemble, and allow players to create avatars similar to, the ones in Final Fantasy XI. Group play has been de-emphasized, and solo and group play are now balanced. Weapon use alters "character development".

Note: Final Fantasy XIV gives us a score and not a normal FPS rating; our understanding is that anything around 2000 points or above is considered playable.

There’s a clear difference under all three setups here, with those extra Shaders offering about a 10% performance increase at the higher resolution.

Benchmarks — Street Fighter IV

Street Fighter IV

Version and / or Patch Used: Standalone Benchmark
Timedemo or Level Used: Built in Benchmark
Developer Homepage: http://www.capcom.com/
Product Homepage: http://www.streetfighter.com/

While Street Fighter IV features models and backgrounds rendered in 3D, the gameplay remains on a traditional 2D plane, with the camera having freedom to move in 3D at certain times during fights, for dramatic effect. Producer Yoshinori Ono has stated that he wanted to keep the game closer to Street Fighter II. A new system called "Focus Attacks" ("Saving Attack" in the Japanese version) has been introduced, as well as Ultra moves. The traditional six-button control scheme returns, with new features and special moves integrated into the input system, mixing classic gameplay with additional innovations.

All the characters and environments in Street Fighter IV are rendered as 3D models with polygons, similar to the Street Fighter EX sub-series Capcom produced with Arika. However, there are a couple of key differences. Art director and character designer Daigo Ikeno, who previously worked on Street Fighter III 3rd Strike, opted for non-photorealistic rendering to give them a hand-drawn look, with visual effects accented in calligraphic strokes, ink smudges and ink sprays during the fights.

We already see some massive numbers under SF IV, but you can see the performance boost with the extra Shaders.

Benchmarks — Far Cry 2

Far Cry 2

Version and / or Patch Used: 1.01
Timedemo or Level Used: Ranch Long
Developer Homepage: http://www.ubi.com/
Product Homepage: http://www.farcry2.com/

The Dunia Engine was built specifically for Far Cry 2 by the award-winning Ubisoft Montreal development team. It delivers the most realistic destructible environments, amazing special effects such as dynamic fire propagation and storm effects, real-time night-and-day cycle, dynamic music system, non-scripted enemy A.I. and so much more.

FC2 numbers are a little all over the place when it comes to minimums. We can see the averages move as you would expect, though, as we add more Shaders and then more speed to the core and memory.

Benchmarks — Batman Arkham Asylum

Batman Arkham Asylum

Version and / or Patch Used: 1.1
Timedemo or Level Used: Built-in Test
Developer Homepage: http://www.batmanarkhamasylum.com/
Product Homepage: http://www.batmanarkhamasylum.com/

Batman: Arkham Asylum exposes players to a unique, dark and atmospheric adventure that takes them to the depths of Arkham Asylum — Gotham’s psychiatric hospital for the criminally insane. Gamers will move in the shadows, instigate fear amongst their enemies and confront The Joker and Gotham City’s most notorious villains who have taken over the asylum.

Using a wide range of Batman’s gadgets and abilities, players will become the invisible predator and attempt to foil The Joker’s demented scheme.
Batman: Arkham Asylum features an original story penned exclusively for the game by famous Batman author and five-time Emmy award winner Paul Dini, whose credits include Lost season one and Batman: The Animated Series.

Batman AA numbers go a little all over the place, but for the most part you continue to see what we’ve seen in all our other tests.

Benchmarks — High Quality AA and AF

High Quality AA and AF

Our high quality tests let us separate the men from the boys and the ladies from the girls. If the cards weren’t struggling before they will start to now.

Mafia II again sees that with those extra Shaders on offer we’re able to break into the 60 FPS mark. The rest of our other tests don’t really show too many surprises.

Benchmarks — PhysX Tests

PhysX Tests

Here we find out what kind of frame rates we’re able to get when PhysX is turned on in games that support it. We always set PhysX to the highest possible in-game setting while also keeping detail at its highest.

Similar to what we’ve seen all along, but when PhysX comes into the picture the differences aren’t as noticeable.

Temperature Test

Temperature Tests

The temperature of the core is pulled from MSI Afterburner, with the max reading used after a completed run of 3DMark Vantage at the Performance preset.

With this fan and chip setup the GTX 580 at GTX 480 clocks comes in at a nicer looking 76°C.

Sound Test

Sound Tests

Pulling out the TES 1350A Sound Level Meter we find ourselves quickly yelling into the top of it to see how loud we can be.

After five minutes of that we get a bit more serious and place the device two cm away from the fan on the card to find the maximum noise level of the card when idle (2D mode) and under load (3D mode).

Noise levels also look a whole lot more impressive with the card sitting a lot lower down the graph.

Power Consumption Tests

Using our new PROVA Power Analyzer WM-01, or "Power Thingy" as it has quickly become known to our readers, we are now able to find out what kind of power is being used by our test system and the associated graphics cards installed. Keep in mind, it tests the complete system (minus the LCD monitor, which is plugged directly into the AC wall socket).

There are a few important notes to remember, though; while our maximum power is taken in 3DMark06 at the exact same point, we have seen the power draw as much as 10% higher in particular tests. We test at the exact same stage every time, so the results should be very consistent and accurate.

The other thing to remember is that our test system is bare minimum: only an SSD is used, with a single CD ROM and minimal cooling fans.

So while the system might draw 400 watts in our test setup, placing the card into your own PC with a number of other items means the draw is going to be higher.

Power draw also comes in under 400 watts, which is nice.

Final Thoughts

The GTX 580 is very much a GTX 480 on steroids and while that doesn’t make it a bad card, it’s frustrating that NVIDIA chose to bring in a new series number for it when it probably wasn’t really deserved. Apart from the performance boost the GTX 580 offers over the GTX 480, there’s not really anything new coming to the table. Maybe if the new 500 series had the ability to run three monitors off the one card, implemented DisplayPort or added something more, then it could’ve been more justified, but just a performance boost doesn’t really seem to warrant it.

If the GTX 580 we’re looking at today had been the GTX 480 in March, we have to wonder what would’ve changed. The heat’s a lot better, the noise levels are more impressive and the performance at GTX 480 speeds but with the 512 Shaders is strong. There’s no denying that it would’ve impacted AMD a little, but if the card had launched in November ’09 like NVIDIA had hoped and expected it to, I think it’s safe to say that 2010 would’ve been a completely different year.

You can see that moving from 480 Shaders to 512 Shaders brings with it a really nice performance boost that can equate to 10% at the higher resolution at times. Throw in a mild overclock, which is effectively what the GTX 580’s reference clocks are, and you’ve got yourself some killer performance.

Get a little more wild with the OC like we did when we tested the GTX 580 Overclocked, and you see some massive performance gains at 900MHz over the competition.

NVIDIA has done a good job with the GTX 580 when it comes to overall performance. Earlier in the year they dealt with the issues that the GTX 480 presented them with, then released the GTX 465, GTX 460 and GTS 450, and in that time they managed to have the GTX 580 ready and, in the following days, the GTX 570.

The 500 series might only be a slight refresh when compared to the 400 series, but at the end of it all if NVIDIA can offer us more performance at a similar or cheaper cost, who are we to complain?

Sure, it would’ve been nice to see the GTX 480 launch with the 512 Shaders and these cooling and noise numbers, but it didn’t. We can’t change the past and NVIDIA knows that; instead they’ll work on the future. Their aim is to grab more market share than AMD, and with what we’re seeing here in late 2010, 2011 is going to be a very interesting year from both sides.


Shawn Baker

Shawn takes care of all of our video card reviews. Since 2009, Shawn has also taken care of our memory reviews, and since May 2011, our CPU, chipset and motherboard reviews as well. As of December 2011, Shawn is based out of Taipei, Taiwan.


    gtx 480 | Tags | Overclockers.ru

    golds

    July 10, 2021

    If this happens, it will be the first card in almost 10 years related to this

    series

    ddr22

    October 24, 2018

    12 flagship benchmarks: 8800 GTX, 8800 GT, 9800 GTX+, GTX 280, GTX 480, GTX 580, GTX 680, GTX 780 Ti, GTX 980, GTX 980 Ti, GTX 1080, GTX 1080 Ti

    Dmitry Vladimirovich

    October 19, 2015

    A lot of copies are broken due to old video cards and new drivers. And there is still no clarity on this issue. We decided to conduct a little research and for this purpose took the GeForce GTX 480 — the flagship of 2010, and several subsequent top models — the GTX 580, GTX 680 and GTX 780. Another no less relevant question is after how many years or generations of video cards does it make sense to upgrade ?

    Phoenix

    October 12, 2014

    Testing twenty-four NVIDIA GeForce GTX 4xx, GTX 5xx, GTX 6xx and GTX 7xx video cards in ten games, 1920×1080 resolution and two operating modes.

    Phoenix

    September 7, 2014

    Twelve graphics cards tested in ten games, 1920 x 1080 resolution, and four operating modes.

    Phoenix

    June 1, 2014

    Testing four processors and eight video cards in twelve games, one resolution and two operating modes.

    Phoenix

    May 25, 2014

    Testing four processors and eight video cards in twelve games, one resolution and two operating modes.

    Phoenix

    April 2, 2014

    Testing twenty-five video cards in twelve games, two operating modes and resolutions of 1920 x 1080 and 2560 x 1440.

    Dmitry Vladimirovich

    February 25, 2014

    Not even a day has passed since the announcement of the older model based on Maxwell, when video cards went on sale. This time, using the ASUS GeForce GTX 750 (GTX750-PHOC-1GD5) as an example, we will consider the younger solution — GTX 750. Without thinking twice, NVIDIA decided not to change the frequency characteristics of the new graphics processor, but to truncate the number of PS and texture units by exactly one GPC.

    Dmitry Vladimirovich

    February 18, 2014

    Official announcement of NVIDIA GeForce GTX 750 Ti graphics accelerator. Maxwell architecture and new model PCB features, overclocking, benchmarking with Radeon R7 240, R7 250, R7 260X, R7 265, GeForce GTX 480 and GTX 650 Ti Boost.

    Dmitry Vladimirovich

    February 10, 2014

    Let’s check how expedient it is to increase the speed of the graphics subsystem by adding a discrete video card to the Kaveri APU. For clarity, we will conduct tests both in joint mode and separately. The participants included the A10-7850K, A8-7600, Radeon R7 250 and R7 240. Let’s add to them the freestanding R7 260X, and for a change, we’ll take the top model of four years ago GeForce GTX 480.

    Phoenix

    July 2, 2013

    Summary testing of three generations of video cards in two resolutions and two operating modes.

    Phoenix

    May 12, 2013

    Reference information for AMD and Intel processors, and AMD and NVIDIA graphics cards.

    Phoenix

    September 5, 2012

    Author: Phoenix. Testing processors Intel Core i7-3770K, Core i5-3570K, Core i7-2600K, Core i5-2500K, Core i7-930, Core i7-860, Core i5-760, Core 2 Quad Q9500 in sixteen games, three resolutions and four operating modes.

    Phoenix

    August 29, 2012

    Author: Phoenix. Testing thirty-five video cards in sixteen games, four resolutions and two operating modes.

    Phoenix

    August 16, 2012

    Author: Phoenix. Testing twenty-eight processors in sixteen games, three resolutions and two operating modes.

    Phoenix

    July 30, 2012

    Author: Phoenix. AMD Radeon HD 7xx0, HD 6xx0, HD 5xx0 and NVIDIA GeForce GTX 6×0, GTX 5×0, GTX 4×0 against Crytek projects.

    Phoenix

    July 24, 2012

    Author: Phoenix. Testing processors Core i7-2600K, Core i5-2500K, Core i3-2120, Core i7-930, Core i7-860, Core i5-660 and Pentium G6960 in sixteen games and three resolutions.

    Phoenix

    May 5, 2012

    Author: Phoenix. Testing of processors Pentium G850, Pentium G840, Pentium G630, Pentium G620, Celeron G540, Celeron G530, Pentium G6960, Athlon II X3 455, Athlon II X3 440, Athlon II X2 265, Athlon II X2 250 and Athlon II X2 240, in sixteen games, three resolutions and two modes of operation.

    Phoenix

    April 26, 2012

    Author: Phoenix. Testing processors FX-8150 BE, FX-8120 BE, FX-6100 BE, Core i7-2600K, Core i5-2500K, Core i5-2300, Core i7-860, Core i5-760, Phenom II X6 1100T BE, Phenom II X4 965 BE, Phenom II X4 850 and Athlon II X4 640 in sixteen games, three resolutions and two operating modes.

    Phoenix

    March 24, 2012

    Author: Phoenix. Testing twenty-four video cards in sixteen games, four resolutions and two operating modes.

    Phoenix

    March 3, 2012

    Author: Phoenix. Testing processors FX-4100, Core i3-2120, Pentium G860, Pentium G630, Core i5-660, Core i3-560, Pentium G6960, Phenom II X4 965 BE, Phenom II X4 850, Phenom II X2 565 BE, Athlon II X4 640, Athlon II X3 440, Athlon II X2 265 in sixteen games, three resolutions and two operating modes.

    Phoenix

    February 20, 2012

    Author: Phoenix. Testing thirty-eight processors in sixteen games, three resolutions and two operating modes.

    Dmitry Vladimirovich

    December 2, 2011

    What happens if the line of GeForce GTX 560 video cards based on GF114 is expanded by adding a graphics accelerator with the GF110 core?

    Phoenix

    August 7, 2011

    Author: Phoenix

    Phoenix

    July 17, 2011

    Author: Phoenix

    [Viru$]

    July 7, 2011

    We install the following processor coolers on the GeForce GTX 550 Ti GS, GTX 460 and GTX 480: Scythe Samurai ZZ, Zalman CPNS-7000AlCu and a boxed cooler from the Intel Q6600 processor.

    Phoenix

    July 3, 2011

    Author: Phoenix

    Phoenix

    June 28, 2011

    Author: Phoenix

    Dmitry Vladimirovich

    June 17, 2011

    Overclocking MSI GTX 580 Lightning with freon and comparison of two generations of Lightnings — GTX 480 and GTX 580. Testing in synthetic and gaming applications.

    [Viru$]

    June 10, 2011

    We continue our practical exercises. MSI AfterBurner, soft overclocking and stability testing with Inno3D GeForce GTX 480 iChiLL, Inno3d GeForce GTX 460, Gainward GTX 550 Ti GS, AMD Radeon HD 6870 and AMD Radeon HD 5850.

    Phoenix

    May 22, 2011

    Author: Phoenix

    Phoenix

    May 15, 2011

    Author: Phoenix

    Phoenix

    May 8, 2011

    Author: Phoenix

    Phoenix

    May 1, 2011

    Author: Phoenix

    Dmitry Vladimirovich

    March 24, 2011

    Testing the new NVIDIA flagship, the GeForce GTX 590. Comparison with the GTX 480, GTX 580 and Radeon HD 6990.

    Dmitry Vladimirovich

    March 8, 2011

    The Radeon HD 6990 is AMD’s new dual processor graphics accelerator. Review and comparison with HD 5970, HD 6970, HD 6950, HD 6870, HD 6850, GTX 580, GTX 570, GTX 560 Ti, GTX 480, GTX 470, GTX 465, GTX 460.

    wildchaser

    March 3, 2011

    Two «frostbitten» graphics accelerators based on video processors from different manufacturers. Review, overclocking, comparison with Gainward GTX570 GLH and Zotac GeForce GTX 480 AMP!.

    Review and test NVIDIA GeForce GTX 580

    Released in March 2010, the NVIDIA GeForce GTX 480 was described as the fastest, hottest and most expensive graphics card of its time. Competing with the ATI Radeon HD 5870 made limited sense: the card's main advantage was its speed, a 15-20% lead, but the high price did not justify the difference. Several factors shaped the Fermi architecture. Chief among them was its orientation toward professional, compute-heavy workloads: the added complexity meant more transistors in the GPU and a larger die, which brought production difficulties, higher cost, high power consumption and high heat output. Still, from a marketing standpoint the GeForce GTX 480 undeniably earned high consumer ratings and was considered the fastest single-chip video card.

    Today the manufacturer is releasing the new GTX 580, a successor that promises to consolidate that success and eliminate some of the shortcomings. Even a first look at the newcomer makes it clear that this is a refined version of the previous graphics generation. That is no reproach: AMD did exactly the same when it released the upgraded, cheaper Radeon HD 6800 cards under the guise of new designs. For many current games the performance of existing cards is already sufficient, so the extra speed matters mainly at the highest resolutions and with 3D Vision, which increases the GPU load. According to experts, the number of resource-intensive gaming projects will shrink, and most titles will be developed with an emphasis on consoles, whose performance lags well behind the PC.

    Architecture

    The new GeForce GTX 580 is based on the GF110 GPU, which contains 3 billion transistors. Notably, the process lines at TSMC, the semiconductor foundry that manufactures graphics processors for both NVIDIA and AMD, have been significantly refined, so the new chip improves on its predecessor in power consumption and heat output.

    The GF110 combines the overall structure of its predecessor with the refinements accumulated over the year. Unlike the GF100 in the GeForce GTX 480, all blocks are active in the updated processor: 512 flexible and efficient shader processors, 64 texture units and 16 PolyMorph Engine units. The other characteristics are untouched: 48 rasterization units, 768 KB of L2 cache, and six 64-bit memory controllers forming a 384-bit bus; the newcomer again carries 1.5 GB of memory. From all of the above, the new GTX 580 may not seem far removed from the former leader, but there are further innovations. One is higher clock speeds: the GPU now runs at 772 MHz, the shader domain at 1544 MHz, and the memory clock has risen to 1002 MHz (4008 MHz effective).
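    The memory figures quoted above determine the card's peak bandwidth directly. As a quick sanity check, here is a minimal sketch of the arithmetic (the helper name is mine; the clocks and bus width come from the specifications above):

```python
def memory_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    # Bytes per transfer (bus width / 8) times transfers per second,
    # expressed in GB/s (1 GB/s = 1e9 bytes/s).
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# GTX 580: 384-bit bus, GDDR5 at 1002 MHz (4008 MHz effective)
print(round(memory_bandwidth_gbs(384, 4008), 1))  # 192.4
# GTX 480: 384-bit bus, 924 MHz (3696 MHz effective)
print(round(memory_bandwidth_gbs(384, 3696), 1))  # 177.4
```

    Both results match the official 192.4 GB/s and 177.4 GB/s figures.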

    The second and most important innovation is the updated texture units, first seen in the GTX 460. They filter FP16 textures, widely used in modern games, at twice the rate, so HDR rendering runs at high speed without losing performance. Another important improvement is a redesigned pixel-culling algorithm (Z-cull), which determines hidden parts of the scene more accurately and excludes them from processing, freeing up resources. The manufacturer claims an 8% gain from this operation alone, a significant contribution to performance.

    Fixing the flaws

    Naturally, the manufacturer paid special attention to the Fermi traits that drew complaints: power consumption, heat output and noise. Every GF110 functional block was reworked with the flaws found in GF100 in mind, so the GPU now leaks less current and therefore runs cooler. The power-delivery system was also revised, both in component selection and in management: it now includes three sensors that measure the current drawn by the card. If the driver detects an abnormal rise in current while a stress utility is running, it lowers the clock speeds to stay within the declared TDP.

    Comparisons with competitors are perhaps ungracious, but NVIDIA's logic is easy to follow: the very highest power-consumption figures are produced by stress utilities, which most users never run and whose load modern games never approach. A similar situation exists with CPU thermal ratings: AMD quotes an average consumption figure (ACP), while Intel quotes the maximum allowable TDP. Note, however, that the throttling algorithm applies only to the stress applications mentioned above.

    The cooling system has changed significantly as well: instead of heat pipes, the heatsink uses a vapor chamber spanning the entire area of its base, topped with thin aluminum fins, while the familiar centrifugal fan exhausts hot air out of the case. Fan control and thermal management have also been upgraded: the characteristic sharp jumps in fan speed, and the accompanying noise, are smoothed into a gradual ramp. At maximum load the card is clearly audible, but not enough to disturb comfortable work. The thermals have improved markedly too: even under the heaviest load with the frequency-throttling algorithm disabled, the GeForce GTX 580 does not reach 90 °C.

    Test results

    Testing the GTX 580 reveals a collection of minor improvements that together yield a significant speed-up. The new model's advantages are higher texturing performance (11-28%), faster tessellation and increased clocks. Where the GTX 480 was merely the fastest single-chip card, with the dual-GPU Radeon HD 5970 well ahead of it, the new NVIDIA product can now safely be said to occupy all the leading positions. The AMD card outruns the GTX 580 only in 3DMark Vantage and Aliens vs. Predator; in the remaining tests the new Fermi edges out the HD 5970 by a small margin, and given that only the first driver release is out, the gap can be expected to widen. Notably, power consumption in heavy games has dropped by 30-35 watts. Under the heaviest FurMark load with the software current limit disabled, the GTX 580 draws about the same as its predecessor, which is remarkable given the extra functional blocks and higher clocks, and speaks well of the optimization of the core and the power system.

    Conclusions

    The GTX 580's power consumption is significantly reduced, and the card is genuinely fast. It is too early to speak of dethroning the Radeon HD 5970, but the two are nearly on a par, and once ease of use is factored in, the GeForce GTX 580 clearly takes the top spot. Comparing the newest products from NVIDIA and AMD makes little sense for now: the cards differ in price, but they differ even more in performance.

    NVIDIA GeForce GTX 580 Pricing News — WORLD NVIDIA

    GeForce GTX 580 price tags have already appeared online from ASUS, PNY, MSI and EVGA.

    So, the price tag for the ASUS GeForce GTX 580 has already slipped onto Amazon: the ASUS ENGTX580/2DI/1536MD5 was listed at $519 there; however, after the online media publicized the listing, it was quickly pulled.

    However, a holy place is never empty, and now one of the American retailers has published the price of the EVGA GeForce GTX 580. The price is divine: only $499, shipping on the 9th (i.e. it can be expected tonight).

    Canadians are not far behind the Americans: their retailers are already selling the PNY GeForce GTX 580 for 566 and the MSI GeForce GTX 580 for 580 Canadian dollars.

    So it looks like the ice has broken…


    Fudzilla

    Comparison with Radeon HD 5870, 6870 and GTX 480 plus SLI vs Crossfire.

    Information about the upcoming NVIDIA flagship GeForce GTX 580 continues to circulate online. This time an Italian IT resource has produced slides, allegedly from an NVIDIA presentation, containing carefully curated results of comparative benchmarks pitting the GTX 580 against other video cards from both NVIDIA and AMD.

    The other day we have already published data on the performance of the new GeForce, so those who follow the news will be interested to see the «second series»:

    According to the slides, power draw will be lower than that of the GTX 480. This is even harder to believe than such a total lead over competitors' solutions.


    News4it

    Faster than the GeForce GTX 480, but not fast enough to compete with future AMD solutions.

    The Chinese site PCOnline.com managed to get the performance results of the upcoming GeForce GTX 580 in comparison with the GeForce GTX 480 and AMD Radeon HD 5870. Testing was carried out in 18 games and benchmarks.

    In places (the DirectX 11 Stone Giant and Heaven benchmarks) the GeForce GTX 580 showed more than a twofold advantage over the Radeon HD 5870, while the gain over the GTX 480 was in most cases no more than 15%. In DX9/10 applications, however, the GTX 580 is unlikely to keep the leader's crown once AMD's Radeon HD 6990/6970 solutions arrive.

    Some details have also emerged about the rest of the GTX 580's characteristics. The presence of 128 TMUs in the chip has been added to the known information, which again raises the question of whether the cards are based on GF100 or GF110, since a fully enabled GF100 has only 64 texture units. The “official” TDP is now known as well: 244 W, slightly below the 250 W of the GTX 480. In applications like FurMark, however, the card will still pull around 300 W. Various astrologers and seers place the official announcement around November 8-9.

    As always, such information should be taken with skepticism until sample cards reach reviewers at trustworthy resources.


    VR-Zone

    In addition to the specifications, the release date of NVIDIA's new GTX 580 has also surfaced.

    Previously leaked specifications have now received confirmation. They include:

    • GPU frequency 772 MHz;
    • shader frequency 1544 MHz;
    • GDDR5 memory frequency 2004 MHz;
    • video memory size 1536 MB (1544 MB according to other sources);
    • memory interface width 384 bits;
    • memory bandwidth 192.4 GB/s;
    • 512 CUDA cores;
    • board TDP 244 W.

    In addition, many sources report that journalists have begun receiving card samples, and the exact release date is named as Tuesday, November 9. If the latter is true, in a week we will see the «real» GF100, ready to counter the expansion of AMD's Radeon HD 6970.


    NVIDIA is reportedly refreshing the GeForce RTX 30 series, and at least two models will be available for purchase in the coming weeks. These will be mid-range graphics cards, starting with an RTX 3060 carrying 8 GB of VRAM, 4 GB less than the original model. The current model sells for $350, so the 8 GB version should cost around $300.

    NVIDIA GeForce RTX 3060

    In addition, NVIDIA is also preparing an RTX 3060 Ti with 19 Gb/s GDDR6X memory, similar to the RTX 3070 Ti. The current card has 448 GB/s of memory bandwidth, while the upgraded card will reach 608 GB/s, 36% more.
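    The 36% figure follows directly from the quoted data rates. A small sketch of the check (the helper name is mine; the 256-bit bus width of the RTX 3060 Ti is taken from its known specifications):

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # GDDR bandwidth: per-pin data rate (Gb/s) times bus width, over 8 bits/byte
    return bus_width_bits * data_rate_gbps / 8

current = bandwidth_gbs(256, 14)   # RTX 3060 Ti with 14 Gb/s GDDR6
upgraded = bandwidth_gbs(256, 19)  # rumored 19 Gb/s GDDR6X
print(current, upgraded)                      # 448.0 608.0
print(round((upgraded / current - 1) * 100))  # 36
```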

    As for the RTX 3070 Ti with GA102, there are no confirmations on it yet, however, such models are often released in local markets, for example, exclusively for China.


    Videocardz

    According to recent rumors, NVIDIA has completed the development of the specification for the RTX 4070 video card, while the company decided to create two versions of this video card at once.

    One will have 12 GB of GDDR6X video memory and 7680 CUDA cores, the other 10 GB and 7168 CUDA cores. In both cases the memory data rate will be 21 Gb/s. The senior version of the accelerator carries the board number PG141-SKU340/341. Besides 20% more video memory, it also gets a larger thermal package: 285 W versus 250 W for the junior version, PG141-SKU336/337.

    NVIDIA GeForce RTX

    Performance will obviously differ as well. The senior version should score about 11,000 points in 3DMark Time Spy Extreme and the junior about 10,000; in either case that is comparable to the RTX 3090.

    NVIDIA plans to introduce its new video cards in September during GTC, which implies a delayed RTX 40 series launch, caused by large stocks of RTX 30 cards in warehouses. Some observers note that NVIDIA may start selling the RTX 40 series in the fourth quarter of this year, with the top models arriving first.


    Obviously, the RTX 40 series of cards is still in development, and its specifications continue to change. This time there are rumors that the upcoming RTX 4080 will receive GDDR6X memory with a 23 Gb/s data rate instead of the expected 21 Gb/s, along with a reduction in power consumption from 420 W to 340 W. The information was spread by the well-known insider @kopite7kimi.

    The rest of the specifications remain unchanged: the AD103 GPU, 9728 CUDA cores and 16 GB of VRAM on a 256-bit bus.

    The RTX 4090 is also rumored to feature 23Gb/s memory, but a lot could change before release.


    Well known insider kopite7kimi has revealed that the upcoming RTX 4080 will have 9728 CUDA cores, down from the 10240 CUDA cores originally expected. The card is also said to have a TDP of 420W and be based on the AD103-300-A1 GPU. All this will be assembled on a PG136/139-SKU360 board with 16 GB of GDDR6X memory.

    NVIDIA RTX graphics card

    Thus, the number of CUDA cores will be cut by about 5%. This is unlikely to have a noticeable impact on performance: the RTX 4080 is still expected to score over 15,000 points in TimeSpy, against roughly 9,000 points for the top Ampere cards, which would be 66% more than the RTX 3090 Ti and 82% more than the RTX 3090. In other tests and games the result may differ, but this is a huge performance boost at 4K resolution.

    When it comes to ray tracing performance, this number remains the biggest mystery. The card is known to feature 16,384 CUDA cores, 52% more than the RTX 3090 Ti. At the same time, the increase will be provided not only by an increase in the number of cores, but also by an increase in frequency and power consumption.

    GPU-based cards codenamed Ada are expected to arrive in the fourth quarter of this year. However, according to the latest rumors, this year we will see only the RTX 4090 model.

    Fresh rumors have revealed the expected release order of the RTX 40 cards, as well as some features of their specifications.

    So, it is noted that the RTX 4090 and RTX 4080 will have much more in common than previously expected. Although they are based on different GPUs, AD102 and AD103, the circuit board will be the same, PG139; only the design revisions will differ: 330 for the RTX 4090 and 360 for the RTX 4080. It is not yet clear whether the GPUs will be pin-compatible, but it is already obvious that the boards will be shared.

    GeForce RTX 40

    NVIDIA has not announced RTX 4080 and RTX 4070 specifications at this time, for two reasons. Firstly, they are based on different GPUs, and secondly, the specifications have not yet been approved.

    In terms of timing, the NVIDIA GeForce RTX 4090 is now expected in August, the RTX 4080 in September, and the RTX 4070 in October, though these dates are not final. Manufacturing partners have a ton of RTX 30-series cards in stock, and obviously the last thing NVIDIA needs in these circumstances is to rush out the next generation.


    Videocardz

    Well-known insider Kopite7kimi has published fresh information that a new generation of video cards will be released in mid-July.

    NVIDIA's flagship GeForce RTX 4090 accelerator will contain 126 streaming multiprocessors, i.e. 16,128 CUDA cores. This is noticeably fewer than the previously expected 140-142 multiprocessors. The full AD102 chip contains 144 streaming multiprocessors, leaving 2,304 cores in reserve; the maximum configuration may well be saved for an RTX 4090 Ti.

    In addition, the insider talked about the heat output of the RTX 4090: the figure under discussion is now not 600 W but only 450 W TDP, while the top RTX 4090 Ti should run hotter. For comparison, the RTX 3090 Ti also has a 450 W TDP. In performance terms, the RTX 4090 is expected to be twice as fast as the RTX 3090, which consumes 350 W.

    In addition, Kopite reiterated its earlier claims that the RTX 4090 will have 24GB of GDDR6X video memory at 21Gb/s, which would mean 1TB/s of bandwidth on a 384-bit bus.

    The insider's final statement concerns timing: he reports that the GeForce RTX 40 will appear in mid-July, with the top-performance RTX 4090 solutions entering the market first.

    NVIDIA RTX Suprim package

    This card will also use a modified AD102-300 GPU. As a result, power consumption will rise to 600 watts, significantly more than the 450-500 watts consumed by the current flagship RTX 3090 Ti.

    It will be very interesting to know the performance of this accelerator, because changes in the architecture, along with an increase in power consumption, should give a sharp jump in speed.


    TweakTown

    At CES 2022, NVIDIA unveiled the new GeForce RTX 3090 Ti, a card that is faster than the RTX 3090, the flagship of the current Ampere line.

    The company promised that more information about the new product would be published at the end of the month, that is, January. But so far nothing has been heard about the RTX 3090 Ti.

    When asked by The Verge, NVIDIA spokeswoman Jen Andersson reported: «We don’t have any more information on the RTX 3090 Ti at this time, but we’ll be in touch when it becomes available» .

    Video card NVIDIA GeForce RTX 3090 Ti

    The answer is frankly weak: the standard reply press secretaries use to paper over inaction. All this suggests that the RTX 3090 Ti is in trouble.

    In January, shortly after CES, NVIDIA reportedly asked card manufacturers to stop producing the RTX 3090 Ti. A blogger and insider on the Moore's Law Is Dead (MLID) channel also reported that, according to his data, the card has been «postponed indefinitely». He claims the reason is problems with the circuit board: with a power consumption of 450 W, building such a card proved a difficult task.


    Neowin

    The Turing-based NVIDIA GeForce MX550 graphics card is designed for thin multimedia laptops. This is a very interesting segment, as the card delivers performance comparable to CPU-integrated solutions, raising the question of whether a discrete graphics card is needed at all.

    The first test of the MX550 has appeared on the PassMark website. This card scored 5014 in the G3D Mark test, which is almost identical to the Vega 8 GPU built into the AMD Ryzen 9 5900HS.

    The AMD Ryzen 9 5900HS's integrated graphics score 4968 points in the same benchmark, only 0.9% less, well within statistical error. Of course, PassMark is not the most popular graphics benchmark, and the number of results in it is very limited: only 9 results are available for the 5900HS iGPU.

    NVIDIA MX550 PassMark test results: MX550 vs. Ryzen 9 5900HS

    The MX550 graphics card is based on the TU117 GPU, the slowest variant of the Turing processor. It doesn’t have ray tracing or DLSS, but I don’t think anyone would want to use them given its baseline performance. So far, NVIDIA hasn’t confirmed the heat dissipation of the MX550. Most likely, we will receive more information by the time of its release, which is scheduled for «this spring».


    Videocardz

    The specs of the new NVIDIA GeForce RTX 3090 Ti graphics card, which is expected to offer an incredible 1 TB/s of video memory bandwidth, have appeared on the Web.

    The RTX 3090 Ti is reported to have 24 GB of GDDR6X memory at 21 Gb/s. Thus, with a 384-bit bus, the total throughput will reach 1008 GB/s.

    PCB GeForce RTX 3090 Ti

    As for the GPU, it will contain 84 SMs, for a total of 10,752 CUDA cores (versus 10,496 across 82 SMs in the RTX 3090). Alongside them the card will offer next-generation RT cores, Tensor cores and an updated streaming-multiprocessor design. The GPU's base frequency will be 1560 MHz, boosting to 1860 MHz. The card's heat output is rated at 450 W, 100 W more than the RTX 3090.

    PCIe 5.0 Auxiliary Power Slot

    In addition, the new GeForce RTX 3090 Ti will be the first PCIe Gen 5.0 compatible graphics card, with a single 16-pin auxiliary power connector capable of delivering up to 600 W.


    According to rumors, the card will launch in January, but some information about it has already begun to appear. The GeForce RTX 3080 Ti Mobile will offer the highest performance in the Ampere family for mobile platforms. To that end it uses faster memory than the RTX 3080 and a higher power limit: memory speed rises from 14 Gb/s to 16 Gb/s, and heat dissipation from 165 W for the current model to 175 W.

    NVIDIA RTX Mobility

    The Ti version is rumored to have more CUDA cores and TMUs, though exact specifications are unknown; instead of the 6144 CUDA cores in the RTX 3080, the Ti version is expected to get 7424.


    TechPowerUp

    Everyone knows that NVIDIA is preparing a desktop GeForce RTX 3050 for January, but it will come in two variants, with different amounts of video memory and based on different GPUs.

    According to recent rumors, the RTX 3050 will be released based on the GA106-150 with 2560 CUDA cores and 8GB of VRAM, and based on the GA106-140 with 2304 CUDA cores and 4GB of VRAM. This is contrary to previous rumors that the GPU will contain 3072 CUDA cores.

    NVIDIA GeForce RTX 3050

    It is puzzling why the GA106 processor was chosen for this board, since the GA107 almost exactly matches the required configuration (2560 cores, 128-bit bus). The answer may lie in the need to salvage partially defective higher-tier chips, or in the company's plans to later release an RTX 3050 Ti; in the latter case, using the same GPU across models reduces production effort for board partners.

    Everything will be known for sure on January 4, when NVIDIA should officially present the video card. The release of the GeForce RTX 3050 should take place on January 27th.


    VideoCardz

    As you know, NVIDIA is still expanding its RTX 30 line of video cards. Currently the cheapest desktop model is the RTX 3060, but according to rumors, NVIDIA will soon add the RTX 3050 to the lineup.

    The RTX 3050 and RTX 3050 Ti are currently available only in laptops, so this will be the first desktop model.

    NVIDIA GeForce RTX 3050

    The RTX 3050 will reportedly be based on the GA106-150 GPU with 3072 CUDA cores. The memory subsystem will be represented by 8 GB GDDR6 with a bus width of 128 bits. The desktop NVIDIA RTX 3050 is said to be faster than the GTX 1660 Super, but slower than the RTX 2060 12GB. Given these estimates, we can say that the new NVIDIA accelerator is designed to strengthen the competitive position against the upcoming AMD RX 6500 XT and Intel Arc Alchemist 128EU cards.

    This time there are rumors that the company is preparing to re-release the GeForce RTX 2060 video card next year.

    NVIDIA already resumed production of the RTX 2060 back in January; the re-released card will carry exactly the same specifications as the existing model, except for the amount of video memory, which will grow to 12 GB.

    NVIDIA GeForce RTX 2060

    The firm has already begun informing its manufacturing partners about this card, which should go on sale by the end of this year. This will give manufacturers enough time to launch the new model by 2022.

    Clearly, NVIDIA’s decision to relaunch the RTX 2060 with more memory is due to the catastrophic shortage of the RTX 3060, for which speculators are asking for a minimum of $650, double the recommended price.


    Kit Guru

    NVIDIA GeForce RTX 3060 and RTX 3060 Ti graphics cards will be back in limited supply in September. This is reported by the Chinese store IT Home.

    This news is sure to disappoint anyone who dreamed of buying a new video card just as prices were slowly stabilizing and second-quarter shipments had risen to 123 million units.

    NVIDIA RTX 30 family

    The site notes that shipments of the RTX 3060 and RTX 3060 Ti will drop by 50% in September compared to the first 20 days of August. This is also confirmed by representatives of many manufacturers, who are actively discussing the situation on forums. The supply squeeze will last at least until the end of September, after which things should gradually begin to improve.

    The VideoCardz site believes that the problem will not be limited to the two models above but will spread to the entire output of NVIDIA and AMD cards. The cause of the shortage is said to be another cut in video card production in China due to new COVID-19 lockdowns.


    According to recent information, video cards based on Ada Lovelace GPUs will consume 400-500 watts of power while delivering roughly twice the performance of the current generation.

    Ada Lovelace GPUs are said to provide the same kind of performance leap we saw from Maxwell to Pascal, and that one was fantastic. Everyone remembers how good the GeForce GTX 980 Ti was, but the GeForce GTX 1080 Ti was a major technological breakthrough.

    NVIDIA Ada Lovelace

    Between now and the release of the RTX 40-series graphics cards, we will get an update to the Ampere-based GeForce RTX 30 series, which will receive the “SUPER” suffix. This should happen before the end of this year or at the very beginning of the next. As for GeForce RTX 40 graphics cards with the Ada Lovelace processor, they should be expected in the second half of the year.



    TweakTown

    Next generation NVIDIA GPUs, Ada Lovelace, have been on everyone’s lips in recent days. It seems that the company has finished the design phase of the GPU, which means it’s time for rumors about its characteristics.

    So, the AD102 processor will become the flagship of the next consumer generation of NVIDIA video cards. It will be manufactured in 5nm at TSMC’s factory. The chip will contain 18,432 CUDA cores. For comparison, GA102 in RTX 3090 has 10,496 CUDA cores. The expected clock speed of the new GPU will be 2.2 GHz or higher, which will provide computing performance at the level of 81 teraflops. Again, for comparison, the RTX 3090 is 35.5 teraflops.
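The teraflops figures quoted above follow directly from cores × clock × 2 FLOPs per cycle (one fused multiply-add). A quick sanity check; the RTX 3090's 1695 MHz boost clock is its published spec, while the 2.2 GHz AD102 clock is only the rumor cited in the text:

```python
# Rough FP32 throughput: CUDA cores x clock x 2 FLOPs (one FMA) per cycle.
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """FP32 TFLOPS assuming one fused multiply-add (2 FLOPs) per core per cycle."""
    return cuda_cores * clock_ghz * 2 / 1000.0

ad102 = fp32_tflops(18432, 2.2)    # rumored next-gen flagship figures
ga102 = fp32_tflops(10496, 1.695)  # RTX 3090 at its official boost clock

print(round(ad102, 1))  # ~81.1 TFLOPS, matching the figure in the text
print(round(ga102, 1))  # ~35.6 TFLOPS, close to the 35.5 quoted
```

The small gap between 35.6 and the quoted 35.5 TFLOPS comes down to rounding of the boost clock.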

    NVIDIA GPU

    The biggest change will be the use of Micron’s new 24 Gb/s GDDR6X video memory, noticeably faster than the current 19.5 Gb/s memory in the RTX 3090. Since the AD102 has a 384-bit bus, total throughput will be 1152 GB/s, which is 23% more than NVIDIA’s current flagship.
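The bandwidth arithmetic checks out: a 384-bit bus moves 48 bytes per transfer, so the totals and the 23% delta follow directly from the per-pin data rates quoted in the text:

```python
# Memory bandwidth = (bus width / 8) bytes per transfer x per-pin data rate.
def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * data_rate_gbps

ad102 = bandwidth_gbs(384, 24.0)    # rumored 24 Gb/s GDDR6X
rtx3090 = bandwidth_gbs(384, 19.5)  # current 19.5 Gb/s GDDR6X

print(ad102, rtx3090)                 # 1152.0 936.0
print(round(ad102 / rtx3090 - 1, 2))  # 0.23 -> the quoted 23% uplift
```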

    The next stage in the preparation of the Ada Lovelace processor will be its pilot production. It is unlikely that a test sample will be ready before the end of the year. The final product should be expected in the fourth quarter of 2022. Given all the changes, we can assume that the performance of the future GeForce RTX 4090, compared to the GeForce RTX 3090, will double.



    When it comes to the notebook market, consumers always want the best technology at the right price. Therefore, NVIDIA GTX 1050 Ti and GTX 1060 video cards remain the most popular. But soon everything can change thanks to the GeForce RTX 3050 Ti mobile accelerator.

    This video card will certainly appeal to anyone who wants to buy a gaming laptop for relatively little money. And now Videocardz has published a screenshot of GPU-Z with information about this card. As you can see, the graphics card has a peak frequency of 1485 MHz and contains 4 GB of GDDR6 video memory. This information is hardly extensive, but it’s safe to say that while this will be one of the simplest NVIDIA mobile cards on the market, it will provide sufficient performance in most games.

    It’s not yet known when this graphics card will be available, but reviewers expect it to be available for purchase in the summer.

    GPU-Z specs


    Moreover, specifications and even benchmarks have appeared for the GeForce RTX 3080 Ti video card, which is based on the GA102-225 processor.

    The GA102-225 GPU is expected to contain 10,240 CUDA cores, paired on the board with 12 GB of ultra-fast GDDR6X memory. Such a configuration will provide unthinkable ETH mining performance: 119 MH/s. That is even more than the GeForce RTX 3090 Founders Edition and MSI GeForce RTX 3090 SUPRIM X, whose mining performance falls in the 95-115 MH/s range.

    NVIDIA GA102-225 GPU

    The tested RTX 3080 Ti ran at a 1365 MHz base GPU clock and a 1665 MHz boost clock, lower than both the GeForce RTX 3080 and the GeForce RTX 3090.

    NVIDIA is rumored to be working on new GPUs codenamed GA106-302 to replace the GA106-300 found in the RTX 3060. The new chips will have a new PCI Device ID, which means they won’t work with older drivers, and the mining limiter should function again.

    GeForce RTX 3060

    In addition to fixing the problem of the leaked unlocked driver, the chip will also get “further mechanisms” to limit Ethereum mining on the RTX 3060. The same measures will later be used on the RTX 3080 Ti and RTX 3070 Ti graphics cards.

    NVIDIA hasn’t announced anything yet. If the above information is correct, then the GA106-302 GPU will be in the news in a month.



    KitGuru


    When choosing competitors for the GTX 580, the main thing is to stop in time. The GTX 480 is the obvious pick: how else do you measure success than by putting the previous flagship next to the new one? Then the questions begin. Should the GTX 480 be compared with the HD 5870? No fortune teller needed: at the moment it is the single-chip flagship of the reds (yes, I still call them that out of old habit), and you need to know how the strongest from one camp feels in the ring with the strongest from the other. For balance you then need the 5970 as well — it will help decide whether to take a second 5870, and in general, if the 5870 is the single-GPU flagship, the 5970 is the multi-GPU flagship. “Two-headed” it may be, but it is AMD’s most productive creation sold as a single unit... And so on.

    Therefore, for the sake of simplicity, I took three more cards: Radeon HD 5870, GeForce GTX 480 and GeForce GTX 470. You can see their frequencies, video memory size and other characteristics in the table.
    Yes, that’s right: this time it was easier to skip overclocking. Firstly, it would require a change of cooling; secondly, the GTX 580, as mentioned in the first part, has tricky voltage control, so “adding” volts to the core for extra speed on the fly, in half an hour, was not an option. Thirdly, if you overclock, then at least the GTX 480 and HD 5870 should be overclocked too, and that again takes time. So we put it off for the future, deciding that if you do it, you do it properly, or not at all.

    Eight of the most popular applications were selected from the multitude of test programs. Of the “relevant” ones, only the new shooter Metro 2033 was left out, due to wildly inadequate results on NVIDIA video cards; given normal performance elsewhere, the problem presumably lies in the game or its benchmark. Graphics settings in all games were set to maximum, with 8x anisotropic filtering and 4x multisampling enabled. There are only two resolutions, 1680 x 1050 and 1920 x 1080, the most widespread ones. I did not use 1280 x 1024 given the class of cards under test: the owner of a high-end video card is unlikely to run it at such a low, “office” resolution. And 2560 x 1600 was excluded for its low prevalence: monitors capable of that resolution are still prohibitively expensive, and those who can afford them are unlikely to be reading this article. With the formalities done, I turn to the results.

    Battlefield: Bad Company 2
    In this test the Radeon HD 5870 barely catches up with the GTX 470; against the former and current single-chip “green” leaders there is nothing to discuss. Notably, as resolution grows, the performance drop for the GTX 480 and especially the GTX 580 is much larger than for the HD 5870. And it is not at all certain that a single GTX 580 can provide acceptable performance at 2560 x 1600; for some, 40 average fps will not be enough.

    S.T.A.L.K.E.R.: Call of Pripyat
    The results of all video cards in this game could be called ideally indicative for NVIDIA: no sharp dips or spikes, the gap between the cards is roughly the same at both resolutions, and the participants line up in the right order from a marketing point of view: at the very bottom is the HD 5870, barely overtaken by the GTX 470, while first and second place go to the GTX 580 and GTX 480 respectively. True, GTX 470 owners will most likely have to give up some of the eye candy: 40 average fps is clearly not enough, especially given the frequent transitions from dark corridors to vast open spaces. But I do not advise turning off anti-aliasing: the “jaggies” terribly spoil the overall impression of the game.

    Crysis Warhead
    “Heavy” Crysis is heavy indeed... Only a GTX 580 paired with a 1680 x 1050 monitor guarantees a good impression. If you stop worrying about every stutter, the 480 will do, but nothing more. The HD 5870 and GTX 470 provide a conditionally playable mode, not letting the average fps drop below 30. Owners of the last two cards will definitely have to abandon the Ultra High graphics preset or overclock seriously: with a change of cooling, a GPU voltage increase, and so on down the list.

    Tom Clancy’s H.A.W.X. 2
    NVIDIA strongly recommended that all reviewers include the newly minted aerial arcade (yes, arcade) in their list of test applications. The reason for this insistence is clear at first glance at the graph: the GTX 580 racks up 2.5 times more fps than the HD 5870, and that is regardless of resolution! It would be fair to say that this benchmark is reasonable to use only for comparing NVIDIA cards of the 400 and 500 series; Radeons, even the newest ones arriving in mid-December, have nothing to do here. At all.

    Call of Duty: Modern Warfare 2
    We can say that MW2 is a very tolerant game: whatever video card you stick into the slot, the fps is everywhere, if not exorbitant, then “comfortable++” (that is, with a good margin). I suspect it is the lightweight engine combined with good gameplay that made this title so popular. If we disregard comfort and judge only by the numbers, then the GTX 470 must be recognized as the outsider, the Radeon HD 5870 is second from the end (finally!), while the 480 and 580 take second and first place respectively.

    Lost Planet 2
    The first version of this game was often used by the Californians to demonstrate what a dud the Radeon HD 2900 XT was: the then leader, the 8800 GTX, beat it several times over. The situation later improved, of course, but NVIDIA’s brainchild still came out ahead of the reds in this game. The second part inherited the tastes of its predecessor (no surprise, the engine has not changed. — Capt. Obvious). As for the innovations... Tessellation, of course! It is now fashionable to have tessellation in a game. I saw one funny video about LP2 where the host of some presentation kept jumping at the screen and poking it with a pointer: here is tessellation, here is a little tessellation, and here, look, look, solid tessellation! He didn’t put it quite that way, of course, but... you understand (smile). Later I watched that video in better quality and realized that even peering intently at the 720p image (I earnestly ask the editor not to cut out the last few words: the expression is clumsy but apt), without being told where to look, I could not see the mega-tessellation at all...

    I digress a little. So, what do we have in the results... Everything is clear: owners of HD xxxx cards are assigned resolutions of 1680 x 1050 and below, without anti-aliasing. Even if we assume a glitch crept into the results somewhere and add another ten fps to the 5870, the picture will not change much. Having fun shooting alien creatures at maximum quality settings is possible only for owners of the eighties — the ones from NVIDIA. The rest will have to compromise on quality and/or resolution.

    Far Cry 2
    To be honest, I never understood what fans see in this game. The video cards, however, behaved oddly, especially the 470: a 35% gap between its results at the two resolutions, against 10-15% for the other three cards (two of which are the 470’s older architectural siblings), looks strange. Still, this does not stop the former and current NVIDIA flagships from sharing second and first place, putting the 470 in third and leaving the outsider’s last place to the opponent from the “red” camp. If we forget the numbers and look at the picture as a whole, it hardly matters which of the four tested cards runs the game: the minimum of 75 fps shown by the Radeon HD 5870 is enough for comfortable play.

    3DMark Vantage
    Futuremark’s test/synthetic/gaming suite was the second of the eight applications to place the HD 5870 third rather than fourth overall. But, as has been noted repeatedly, translating the balance of power in “parrots” (synthetic points) into fps is risky; the chance of getting it wrong is great. Old-timers will remember the days five or seven years ago, when rendering approaches were far fewer and the biggest parrot count all but guaranteed success in every more or less modern game. No need to look far for an example! Look at the gap between the GTX 480 and the GTX 580, and now at all the other charts... How many games show that same (percentage) difference? With a stretch, four. One could argue that the two remaining ones, Far Cry 2 and MW2, are too old at heart to properly “appreciate” the innovations in the 580, but then why do H.A.W.X. 2 echo them? And Bad Company 2 at 1920 x 1080? There you go. So...

    Conclusions
    ...Advice: if you are choosing a high-end solution not for show but for good playability, make a list of the games you play most often and evaluate performance in them. Unfortunately, this trick won’t work with games that have not yet been released; there, be guided by your wallet. Now for each video adapter in detail. The GeForce GTX 580 lived up to the expectations placed on the new architecture. The only downside is that it came out much later than planned, but there is an upside too: the GTX 480 made it possible to identify flaws and, where possible, fix them by the 580’s release. At least that is how it looks from an ordinary user’s point of view. One can philosophize at length about behind-the-scenes struggles, marketers’ efforts, and skimming the cream off the same technology twice, but when you walk into the store, those arguments, alas, will do you no good.

    Should you buy a 580 now? Depends what for. Trading a 480 in for one is definitely not worth it: the end will not justify the means. And those for whom the rustle of banknotes is no concern are unlikely to deliberate long anyway. The GTX 580’s advantage over AMD’s current single-chip flagship is significant, mainly thanks to architectural changes and refinements. If you are a fan of AMD products, wait for the announcement of the 6970 and 6950: who knows, the newcomer may turn out no worse, or perhaps even better... though the scarcity of new products hardly encourages price tags that match the cards’ actual standing.

    The GeForce GTX 480 is still strong and shows respectable results: there are very few games that ran poorly on the 480 but do well on the 580. In my personal opinion, it is easier to buy a second 480 right after the New Year and assemble an SLI pair. It will draw a lot of power and run hot, but in return you get a tandem you can live comfortably with until the 580’s successor arrives (read: another year). Incidentally, a water cooling system helps a great deal in taming the 480’s hot temper — one from the high-performance category. It will cost the owner 500-1000 bucks, but, as practice shows, with the right choice of components it will survive two or three upgrades without any problems.

    The GeForce GTX 470 is a decent gaming graphics card: moderately hot, moderately fast, an optimal price-performance combination for the middle segment. Someone may object that the 460 beats the 470 on that metric; I will answer that this is a matter of personal preference. The Radeon HD 5870 is already rather weak, and its replacement arrived just in time. In principle, you can buy another 5870 and assemble a CrossFireX setup: that will help you hold out another year. Its performance will be roughly comparable to a pair of GTX 470s and will lose heavily to a duo of 480s, but both the appetite and the price of such a solution are much lower. That is the overall layout. You can shift it one way or another at will: change the cards’ clocks in either direction, upgrade the cooling, assemble various tandems, buy lightly used cards (sometimes a very profitable move financially). In short, you have your reference points; from here, be guided by personal preference and the funds available. U.P.

    How to measure gaming performance?
    A rare game developer cares about the needs of card testers. Whatever isn’t spent paying designers, artists, and programmers gets pumped into the marketing campaign: it is much easier (and cheaper!) to convince the user that the game is good than to let him reach that conclusion himself (after actually making the game good, of course). Hence built-in performance measurement tools are not found everywhere. On the “well done” list is, for example, S.T.A.L.K.E.R.: it allows you to record and then play back a demo launched from the console. For other games, such as Unreal Tournament 3, Crysis, and Far Cry 2, third-party craftsmen write benchmark programs that handle the entire routine for the tester.

    For games with neither built-in nor external tools for measuring “nimbleness”, the Fraps utility (which calculates average fps) is popular, along with scripts that replay a route after the player. But scripts are a boon available only to those who understand programming (I almost wrote “users” here but stopped in time) or can pay someone to write one. The rest sit in front of the PC time after time, repeating the same mouse movements. Bad? Yes. Worse still, the error in Fraps results can vary by 10-15%, and sometimes 20%, from run to run. It’s fine when a script drives everything: set it to run the game ten times, go for a walk, come back, collect the results, and, say, throw out the two or three passes with abnormally high or low numbers. But if you do every pass by hand, you won’t manage more than two or three runs; time is precious.
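The run-averaging procedure described above — many passes, discard the abnormal ones, average the rest — can be sketched in a few lines. The exact trimming rule (drop the single highest and lowest run) is my assumption, not the author's precise recipe:

```python
# Sketch of the benchmark run averaging the text describes: do N passes,
# drop the abnormally high/low runs, average what remains.
def trimmed_average_fps(runs, drop=1):
    """Average fps over repeated benchmark runs, discarding the `drop`
    highest and `drop` lowest results to dampen run-to-run error."""
    if len(runs) <= 2 * drop:
        raise ValueError("not enough runs to trim")
    kept = sorted(runs)[drop:-drop]
    return sum(kept) / len(kept)

# Ten hypothetical passes with two outliers (Fraps-style scatter); the
# trimmed mean ignores the 31.0 and 58.0 fps anomalies.
runs = [42.0, 44.5, 43.1, 58.0, 41.8, 43.9, 31.0, 42.6, 44.0, 43.3]
print(trimmed_average_fps(runs))
```

With a larger `drop`, more suspect runs are excluded, at the cost of needing more passes overall.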

    But, as they say, progress does not stand still, and despite the dominance of game consoles and the excellent projects for them, many continue to torment the PC with games. Consequently, there is interest in graphics cards and in reviews like the one you are reading now, and attempts to build a tool that makes measuring in-game performance easier keep on coming.
    P. S. The author wrote the last paragraph after a sleepless night and a battle with Metro 2033, in the hope that someone will hear him and share a link to a benchmark for this demonic amusement.

    Review of MSI GeForce GTX 580. The top video card of today

    Introduction
    In today’s review we present the top video card of our time: the NVIDIA GeForce GTX 580. It can justifiably be considered a solution for enthusiasts, since this level of performance is unlikely to be needed by the average gamer, but is essential for one who wants to play at resolutions above 1920 x 1200. Of course, if the user has the means, why not buy a top-end video card and forget about replacing it every time a new gaming title appears?

    Actually, there is a catch. As a rule, top solutions cost top money, and within six months new video cards arrive that push the former flagship out of the top price bracket and into the middle one. Roughly that fate befell the top card of the past generation, the NVIDIA GeForce GTX 480, which today performs at the level of a mid-range card, the NVIDIA GeForce GTX 570. So those who want NVIDIA’s former flagship now have an alternative that costs up to 40% less.
    To date, AMD has only one previous-generation product to set against NVIDIA’s top solution presented at the end of last year: the AMD Radeon HD 5970. This card is based on two graphics cores and for quite a long time was the most productive video card on the market. Because graphics work has to be split between two GPUs, the solution periodically suffered so-called freezes: intermittent stuttering of the image during gameplay. AMD’s painstaking driver work has eliminated the problem, and demand for the previous-generation Radeon HD 5970 remains fairly high.

    The absence of a top product haunted NVIDIA’s engineers for several months, and in November 2010 they presented a new video card: the NVIDIA GeForce GTX 580. It is the first graphics product to make full use of the Fermi architecture on the 40-nanometer process: compared with the previous-generation GeForce GTX 480, the number of stream processors has grown from 480 to 512.

    From a marketing standpoint, launching the new GeForce GTX 580 in November 2010 was no accident. With its two new products, the GeForce GTX 580 and GeForce GTX 570, NVIDIA seriously challenged existing AMD solutions and flooded world markets with new cards just in time for the New Year shopping season. Who wouldn’t want a new graphics card for the New Year? Only someone who doesn’t play computer games, and there are practically none of those today. It is quite possible the product had been ready for release for some time, and the announcement was only waiting for the element of surprise and the sales season.

    Only in January 2011 did AMD introduce its new Radeon HD 6970 and Radeon HD 6950, which could not compete directly with NVIDIA’s top solution. These cards adequately contest the weaker GeForce GTX 570 and GeForce GTX 560 Ti, leaving only the dual-chip AMD Radeon HD 5970 to face the flagship head-on.

    In today’s review, we will try to fully evaluate the capabilities of the new product from NVIDIA in the face of the MSI GeForce GTX 580 graphics solution.


    Equipment


    The video card comes in a blue and white box. The front advertises the software bundled with the card, MSI Afterburner, which lets you control the card’s parameters and overclock it, including a software volt-mod. Enthusiasts can also manually adjust the fan speed of the cooling system, though we have never felt an urgent need for that function.

    The top flap of the box lists the technologies supported by the graphics card. The set differs little from that of the previous-generation GeForce GTX 480. It should be noted, though, that the card carries 1536 MB of GDDR5 video memory, and this deserves criticism on our part. New video cards from AMD carry 2 GB of video memory on board, which can sharply affect sales. Many modern users still choose a video card by two characteristics:

    — cost,

    — the amount of video memory.

    The lower the cost of a video card and the higher the amount of its video memory, the more chances this solution has to be sold.


    On the back of the video card, the manufacturer lists the key features of the solution. At the same time, MSI focuses on its innovations in this product. In particular, the presence of MSI AfterBurner software, which works with almost any video card from the NVIDIA and AMD series. Working with the program is quite easy and convenient, especially for those who already have experience with the free Riva Tuner utility.

    The company highlights the capacitors used in this graphics solution: all are solid-state with an extended service life of up to 40 years in an office environment. In truth, an office computer has no need for such a video card, unless the office houses a web or design studio, and even in those extreme cases you could get by with more modest graphics products or with solutions from the professional segment, such as Quadro.


    Delivery includes:

    — video card,

    — instruction,

    — two power adapters,

    — driver disk,

    — DVI-to-VGA adapter,

    — mini-HDMI-to-HDMI cable.

    The inclusion of an HDMI cable in the package is a justified step, although its length is unlikely to suffice for anything more ambitious than connecting a monitor. Still, it does demonstrate that cables with a mini-HDMI port at one end and HDMI at the other exist in nature.

    Visual inspection of the video card


    The MSI-branded graphics card is of fully reference design, as confirmed by the NVIDIA inscription near the PCI Express slot. The cooling system shroud carries the branding from the retail box but otherwise does not differ from the original NVIDIA solution.

    If we compare this cooling system with the one installed on the previous-generation GeForce GTX 480, here you will not find heat pipes. The new cooler retains only the blower turbine, which pushes air through the radiator and expels it outside the case, while the radiator itself has changed significantly. Above all, the heat pipes are gone and the whole design follows the principle of Sapphire’s Vapor-X coolers: between the copper base and the radiator sits a vapor chamber that performs the function of heat pipes, only more efficiently.

    The turbine of the cooling system is equipped with a PWM speed controller. At low loads on the video card, the turbine speed is insignificant and its operation is not audible, with a gradual increase in load, the turbine speed increases with a simultaneous increase in operating noise from air flows. In general, the original cooling system from NVIDIA cannot be called noisy.


    The video card has two connectors for SLI bridges, allowing up to three cards to be combined into a single array. Naturally, your motherboard must support installing that many video cards and 3-way SLI technology.


    At peak loads, the power consumption of the video card reaches 244 watts, so to ensure compatibility with PCI-Exp 16x slots of the first revisions, two connectors are installed for additional power supply of the video card. One connector is six-pin, the other connector is eight-pin.
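The two auxiliary connectors make sense against the PCI Express power budget: 75 W from the x16 slot, 75 W from a 6-pin plug, and 150 W from an 8-pin plug. A minimal check (these are the specification limits, not measured values):

```python
# PCI Express power delivery limits per the CEM specification.
SLOT_W = 75       # x16 slot
SIX_PIN_W = 75    # 6-pin auxiliary connector
EIGHT_PIN_W = 150 # 8-pin auxiliary connector

budget = SLOT_W + SIX_PIN_W + EIGHT_PIN_W
peak_draw = 244  # GTX 580 peak consumption quoted in the text

print(budget)               # 300
print(peak_draw <= budget)  # True: 6-pin + 8-pin comfortably covers the card
```

A single 6-pin connector (75 + 75 = 150 W) would not have been enough, which is exactly why the board needs the 8-pin plug.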

    On these video cards, a standard set of ports for image output is soldered: 2xDVI and 1xmini-HDMI.


    Video card specifications

    1. Shader (universal computing unit) clock: 1544 MHz;

    2. Universal computing units: 512;

    3. Texture units: 64;

    4. Blending (ROP) units: 48;

    5. Core clock: 772 MHz;

    6. Video memory size: 1536 MB;

    7. Video memory type: GDDR5;

    8. Video memory clock: 1002 MHz;

    9. Memory bus width: 384-bit;

    10. Maximum power consumption: up to 244 watts.
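The figures in the spec sheet are internally consistent: Fermi's shader domain runs at twice the core clock, and GDDR5 transfers four bits per pin per clock. A quick derivation using only the numbers above:

```python
# Deriving the GTX 580's headline numbers from its spec sheet.
core_mhz = 772
mem_mhz = 1002   # GDDR5 command clock
bus_bits = 384

shader_mhz = core_mhz * 2        # Fermi shaders run at twice the core clock
effective_mem_mhz = mem_mhz * 4  # GDDR5 is quad-pumped: 4 transfers per clock
bandwidth_gbs = bus_bits / 8 * effective_mem_mhz / 1000

print(shader_mhz)               # 1544, matching spec line 1
print(effective_mem_mhz)        # 4008 MHz effective
print(round(bandwidth_gbs, 1))  # 192.4 GB/s
```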

    The MSI product presented here operates at reference frequencies, although some time later the company announced versions with a modified cooling system and factory overclocking. We hope to present those products in upcoming reviews.

    Test configuration

    The MSI GeForce GTX 580 video card was installed in our working configuration, which, although not top-notch, is quite capable of claiming the title of a modern configuration:

    1. Intel Core i7 920 processor.

    2. ASUS P6T motherboard.

    3. 2×3 GB Samsung Original DDR3-1600.

    4. WD 1 TB WD1001FALS Caviar Black SATA II.

    5. Thermaltake Mambo case.

    6. ASRock USB 3.0 PCI-Exp x1 dual port board.

    7. Room temperature: 27 degrees.

    8. The system is assembled in a closed case.

    All the graphics solutions presented below were tested on this configuration. An OCZ 920-watt power supply served as the power source.

    1. The temperature regime of the video card.

    The presented test results show that the reference cooling system of the GeForce GTX 580, while not claiming to be the most efficient, copes quite well with a card running at reference frequencies.

    Worth noting is the sharp increase in cooling efficiency compared to the GeForce GTX 480 series cooler, which worked on the verge of overheating after even a slight overclock, or whenever another hot summer arrived.

    During the software volt-mod the core voltage was raised to a maximum of 1.15 volts, which increased the card’s overclocking potential while pushing the maximum temperature to 97 degrees. Once the temperature climbed past 92 degrees, the overclocking potential dropped sharply, so 1.13 volts proved to be the most effective voltage for this card.

    It should be understood that any volt mod must be accompanied by adequate cooling. Previously, it was proved that the lower the temperature of the core, the better it responds to an increase in voltage by increasing the overclocking potential.

    2. Video card overclocking

    Any suitable software can be used to overclock the MSI GeForce GTX 580; the most convenient, in our view, is the bundled MSI Afterburner.

    By default, the graphics card operates at the following operating voltages:

    — 0.96 volts in 2D mode,

    — 1.05 volts in 3D mode.

    Without a volt-mod, the video card overclocked to the following frequencies:

    — 850 MHz for the core,

    — 1174 MHz for video memory.

    Increasing the core voltage to 1.13 volts made it possible to achieve the following frequencies:

    — 894 MHz for the core,

    — 1174 MHz for video memory.

    3. Evaluation of the performance level of the video card in the game Crysis Warhead.

    A fairly popular game, which is an objective criterion for the level of performance of a video card today.

    The test results show that the GeForce GTX 580 can be considered top only among single-GPU solutions; the performance leader in this gaming test remains AMD’s dual-GPU Radeon HD 5970.

    If we compare the product with the previously released solution GeForce GTX 480, it should be noted that the new product has become more productive, while it began to consume less energy and work more quietly.

    4. Video card testing in Resident Evil 5

    This game test clearly demonstrates that a dual-chip solution shows its worth only in games that support such video cards. Here that support is absent, so NVIDIA’s graphics product, the GeForce GTX 580, leads the test.

    5. Video card testing in Far Cry 2.

    This game is a popular product and it fully supports many modern video cards. Traditionally, it prefers graphics solutions from NVIDIA, which is clearly seen from the presented performance charts. Top solutions from AMD and NVIDIA demonstrate approximate parity in terms of performance.