GeForce 8800 GTX review

We’ve been waiting for the GeForce 8800 GTX since we first got wind that Nvidia’s next-generation 3D card would be out before the end of this year. It’s everything we’d hoped it would be. For a suggested street price of $599, the GeForce 8800 GTX brings tremendous processing power to current-generation games. It’s also the first card to market that will support all of the 3D gaming-related features of Windows Vista and DirectX 10. The initial release of next-gen games is a bit far off. The poster child, the 3D shooter Crysis, is set to debut in March 2007, and even that game might not put all of the next-gen bells and whistles into play. Still, the GeForce 8800 GTX is so powerful, even compared to ATI’s fastest dual-card combination, that there’s no reason to spend roughly $1,000 on a pair of Radeon cards when you can outperform them with a single $600 GeForce 8800 GTX. That, and the fact that Nvidia has finally caught up to ATI’s image-quality advantages, earns Nvidia’s newest card our Editors’ Choice award for high-end 3D graphics cards.

Because of design changes in the GeForce 8800 GTX chip’s new architecture, we need to consider some of this card’s specs differently than we have in the past. The basics are the same. The GeForce 8800 GTX has a core clock speed of 575MHz, and it comes with 768MB of GDDR3 RAM clocked to 900MHz, for a 1,800MHz effective data rate. That memory rate is a significant uptick compared to the 800MHz RAM in Nvidia’s last flagship card, the GeForce 7950 GX2. But one of the main differences in the GeForce 8800 GTX’s architecture lies in how we consider its pipelines.
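Before we get to the pipelines, a quick back-of-the-envelope check on that memory spec (our own arithmetic, using the card’s 384-bit memory interface):

$$\text{bandwidth} = \frac{384\ \text{bits}}{8\ \text{bits/byte}} \times 1{,}800\ \text{MT/s} = 48\ \text{bytes} \times 1.8\ \text{GT/s} \approx 86.4\ \text{GB/s}$$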

In the past, we’ve said that a 3D chip has X amount of pixel pipelines and Y pipes for shader calculations. But because of the new specifications of DirectX 10, the GeForce 8800 GTX employs what’s called a unified architecture. In other words, no pipe is geared toward a particular task. Instead, the GeForce 8800 GTX comes with 128 stream processors, which can dynamically process whatever info is thrown their way. This means that if your card is processing a shader-intensive scene, it can tap from more of the pipeline pool to process that image, rather than being capped at 24 or 48 pipes because some of the other pipes are set aside for geometry only. This capability should give game designers much more flexibility in how they design games, knowing that if they can balance the workload properly, they can pump a lot of processing power into a given calculation.

What’s perhaps even more impressive about the GeForce 8800 GTX is its sheer horsepower. Its transistor count sits at 681 million, on a chip built with a 90-nanometer manufacturing process. That’s more than the two 271-million-transistor chips on the GeForce 7950 GX2 combined. To power a single GeForce 8800 GTX card, Nvidia recommends a 450-watt power supply in a PC with a high-end dual-core chip and a typical combination of internal hardware. But the trick is that the power supply must have two PCI Express power connectors to plug into the two sockets on the back of the card. Most modern power supplies should have the necessary connectors. If you want to add two 8800 GTX cards in an SLI configuration, however, you’ve got a challenge on your hands.

Nvidia hasn’t released a driver that will run the GeForce 8800 GTX in SLI mode as of the time of this writing, but it may have one out soon. Thus, we didn’t get to test it, but Nvidia did share the power supply specs with us. To run two GeForce 8800 GTX cards in SLI mode, Nvidia recommends at least a 750-watt power supply. But some of the recommended models on its SLI compatibility list go as high as 850 and even 1,000 watts. We suspect those higher-wattage recommendations will allow you some headroom for adding multiple hard drives and optical drives, as well as very high-end quad-core processors. Still, it’s clear that building a next-gen SLI rig will be no small undertaking, at least for now. Heck, many midtower PC cases are too small to accept a 1,000-watt power supply.

With no DirectX 10 games available to test on at the moment, we can’t speak to the GeForce 8800 GTX’s next-generation performance, aside from the fact that it’s the only card on the market that claims DirectX 10 compatibility. ATI’s next-gen card, code-named R600, was rumored to be released in January 2007, but we haven’t heard much about it so far. We imagine that ATI (whose acquisition by AMD has been finalized) will have a DirectX 10 card sooner or later, but right now Nvidia is the only vendor with something to show. And while we can’t really say who will win the battle for next-generation performance, the GeForce 8800 GTX dominates every single other card on the market right now.

One of the most important things to note about the GeForce 8800 GTX and its performance is that you would be smart to pair this card with a capable monitor that can go to resolutions of 1,600×1,200 or above. Nvidia calls this XHD (extreme high definition) gaming. Whatever you want to call it, if you’re not playing at high resolutions with antialiasing, anisotropic filtering, and other image-quality tweaks cranked, you’ll likely hit a CPU bottleneck, which means that you’re not giving the card enough to do. But when you get up to those high-quality settings, the results are amazing.

GameSpot was kind enough to provide us with benchmarks, as per usual. We suggest you check out their story, too; there are a number of screenshots taken during testing that show off the image quality. We’ll focus on frame rates. Our highlight here is The Elder Scrolls IV: Oblivion. That game has been considered the pinnacle of DirectX 9-based game programming and has humbled even ATI’s mighty Radeon X1950 XT CrossFire setup, which falls short of 60 frames per second (fps) even with no antialiasing. But the GeForce 8800 GTX blew past ATI’s highest-end configuration, scoring 64fps on that test.

Oblivion also lets us highlight how the GeForce 8800 GTX has pulled Nvidia even with ATI on current-gen image quality. ATI has had an advantage in certain games, most conspicuously Oblivion, because through an unofficial patch, Radeon cards let you turn on antialiasing and high-dynamic-range lighting simultaneously. The resulting image looks noticeably better than when you can use only one or the other, as is the case with GeForce 7-series cards. Not only can the GeForce 8800 GTX do both AA and HDR lighting, it also does them faster than a Radeon X1950 XT CrossFire rig. On that Oblivion test, the 8800 GTX scored an impressive 45 frames per second, which is much smoother than ATI’s 28fps.

You might notice that the GeForce 8800 GTX doesn’t win on every single test. On Half-Life 2: Episode One at 8X antialiasing, an ATI CrossFire setup edged it out. It’s worth noting that the GeForce 8800 GTX hit 86 frames per second there, though, so it’s not exactly slow. Even better, at the more demanding 16X antialiasing setting, the GeForce 8800 GTX’s score stayed basically the same at 84fps, while the CrossFire cards’ scores dropped off. This lends credence to Nvidia’s argument that the GeForce 8800 GTX delivers better performance at extremely high image-quality settings.

The other test it lost was Quake 4, wherein the Radeon X1950 XT CrossFire beat it at both resolutions by about 15fps. Again, even at 2,048×1,536, the GeForce 8800 GTX scored 68fps, so it’s by no means slow. But it’s also worth noting that a Radeon X1950 XT CrossFire setup costs between $900 and $1,000 for the master and slave cards, and they can’t do DirectX 10. The $600 GeForce 8800 GTX and its forward-looking capabilities are clearly the better deal.

We should note a couple of final thoughts here. The first is that with the GeForce 8800 GTX, Nvidia is also unveiling something called CUDA, which stands for Compute Unified Device Architecture. Because of the 8800’s complexity, Nvidia is offering a framework for programmers to write software that uses the GPU for intensive number crunching. For gamers, Nvidia showed us how developers might use CUDA to really ramp up game physics calculations, but Nvidia is also offering this capability to the medical community and anyone else who might benefit from a combination of intense image-processing and number-crunching power. Nvidia is still getting the word out on CUDA, so there’s no way to check it out right now. Nvidia also unveiled its new PureVideo HD software as a component of its new universal ForceWare driver, which debuts today and includes support for the GeForce 8800 cards. PureVideo HD will run on both the 8000-series and the older 7000-series GeForce cards, and it’s designed to enhance HD video content coming from your PC. We have a sample system in-house to play with, and we’re in the process of putting it through its paces. Look for a blog post with our impressions of PureVideo HD next week.
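CUDA wasn’t available for us to try, but to give a flavor of the data-parallel programming model Nvidia describes, here’s a minimal, hypothetical sketch; this is our own illustrative example, not Nvidia’s code, and the kernel and its names are ours:

```cuda
// Illustrative sketch of a CUDA kernel: a scaled vector add (y = a*x + y),
// the sort of bulk number crunching Nvidia pitches CUDA for. The GPU runs
// thousands of lightweight threads, one array element per thread.
#include <cuda_runtime.h>

__global__ void scaleAndAdd(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the final block
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                          // 1M elements
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));              // buffers in GPU memory
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));            // stand-in for real input
    cudaMemset(y, 0, n * sizeof(float));

    // Launch 4,096 blocks of 256 threads to cover all n elements.
    scaleAndAdd<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();                        // wait for the GPU

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```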

3DMark06
(higher scores indicate better performance)

1,280×1,024
Nvidia GeForce 8800 GTX          10,959
ATI Radeon X1950 XT CrossFire    10,832
Nvidia GeForce 7900 GTX SLI      10,655
Nvidia GeForce 8800 GTS           8,890
ATI Radeon X1950 XTX              6,792
Nvidia GeForce 7900 GTX           6,453

Need For Speed: Carbon
(in fps; higher scores indicate better performance)

                                 1,600×1,200, 8X AA, 16X AF   1,600×1,200, 4X AA, 16X AF
Nvidia GeForce 8800 GTX                      40                           43
Nvidia GeForce 8800 GTS                      28                           31
ATI Radeon X1950 XT CrossFire*               26                           16
ATI Radeon X1950 XTX*                        25                           30
Nvidia GeForce 7900 GTX SLI                  17                           18
Nvidia GeForce 7900 GTX                      10                           17

Half-Life 2: Episode One
(in fps; higher scores indicate better performance)

                                 2,048×1,536, 16X AA, 16X AF   2,048×1,536, 8X AA, 16X AF
ATI Radeon X1950 XT CrossFire*               49                           98
Nvidia GeForce 8800 GTX                      84                           86
ATI Radeon X1950 XTX*                       N/A                           76
Nvidia GeForce 7900 GTX SLI                  29                           74
Nvidia GeForce 8800 GTS                      61                           63
Nvidia GeForce 7900 GTX                     N/A                           30

The Elder Scrolls IV: Oblivion
(in fps; higher scores indicate better performance)

                                 1,600×1,200, 4X AA, 16X AF, HDR on   1,600×1,200, no AA, 16X AF, HDR on
Nvidia GeForce 8800 GTX                      45                           64
Nvidia GeForce 8800 GTS                      34                           54
ATI Radeon X1950 XT CrossFire                28                           51
Nvidia GeForce 7900 GTX SLI                 N/A                           51
ATI Radeon X1950 XTX                         18                           35
Nvidia GeForce 7900 GTX                     N/A                           32

Quake 4
(in fps; higher scores indicate better performance)

                                 2,048×1,536, 8X AA, 16X AF, transparency supersampling   1,600×1,200, 8X AA, 16X AF, transparency supersampling
ATI Radeon X1950 XT CrossFire*               84                          115
Nvidia GeForce 8800 GTX                      68                          100
Nvidia GeForce 8800 GTS                      49                           73
Nvidia GeForce 7900 GTX SLI                  41                           60
ATI Radeon X1950 XTX*                        51                           56
Nvidia GeForce 7900 GTX                      16                           34

Company of Heroes
(in fps; higher scores indicate better performance)

                                 2,048×1,536, 4X AA, 16X AF   1,600×1,200, 4X AA, 16X AF
Nvidia GeForce 8800 GTX                      45                           66.5
Nvidia GeForce 8800 GTS                      16                           65
Nvidia GeForce 7900 GTX SLI                  29                           47
Nvidia GeForce 7900 GTX                      29                           43
ATI Radeon X1950 XTX                         26                           40
ATI Radeon X1950 XT CrossFire                16                           25

F.E.A.R.
(in fps; higher scores indicate better performance)

                                 2,048×1,536, 8X AA, 16X AF   1,600×1,200, 8X AA, 16X AF
Nvidia GeForce 8800 GTX                      45                           68
ATI Radeon X1950 XT CrossFire*               33                           51
Nvidia GeForce 8800 GTS                      33                           48

8800 GTX Reviews | TechPowerUp Review Database


2007
Oct 21, 2007 Zotac 8800GTX Amp! Edition DriverHeaven
Sep 10, 2007 EVGA e-GeForce 8800 GTS KO ACS3 Digit-Life
Sep 4, 2007 Point of View GF8800 GTX EXO SGOverclockers
Jul 30, 2007 ASUS EN8800GTX TOP AQUATANK Digit-Life
Jul 18, 2007 Zotac GeForce 8800 GTX AMP! Edition Bit-Tech
Jul 7, 2007 PNY XLR8 8800 GTX OC HotHardware
Jul 5, 2007 Zotac GeForce 8800GTX AMP! Edition Tweaktown
Jul 5, 2007 ASUS EN8800GTX AquaTank Hartware (de)
Jun 21, 2007 ASUS 8800GTX 768MB TheTechLounge
Jun 16, 2007 XFX GeForce 8800 GTX Extreme Digit-Life
May 30, 2007 Zotac GeForce 8800 GTX OC Edition TechPowerUp
May 25, 2007 OCZ GeForce 8800 GTX Legit Reviews
May 24, 2007 Foxconn GeForce 8800 GTX X-bit labs
May 22, 2007 MSI GeForce NX8800GTX OC Liquid Tweaktown
May 18, 2007 Asus EN8800GTX 768Mb Red and Blackness Mods
May 10, 2007 ASUS EN8800GTX TechARP
May 5, 2007 ASUS EN8800GTX AquaTank Hexus
May 4, 2007 Foxconn GeForce 8800GTX Hard h3o
Apr 17, 2007 MSI NX8800GTX OC Liquid HardwareZone
Apr 11, 2007 ASUS EN8800GTX AquaTank Bjorn3D
Apr 10, 2007 Foxconn GeForce 8800 GTX HotHardware
Mar 27, 2007 ASUS EN8800GTX AquaTank/HTDP/ 768M OC workbench
Mar 22, 2007 Sparkle Calibre 8800 GTX 768MB OC MVKTech
Mar 5, 2007 Foxconn 8800 GTX Bjorn3D
Mar 3, 2007 Sparkle Calibre 8800GTX P880+OC Bjorn3D
Feb 28, 2007 OCZ Technology GeForce 8800 GTX Overclock Intelligence Agency
Feb 28, 2007 XFX NVIDIA 8800 GTX XXX Technic3D (de)
Feb 21, 2007 BFG NVIDIA 8800 GTX OC Technic3D (de)
Feb 21, 2007 EVGA e-GeForce 8800GTX KO ACS3 768MB TechPowerUp
Feb 20, 2007 Shootout at the 8800 GTX corral: ECS vs OCZ Hexus
Feb 19, 2007 MSI NX8800GTX-T2D768E-HD PCStats
Feb 8, 2007 XFX GeForce 8800 GTX 768MB Hexus
Feb 8, 2007 ASUS EN8800GTX HotHardware
Feb 8, 2007 Gainward 8800GTX XSReviews
Feb 7, 2007 Gainward Bliss 8800 GTX Technic3D (de)
Jan 29, 2007 EVGA e-GeForce 8800 GTX Barry’s Rigs’n’Reviews
Jan 27, 2007 XFX GeFORCE 8800GTX XXX NeoSeeker
Jan 26, 2007 XFX GeForce 8800 GTX XXX Guru3D
Jan 25, 2007 GeForce 8800 in SLI Tech Report
Jan 25, 2007 MSI NX8800GTX NVNews
Jan 25, 2007 MSI GeForce 8800 GTX HotHardware
Jan 24, 2007 BFGTech GeForce 8800 GTX Water Cooled HardOCP
Jan 19, 2007 ASUS EN8800 GTX HardOCP
Jan 18, 2007 BFGTech 8800 GTX Watercooled Bit-Tech
Jan 18, 2007 BFG GeForce 8800 GTX Water Cooled Guru3D
Jan 18, 2007 Foxconn GeForce 8800 GTX HardwareOC (de)
2006
Dec 27, 2006 XFX GeForce 8800 GTX SLI Legit Reviews
Dec 27, 2006 Foxconn GeForce 8800GTX Digit-Life
Dec 26, 2006 GeForce 8800GTX Allround-PC (de)
Dec 20, 2006 MSI GeForce 8800 GTX TechSpot
Dec 15, 2006 MSI GeForce 8800 GTX in 2560×1600 Digit-Life
Dec 15, 2006 Nvidia GeForce 8800 GTX X-bit labs
Dec 15, 2006 GeForce 8800 GTX HardwareZone
Dec 14, 2006 BFG GeForce 8800 GTX Guru3D
Dec 10, 2006 XFX 8800 GTX: SLI Bjorn3D
Dec 10, 2006 NVIDIA GeForce 8800 GTX PCModdingMy
Dec 10, 2006 BFG GeForce 8800GTX
Dec 10, 2006 Foxconn GeForce 8800 GTX Hexus
Dec 7, 2006 MSI NX8800GTX-2D3768E-HD Viper’s Lair
Dec 1, 2006 XFX GeForce 8800GTX Nordic Hardware
Dec 1, 2006 BFG 8800gtx HardwareEcke (de)
Nov 11, 2006 ASUS GeForce EN8800GTX/HTDP/768M
Nov 11, 2006 NVIDIA GeForce 8800GTX OC workbench
Nov 11, 2006 Leadtek WinFast PX8800 GTX TDH TrustedReviews
Nov 10, 2006 NVIDIA GeForce 8800 GTX TechARP
Nov 10, 2006 XFX GeForce 8800 GTX Legit Reviews
Nov 10, 2006 XFX GeForce 8800 GTX 768MiB Hexus
Nov 10, 2006 G80: NVIDIA GeForce 8800 GTX Bit-Tech
Nov 10, 2006 NVIDIA GeForce 8800 GTX Hexus
Nov 10, 2006 Sparkle GeForce 8800GTX Bjorn3D
Nov 10, 2006 XFX GeForce 8800 GTX Bjorn3D
Nov 10, 2006 Leadtek Winfast PX8800 GTX TDH Bjorn3D
Nov 10, 2006 NVIDIA GeForce 8800 GTX Viper’s Lair

Foxconn GeForce 8800 GTX Review

Written by John Yan on 1/1/2007 for PC
We’re on a Foxconn graphics card roll right now. First, I reviewed their GeForce 7950 GT with 512MB. Today, I look at the granddaddy of NVIDIA cards. It’s been a few months since the release of the GeForce 8800, so here’s my review of Foxconn’s top-end GeForce 8800 GTX.


The Foxconn GeForce 8800 GTX is the top-of-the-line enthusiast card currently on the market. Featuring 128 individual stream processors running at 1.35GHz and 768MB of GDDR3 memory, the Foxconn GeForce 8800 GTX is a DirectX 10-capable card. The core runs at 575MHz, with the memory running at 1800MHz DDR. Compared to the GTS version, the GTX runs 13% faster in the core, 11% faster in the stream processors, and 11% faster in the memory. The memory interface is 384-bit on the GTX, while the GTS is at 320-bit. You can read about most of the new features of the GF8800 in my preview, so I won’t go into them much again. The GeForce 8800 GTX is the same as the GeForce 8800 GTS I previewed, but faster.
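For a rough sense of the raw shader throughput those numbers imply, here’s a back-of-the-envelope estimate (our own arithmetic, assuming each stream processor retires one multiply-add, i.e. two floating-point operations, per clock; NVIDIA’s marketing figure is higher because it also counts a co-issued multiply):

$$128\ \text{SPs} \times 1.35\ \text{GHz} \times 2\ \text{FLOPs} \approx 345.6\ \text{GFLOPS}$$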

So what’s different between this and the GTS version? For one, the GTX card is a bit longer. You can see in one of the comparison photographs just how much longer it is. Another change is that there are TWO power connectors on this card. The GTX requires more juice, and you’ll need both power connectors plugged in for the card to work. While most cards have their power connectors facing to the right of the card, the ones on the GTX face up so that you can attach them a little more easily. I’ve found that a few cases interfered with horizontally facing power connectors, and this change will help in those situations, especially with two of them to connect. Finally, there are two SLI connectors at the top instead of one. There’s no support for the secondary SLI connector yet, but it’s there for the future. You can plug either one in for SLI to work between two GTX cards. You can imagine the monstrosity once you can connect more than two of these cards together, and the number of power lines being strung through the inside of your computer. I’m thinking they should just build an interface and case that allow a second power supply to be installed, so that the video cards can have their own dedicated power rather than sharing it with the entire system.

Physically, the card is pretty much the same as the reference cards out there. The oversized cooler is big, but it runs very quietly; I could hardly hear the fan spinning even under load. Venting the hot air out the back of the secondary slot will help keep the inside of your case cooler. There is one unique twist that Foxconn has added to the cooler, though. Situated underneath the outer shell are blue LEDs that light up when the computer is turned on. There isn’t any blinking action to make them annoying, so if you have a window in your case and love to show off lights, this card will emit a blue glow for you. If you know a little bit about electronics, you can adjust the lights yourself to suit your needs.

If you want to watch high-definition video on a high-definition display, you’ll be glad to know that this card is HDCP compliant, so as long as you connect it to an HDCP-compliant display, you’ll be able to enjoy Blu-ray or HD DVD movies in full resolution. I’ve been eyeing the Xbox 360 HD DVD drive for my console and PC, and this card will come in handy for HD viewing. Of course, you’ll still need software capable of playing the new formats, such as CyberLink’s PowerDVD 7. With this card, you’ll be able to enjoy smooth HD playback, as it helps decode the higher-bandwidth video.

Dual DVI connectors and various TV outs via a pigtail let you connect this card in various ways. The dual-link DVI connection is capable of a maximum resolution of 2560×1600 with 32-bit color at 60Hz. If you need to connect this card via the old VGA connector, Foxconn has included two DVI-to-VGA adapters for you.
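That 2560×1600 ceiling, incidentally, is why dual-link DVI matters here: a single-link DVI connection tops out at a 165MHz pixel clock, which this mode far exceeds (a rough figure, assuming reduced-blanking timings that pad the frame to about 2720×1646 total pixels):

$$2720 \times 1646 \times 60\ \text{Hz} \approx 268.6\ \text{MHz} > 165\ \text{MHz}$$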

Foxconn bundles their video cards with two utilities and a USB gamepad. The two utilities are RestoreIt! and Virtual Drive. RestoreIt! is similar to the more popular Symantec Ghost, while Virtual Drive lets you copy a CD or DVD to the hard drive and run it from there. These two can come in handy, and they’re free. The USB gamepad is pretty serviceable, so if you need another controller, you’ll get one with the card. It’s modeled somewhat after the PlayStation 2 style, with dual analog sticks and four trigger buttons along with the four top buttons and a directional pad.

First up is Futuremark’s 3D Mark 06.

3DMark®06 is the worldwide standard in advanced 3D game performance benchmarking. A fundamental tool for every company in the PC industry as well as PC users and gamers, 3DMark06 uses advanced real-time 3D game workloads to measure PC performance using a suite of DirectX 9 3D graphics tests, CPU tests, and 3D feature tests. 3DMark06 tests include all new HDR/SM3.0 graphics tests, SM2.0 graphics tests, AI and physics driven single and multiple cores or processor CPU tests and a collection of comprehensive feature tests to reliably measure next generation gaming performance today. We tested at the standard 1280×1024 resolution.

Quake 4 is Raven Software’s true sequel to the id classic. The game uses an improved Doom 3 engine for some great graphics. For the test, we ran a demo featuring a few enemies and some squad mates. We set the graphics quality at maximum and ran it at three different resolutions.


One of the surprise hits out of Monolith was F.E.A.R. This supernatural FPS looks incredible and really pushes a video card to its limits. For the benchmark, we ran three resolutions using the in-game benchmark with all the settings set at max.

Prey was in development for many years, but the folks at Human Head finally released the game this year. The game utilizes the Doom 3 engine like Quake 4 and features the really cool portal technology that enables some interesting gameplay. All settings were set to maximum, and three resolutions were chosen for the test.


Company of Heroes is an RTS that really pushes video cards. The game by the fine folks at Relic Entertainment is set in WWII and features deformable terrain as well as great physics. The level of detail in the game, for an RTS, is amazing. For the tests, we set everything at maximum or ultra to ensure that the card was taxed as much as possible.

While there are still the traditional anti-aliasing and anisotropic filtering modes, the GeForce 8800 introduces CSAA. We’ll test the regular modes first, and then I’ll show you how NVIDIA’s CSAA doesn’t bog the card down as much while offering some very nice anti-aliasing quality. First up is 4xAA with 8xAF.


Next up we turn it up a little with the traditional 8xAA with 16xAF.

Now we move on to NVIDIA’s latest achievement in image quality: Coverage Sampling Anti-Aliasing. I kept the anisotropic filtering at 16X for these tests and used 16x CSAA. You might be surprised by the results below.

NVIDIA’s new antialiasing technique really shines with some great performance. With this card, you can turn up everything and get some great framerates at high resolutions. It’s a truly impressive result and one of the aspects that really makes this card stand out among the others.
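The review doesn’t show how an application actually requests CSAA, but for the curious, here is a minimal sketch of allocating a 16x CSAA render target through NVIDIA’s OpenGL extension, GL_NV_framebuffer_multisample_coverage. The helper function name and the GLEW setup are our own assumptions, not anything NVIDIA or Foxconn ships:

```cpp
// Sketch: create a 16x CSAA framebuffer (16 coverage samples, 4 color samples)
// using GL_NV_framebuffer_multisample_coverage plus EXT_framebuffer_object.
// Assumes a current GL context and an initialized GLEW.
#include <GL/glew.h>

GLuint createCsaaFramebuffer(GLsizei width, GLsizei height)
{
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

    glGenRenderbuffersEXT(1, &colorRb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorRb);
    // 16 coverage samples but only 4 full color samples: NVIDIA's "16x" CSAA.
    glRenderbufferStorageMultisampleCoverageNV(
        GL_RENDERBUFFER_EXT, 16, 4, GL_RGBA8, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_RENDERBUFFER_EXT, colorRb);

    glGenRenderbuffersEXT(1, &depthRb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glRenderbufferStorageMultisampleCoverageNV(
        GL_RENDERBUFFER_EXT, 16, 4, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRb);

    // Render into fbo, then blit-resolve to the window framebuffer as usual.
    return fbo;
}
```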

The power of this card is truly impressive. When you look at the performance gains over the GeForce 7950 GT as well as the GeForce 8800 GTS, the GeForce 8800 GTX is hands down the best card to own right now. I don’t have the Radeon X1950 XTX with me to compare anymore, but I’d sure like to see how much this card would beat ATI’s current flagship by. Being able to enable NVIDIA’s new 16x AA mode and get great, playable performance at high resolutions is just outstanding. Any other card would be brought to its knees even at a lower AA mode running at 1600×1200. Foxconn gives you a card with a unique bundle and blue LEDs that light up the card. If you can afford it, it’s definitely a great purchase, and Foxconn’s card ran solid throughout testing and in general usage. I’ll be putting up a second part of this article showing how much gain you get by SLI’ing two of these bad boys. For now, Foxconn’s offering is a real winner.


It’s damn expensive but it’s damn fast. Foxconn has a unique bundle and the LEDs make this card glow. If you have the money, go pick yourself up one for the ultimate in gaming performance.

Rating: 9.5 Exquisite

* The product in this article was sent to us by the developer/company.

About Author

I’ve been reviewing products since 1997 and started out at Gaming Nexus. As one of the original writers, I was tapped to do action games and hardware. Nowadays, I work with a great group of folks here to bring you news and reviews on all things PC and console.

As for what I enjoy, I love action and survival games. I’m more of a PC gamer now than I used to be, but I still enjoy the occasional console fare. Lately, I’ve been playing a ton of retro games after building an arcade cabinet for myself and the kids. There are some old games I love to revisit, and the cabinet does a great job of bringing back that nostalgic feeling of going to the arcade.


GeForce 8800GTX: lost the throne, but retained the crown

  • Introduction
  • How it was
  • Radeon HD 2900 XT: “Suppressed Revolution”
  • 8800 GTX three years later
  • Enemies of the gray king
  • Test applications, test bench configuration
  • Overclocking 8800GTX
  • Testing
  • Summing up

Not so long ago, in November, the G80 video chip turned exactly three years old. That is an eternity for a graphics chip, and for a video card as a whole. Usually one year is enough for a once powerful and productive video chip to slip from high society to, at best, the bottom of the middle class. In three years, a manufacturer manages to refresh its video card lineup at least two or three times; a new API and new generations of CPUs may well appear, not to mention the release of a whole raft of new-generation games. If a video card becomes obsolete so quickly in a single year, what can we expect after three whole years? Often it turns into a barely useful slab of PCB, showing only a shadow of its former glory: neither new games nor high resolutions are within its power. Recall the GeForce 3 Ti, released in 2001. Was it capable of running such heavyweights as Doom 3, Half-Life 2, or Far Cry? Well, they certainly ran, if we are talking about the mere fact of working, but it is unlikely that owners of a GeForce 3 Ti or Radeon 8500 could afford high graphics settings, anti-aliasing, or anisotropic filtering.

However, we have a different situation here. Even though the GeForce 8800GTX is three years old, it not only demonstrates high FPS in the most modern games, but also lets you run high resolutions, turn on anti-aliasing, and enjoy practically free anisotropic filtering.

Moreover, about 10% of all users of the Steam online service are armed with video cards based on the GeForce 8800; that alone says a lot.


However, first things first…


To help the reader recall the dawn of the 8800GTX, here is a selection of our previous articles:

  • 04.12.2006 Comparative testing of 8800GTX and X1950XTX
  • 02/08/2007 Review of several Geforce 8800GTX models
  • 04/27/2007 FAQ on 8800GTX
  • 07/09/2007 Overview of G80 and R600 architectures
  • 08/01/2007 Overclocking, observations and tricks 8800GTX
  • 08/26/2009 HD 4870 1GB vs 8800GTX

The history of the legendary G80 dates back to 2004, when Nvidia had just begun designing a new GPU that would depart from the usual pixel/vertex pipeline arrangement and introduce a new, more flexible and more advanced architecture: a unified one.

It should be noted that work on the new architecture, embodied in the G80 (also known as the NV50), began almost immediately after the release of the NV40, the chip that gave life to the productive GeForce 6800 line, whose pipelines and GDDR3 memory brought support for DirectX 9.0c and Shader Model 3.0.

The NV40 had quite a lot of potential. Its architecture was reworked several times: first with the release of the G70, which was essentially the same NV40 “on steroids”, with an increased number of pixel and vertex pipelines, and then with the G71, on a smaller process node and at higher frequencies.

After a long exchange of blows between ATI and Nvidia, with their video cards wresting the championship belt from each other as they grew in power, the Californians finally managed to stage a revolution; a real one, of a kind not seen since the GeForce 3 Ti or the Radeon 9700. Nvidia announced the most powerful, hottest, largest, and most expensive desktop GPU in history, the G80, which was also the first DX10-compatible video chip and the first desktop GPU with a streaming architecture. (Before that, in 2005, ATI’s Xenos R500 chip was released for the Xbox 360; it has 48 superscalar processors (48×5 = 240 scalar units) and absorbed the best of the R580 and R600.)

What was so revolutionary about this huge, hot, and power-hungry chip? First, the hardware: 128 programmable stream processors that play the role of both pixel and vertex pipelines, 32 texture units, and 24(!) ROPs. Add to this a 384-bit bus and 768 megabytes of video memory, a record amount for a single desktop GPU at the time.

Secondly, the G80, as already mentioned, became the first DX10-compatible video chip, providing hardware support for Shader Model 4.0.

The unified shader architecture surpassed even the wildest expectations: the 8800GTX turned out to be two to three times faster than the former flagships! It was a phenomenal success, because in every respect Nvidia’s new generation of video cards was at least a year ahead of its time. The 8800GTX’s design was not perfect, though. The PCB was an impressive 27cm long, the chip was manufactured on a 90nm process, the card needed no fewer than two 6-pin power connectors, and a huge cooling system, covering not only the chip itself but also the power subsystem and the memory chips, made the 8800GTX extremely heavy. Then there was the separate NVIO chip, which, due to the enormous size of the G80, had to be moved outside the main die. On top of that, the full potential of the 8800GTX could only be unleashed by ultra-modern Core 2 Duo processors running at over 3GHz.

But everyone turned a blind eye to all these shortcomings: the excellent performance magically turned the 8800GTX’s disadvantages into mere quirks.

A year later, Nvidia announced a new chip, the G92, which was in fact the same G80 with cosmetic changes. Of the 24 ROPs, it was decided to keep only 16, and a 256-bit bus with higher-frequency GDDR3 memory replaced the non-standard 384-bit one. The amount of video memory was reduced from 768 megabytes to 512, and a finer 65nm process not only reduced heat output and GPU size but also allowed higher clock speeds and let NVIO be integrated back into the main chip. The release of the 8800GTS 512MB was marred only by its dubious name; in all other respects it was an excellent product. The PCB was gracefully shortened, the price was not too high, and the performance was very close to the 8800GTX. But nobody expected the rebranding that produced the 9800GTX. It was the same 8800GTS 512MB, but on a longer PCB, with a reinforced power subsystem, a new cooling system, and higher clock frequencies. For a time it became the flagship of the entire GeForce line, even though it was actually cut down compared to the 8800GTX/Ultra and often could not surpass them in performance.

Nvidia’s programmers even resorted to slowing down the 8800GTX at the driver level, thereby artificially boosting the 9800GTX’s relative performance.

Two things prevented the 8800GTX’s absolute success. The first, of course, was its rather high price. The second was more prosaic. ATI, by then already absorbed by AMD, was preparing a super-secret and, according to rumors, super-powerful weapon codenamed R600. It was ATI’s first chip with a scalar architecture and, like the G80, had been in development for a long time. An alternative, slightly lighter and modified version of it, the R500 Xenos, was by November 2006 already hard at work in the new Xbox 360, demonstrating great features and amazing performance. So a fairly large audience of AMD fans was waiting for a response, which was supposed to follow immediately after the release of the 8800GTX/GTS. Many factors spoke in the R600’s favor. First, the price: it was significantly lower than the 8800GTX’s, at about $399. Second, the hardware: a 512-bit memory bus and 320 scalar processors. It would seem the new product should have outperformed the Nvidia flagship in every respect, especially since it was half a year late, plenty of time to tighten the screws and release a product far ahead of its competitor in performance. Nvidia even counterattacked with the 8800 ULTRA, an overclocked version of the 8800GTX. Time went on, though; the R600 kept slipping, and when its moment finally came, it turned out not to be the product ATI fans had been waiting for. Besides the loyal fans, there were also neutral buyers who wanted maximum performance and were waiting for ATI’s response in order to choose the best of the best. But the miracle did not happen: the 2900XT, the video card that embodied the R600 architecture, turned out to be not all that powerful. Yes, its chip was built on an 80nm process, the PCB was much shorter than the competitor’s, the 512-bit bus provided about 100GB/s of throughput, and its 320 scalar processors worked wonders in synthetic tests. But if everything had really been so great, you, dear readers, would probably not be reading these lines.

As it turned out, the only real trump card of the 2900XT was its memory bus. 512 bits is a lot, and even with fairly low-frequency GDDR3 memory there was excellent bandwidth; the video card never lacked for it. However, only 16 raster operation units and, most importantly, only 16 texture units (the 8800GTX/ULTRA had twice as many) left the 2900XT extremely vulnerable in texture-heavy scenes, resulting in severe FPS drops. In addition, due to peculiarities of the R600 architecture, video cards based on the HD 2900XT fell hard in anti-aliased modes: the drop reached 40-45%, while the competitor’s did not exceed 25%. The scalar processors (of which there were actually 64, times 5) showed good speed, especially in scenes rich in special effects (shader work), but they could not outweigh the card’s two main drawbacks and provide a tangible advantage. All the 2900XT was fit for was the role of a rival to the 8800GTS 640MB. And even though that contest went reasonably well, the timely release of the 8800GTS 320, with performance almost identical to the 8800GTS 640MB and an extremely tempting price, reduced the 2900XT’s few chances of success to almost zero.

The long-awaited revolution had failed; the 8800GTX/Ultra remained the most powerful video card.

A little later, exactly two months before the announcement of the RV670 (a slightly lightened R600 built on a 55nm process), ATI put its remaining R600 chips under the knife, releasing two fairly powerful and very inexpensive products: the HD 2900PRO and HD 2900GT. The first differed from the original XT only in its 256-bit bus and slightly lower frequencies (although the first versions of the 2900PRO had a 512-bit bus), while the second, weaker 2900GT lost 80 scalar processors and 4 texture units at the hardware level and received even lower clock speeds. The result was an attractive price ($249 and $199, respectively) and a level of performance surpassing the entire midrange, including the 8800GTS 320 and 8600GTS.

It’s worth noting that despite the failure of the R600 and the dubious success of the RV670, the scalar architecture should not be considered a failure, or less refined than Nvidia’s streaming (or shader, if you like) architecture. The release of the RV770 showed that eliminating its weak points, namely slow AA operation (by redesigning the algorithm) and the small number of texture units, could unleash the architecture’s full potential. Gradually building up the execution units (scalar, texture, and raster) makes an almost linear increase in performance possible (look at the HD 5870). In addition, the architecture’s design allows the chip to operate at higher clock frequencies than its competitor’s (compare, at the very least, the RV790 and G200b), although the shader unit of Nvidia’s chips runs at twice the frequency of the rest of the GPU.

The WinFast 8800GTX we are reviewing today has been brought back from the other world. As many have already guessed, we are talking about a well-known problem: some of the huge chip’s contacts coming unsoldered from the PCB, which leads to artifacts, incorrect operation of the video card, or even its “death”, i.e., no signal on the monitor. Although a small correction is in order: this leads not so much to death as to a coma. There are several ways to bring an 8800GTX back from the other world, but all of them come down to warming up the chip so that the loose contacts re-attach:

  • Warm up the chip under a special IR lamp. The high temperature lets the tin BGA balls soften and re-attach to the PCB.
  • A more domestic method, and the one most people use, is to warm up the video card in a conventional oven. The effect is the same: the contacts are soldered back in place.

But these methods have two big drawbacks. First, a powerful IR lamp can scorch the PCB. Second, heating in an oven can melt the plastic parts of the video card, such as the DVI and TV-out connectors, not to mention that board components can easily “float” at high temperature: the solder melts, and a stray transistor or memory chip can slide off its rightful place. And there is no guarantee that such a warm-up won’t unsolder an even greater number of contacts on the chip or the board.

So what to do? The answer is quite simple: the detached contacts should not be re-soldered, but simply pressed back down. This is not difficult. Swap the cooling system for something inexpensive yet efficient (a Zalman VF1000, for example), screw it to the board, and tighten the screws firmly. The heatsink will press the huge chip against the board, and the contacts will naturally fall back into place. You can count on this method only if you are sure the problem lies in the chip’s detached contacts and not in a failed memory chip or a burned-out NVIO. To check, it’s enough to get hold of a simple cooler along the lines of a Zalman VF700. Naturally, it won’t be anywhere near enough to fully cool the chip, but it will suffice to boot Windows and run a short 3D test.

After wiping off a thick layer of dust and giving the board a proper alcohol bath followed by drying, it regains its 2006 gloss. The native cooling system is no longer needed; it adds nothing in efficiency, so a replacement had to be found.

One was found quite quickly: a set of branded Zalman heatsinks made specifically for the 8800GTX/8800GTS, the ZALMAN ZM-RHS88, plus the excellent Z-Machine GV1000 cooler. An excellent choice for the 8800GTX; not the cheapest, of course, but one of the most effective.

However, let’s move from theory to the long-awaited practice. Three years have passed; let’s take a look at what the 8800GTX is capable of today, in the latest games at two resolutions, 1280×1024 and 1680×1050, including modes with AA and AF. The following video cards will serve as its rivals:

Radeon HD 2900XT: the “suppressed revolution”, inferior to the 8800GTX from birth. Let’s see how the “failed answer to the G80” holds up today, and by how much it trails our hero in modern games.


Radeon HD 2900Pro: almost the same card, but with a 256-bit bus. We include it out of pure curiosity, to see how far-sighted ATI’s engineers were when they gave the 2900XT a 512-bit bus. Does the higher memory bandwidth provide an advantage? Let’s try to find out.


GeForce 9800GTX/GTX+: a direct competitor to the 8800GTX. It came out almost a year and a half later and ended up cut down in memory bandwidth, memory size, and raster units, but it has higher clock frequencies. So which matters more: hardware muscle or a frequency advantage?


HD 4850: another direct competitor for today’s hero. This is what the R600 was supposed to be but, alas, was not.


HD 4870: we add this video card as a reference point, providing the highest performance among today’s participants. The 8800GTX will be chasing it.

8800GTX: it will perform in two roles. In the first, it is a stock 8800GTX; in the second, an 8800GTX overclocked to 8800 ULTRA clock speeds. Well, at least we’ll try to push it that far…


  • 3DMark 2005, 3DMark 2006, 3DMark Vantage. Unremarkable synthetic tests, used year after year for a rough estimate of a video card’s power. Why rough? Because synthetic results do not always match the results of gaming applications. Still, some conclusions can be drawn.
  • Crysis is the most demanding game to date. It has no rivals; it is the minimum-performance benchmark that any video card should aspire to. The higher the FPS, the more headroom a particular video card has.
  • FarCry 2 is a beautiful, special-effects-filled game that squeezes all the juice out of video cards. Not Crysis, of course, but still one of the most demanding games to date.
  • Batman: Arkham Asylum is without a doubt an outstanding game that has won the hearts of many players. It has gorgeous graphics that show off all the beauty of Unreal Engine 3. Since a huge number of games are now released on this engine, we decided to include it in the testing; it is one of the most demanding games on this engine.
  • Risen is not the most advanced game in terms of graphics, but it still has some nice touches, such as good shadows and good lighting.
  • Resident Evil 5 is a multi-platform game, but one with amazing graphics. To date, it is one of the most beautiful games around.
  • GTA 4 is that very Grand Theft Auto 4. The latest patches smoothed out some of its shortcomings and significantly increased FPS, but the game largely remains quite resource-intensive. Many users, when upgrading, aim for the highest frame rate in this particular game.

All benchmarks (3DMark) were run at standard settings; in games, the High or Ultra High value (i.e., the maximum possible) was used for all graphics settings. Additionally, some games were tested in 4x AA / 16x AF mode.

Test stand configuration:

  • Processor — Core i7 920 3400 MHz
  • Motherboard — Asus P6T
  • RAM — 3 x 1 GB DDR3 RAM 1333 MHz
  • Hard Drive — WD 500 GB HDD
  • Power Supply — Thermaltake 650 Watt

Operating system — Windows Vista Ultimate SP2 32 bit

It was decided to overclock the 8800GTX exactly to the clock frequencies of the 8800 ULTRA. Why not further? The answer is simple: first, not every 8800GTX can be overclocked to the ULTRA version’s frequencies, and reaching them is already a feat; second, this will be quite enough, as we get a significant speed increase mainly from the raised shader clock.

And so, our hero reached the ULTRA version’s frequencies (612/1512/1080MHz) without any problems, which is very, very good.
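Relative to the stock 576/1350/900MHz clocks listed in the spec table below, that works out to (our arithmetic):

$$\frac{612}{576} \approx +6\%\ \text{core}, \qquad \frac{1512}{1350} = +12\%\ \text{shader}, \qquad \frac{1080}{900} = +20\%\ \text{memory}$$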



Card                      2900Pro     2900XT      HD4850       HD4870       9800 GTX    9800 GTX+   8800GTX     8800 ULTRA
Video chip                R600        R600        RV770        RV770        G92         G92         G80         G80
Process                   80nm        80nm        55nm         55nm         65nm        65nm        90nm        90nm
Transistors, million      700         700         956          956          754         754         681         681
Chip area, mm²            420         420         256          256          330         330         484         484
Unified processors        320 (64×5)  320 (64×5)  800 (160×5)  800 (160×5)  128         128         128         128
ROPs                      16          16          16           16           16          16          24          24
Texture units             16          16          40           40           32 (64)     32 (64)     32          32
GPU frequency, MHz        601         750         625          750          675         765         576         612
Shader clock, MHz         601         750         625          750          1688        1898        1350        1512
Memory frequency, MHz     1600        1660        2000         3600         2200        2200        1800        2160
Memory type               GDDR3       GDDR3       GDDR3        GDDR5        GDDR3       GDDR3       GDDR3       GDDR3
Memory size, MB           512         512         512          512          512         512         768         768
Memory bus, bit           256         512         256          256          256         256         384         384