
G80: NVIDIA GeForce 8800 GTX

Written by

Tim Smalley

November 8, 2006 | 18:59

Tags: #8800 #benchmark #evaluation #experience #g80 #gameplay #geforce #gtx #performance #pictures #review #score

Companies: #nvidia

1 — Introduction | 2 — Under the heatsink | 3 — Background — Traditional Pipelines | 4 — Why Unify? | 5 — DirectX 10 Highlights | 6 — Stream Processing Architecture | 7 — ROPs (Pixel Output Engines) | 8 — Coverage Sampling AA | 9 — Anisotropic Filtering Quality 1 | 10 — Anisotropic Filtering Quality 2 | 11 — Test Setup | 12 — Elder Scrolls IV: Oblivion | 13 — Company Of Heroes | 14 — Battlefield 2142 | 15 — F.E.A.R. Extraction Point | 16 — Ghost Recon Advanced Warfighter | 17 — Half-Life 2: Episode One | 18 — Power Consumption & Heat | 19 — Final Thoughts

The Elder Scrolls IV: Oblivion

Publisher: 2K Games

We used the latest addition to the impressive Elder Scrolls series, The Elder Scrolls IV: Oblivion, with the 1.1 patch applied. It uses the Gamebryo engine and features DirectX 9.0 shaders and the Havok physics engine, while Bethesda uses SpeedTree to render the trees. The world is made up of trees, stunning landscapes and lush grass, and features High Dynamic Range (HDR) lighting and soft shadowing. If you want to learn more about The Elder Scrolls IV: Oblivion, we recommend giving our graphics and gameplay review a read.

The graphics options are hugely comprehensive, with four screens of options available for you to tweak to your heart’s content. There is also the configuration file, but we’ve kept things as simple as possible by leaving it in its out-of-the-box state. For our testing, we did several manual run-throughs covering a variety of scenarios, ranging from long draw distances and indoor areas to large amounts of vegetation. The vegetation run-through is the result we have shown, as it proved to be the most stressful — we walked up the hill to Kvatch, where the first Oblivion gate is located.

________________________________________________________________________________

24″ widescreen gaming:

NVIDIA GeForce 8800 GTX / ATI Radeon X1950 XTX / BFG Tech GeForce 7950 GX2

Using a 24″ widescreen monitor, the GeForce 8800 GTX was easily able to step up to the plate — this was simply the best gaming experience we’ve ever had in Oblivion at 1920×1200. However, in the driver that we used for testing, there was a slight bug, where we would occasionally see blotches of colour flashing on the screen.

We were in discussion with NVIDIA for a few days about this bug and were given a fix last night — the fix worked, and performance appeared unchanged with the new driver. NVIDIA has assured us that the fix will appear in the final driver released on its home page later today. We set the game’s settings to their maximum, including enabling HDR, before turning on 8xQ AA with transparency multi-sampling for good measure. Add 16xAF on top of that, and you’ve got one hell of a gaming experience.

The GeForce 7950 GX2 handled itself reasonably well, but it was simply no match for the GeForce 8800 GTX. It is also lacking anti-aliasing here, because the card doesn’t support HDR and anti-aliasing at the same time. On top of that, it suffers from poor texture filtering quality — NVIDIA’s texture filtering algorithms on GeForce 7-series hardware are angle-dependent, and texture shimmering can be very apparent in Oblivion, especially when you’re walking along the many paths and roads in the world.

ATI’s Radeon X1950 XTX delivered a better gaming experience than the GeForce 7950 GX2, simply because it was capable of higher image quality — quality the GeForce 7950 GX2 can’t match. The Radeon X1950 XTX played the game with similar settings to the GeForce 7950 GX2, but with 2xAA and 16x HQ AF. With that said, ATI’s image quality crown is no more — the GeForce 8800 GTX has better anisotropic filtering, with absolutely no angle dependency in its filtering algorithm.

________________________________________________________________________________

30″ widescreen gaming:

NVIDIA GeForce 8800 GTX / ATI Radeon X1950 XTX / BFG Tech GeForce 7950 GX2

At 2560×1600, the GeForce 8800 GTX just kept on going — we were able to leave the in-game detail settings at their maximum and leave 2xAA with transparency supersampling enabled. Despite the relatively low anti-aliasing setting, the game looked simply awesome at 2560×1600 — so much so that I took a few hours out and spent a good amount of time just ogling how gorgeous it looks at high resolution with anti-aliasing enabled.

With both the Radeon X1950 XTX and GeForce 7950 GX2, we had to turn grass off altogether and also lower shadow quality. The Radeon X1950 XTX was slower than the GeForce 7950 GX2, and NVIDIA’s previous flagship card was capable of playing the game at higher in-game detail settings. However, there is no getting away from the GX2’s poor anisotropic filtering quality.
Both of the other cards suffer from the lack of grass and shadow detail, and neither can handle anti-aliasing at this resolution. Oblivion is a shader-intensive game, and the unified pipelines on the 8800 really gobble this up.
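To put the jump from the 24″ panel to the 30″ panel in perspective, here is a quick back-of-the-envelope pixel count (our own arithmetic, not from the review; the variable names are ours):

```python
# Pixels the card must shade per frame at each test resolution.
pixels_24in = 1920 * 1200   # 24-inch widescreen panel
pixels_30in = 2560 * 1600   # 30-inch widescreen panel

print(pixels_24in)  # 2304000
print(pixels_30in)  # 4096000

# 2560x1600 pushes roughly 1.78x as many pixels as 1920x1200,
# which is why the AA setting had to come down a notch.
print(round(pixels_30in / pixels_24in, 2))  # 1.78
```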


NVIDIA GeForce 8800 GT vs ATI Radeon X1950 XTX


Comparative analysis of the NVIDIA GeForce 8800 GT and ATI Radeon X1950 XTX videocards across all known characteristics in the following categories: Essentials, Technical info, Video outputs and ports, Compatibility, dimensions and requirements, API support, Memory, Technologies.
Benchmark performance analysis: PassMark — G3D Mark, PassMark — G2D Mark, GFXBench 4.0 — T-Rex (Frames), GFXBench 4.0 — T-Rex (Fps).

NVIDIA GeForce 8800 GT



vs

ATI Radeon X1950 XTX


Differences

Reasons to consider the NVIDIA GeForce 8800 GT

  • Videocard is newer: launched 1 year later
  • 2.3x higher core clock speed: 1500 MHz vs 650 MHz
  • 3.2x higher texture fill rate: 33.6 GTexel/s vs 10.4 GTexel/s
  • A newer manufacturing process allows for a more powerful, yet cooler-running videocard: 65 nm vs 90 nm
  • Around 19% lower typical power consumption: 105 Watt vs 125 Watt






Launch date: 29 October 2007 vs 17 October 2006
Core clock speed: 1500 MHz vs 650 MHz
Texture fill rate: 33.6 GTexel/s vs 10.4 GTexel/s
Manufacturing process technology: 65 nm vs 90 nm
Thermal Design Power (TDP): 105 Watt vs 125 Watt
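The ratios quoted in the list above follow directly from the raw figures in the table. As a quick sanity check (our own arithmetic, taking the table's numbers as given):

```python
# Table figures, taken as given (the 1500 MHz value is the 8800 GT's
# shader clock as listed by the comparison site).
core_8800gt, core_x1950 = 1500, 650    # MHz
fill_8800gt, fill_x1950 = 33.6, 10.4   # GTexel/s
tdp_8800gt, tdp_x1950 = 105, 125       # Watt

print(round(core_8800gt / core_x1950, 1))  # 2.3  (clock advantage)
print(round(fill_8800gt / fill_x1950, 1))  # 3.2  (fill-rate advantage)

# The site's "around 19% lower" figure is computed relative to the
# 8800 GT's TDP: the X1950 XTX draws about 19% more power.
print(round((tdp_x1950 / tdp_8800gt - 1) * 100))  # 19
```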

Reasons to consider the ATI Radeon X1950 XTX

  • 2.2x higher memory clock speed: 2000 MHz vs 900 MHz


Memory clock speed 2000 MHz vs 900 MHz

Compare benchmarks


GPU 1: NVIDIA GeForce 8800 GT
GPU 2: ATI Radeon X1950 XTX


Name: NVIDIA GeForce 8800 GT vs ATI Radeon X1950 XTX
PassMark — G3D Mark: 516
PassMark — G2D Mark: 81
GFXBench 4.0 — T-Rex (Frames): 3346
GFXBench 4.0 — T-Rex (Fps): 3346

Compare specifications (specs)

NVIDIA GeForce 8800 GT vs ATI Radeon X1950 XTX
Architecture: Tesla vs R500
Code name: G92 vs R580+
Launch date: 29 October 2007 vs 17 October 2006
Launch price (MSRP): $349 vs $449
Place in performance rating: 1200 vs not rated
Type: Desktop vs Desktop
Core clock speed: 1500 MHz vs 650 MHz
CUDA cores: 112 vs n/a
Floating-point performance: 336.0 GFLOPS vs n/a
Manufacturing process technology: 65 nm vs 90 nm
Maximum GPU temperature: 105 °C vs n/a
Pipelines: 112 vs n/a
Texture fill rate: 33.6 GTexel/s vs 10.4 GTexel/s
Thermal Design Power (TDP): 105 Watt vs 125 Watt
Transistor count: 754 million vs 384 million
Audio input for HDMI: S/PDIF vs n/a
Display Connectors: 2x DVI, 1x S-Video, Dual Link DVI, HDTV vs 2x DVI, 1x S-Video
Maximum VGA resolution: 2048×1536 vs n/a
Multi monitor support: n/a
Bus support: PCI-E 2.0 vs n/a
Interface: PCIe 2.0 x16 vs PCIe 1.0 x16
Length: 9″ (22.9 cm) vs 230 mm
SLI options: 2-way vs n/a
Supplementary power connectors: 6-pin & 8-pin vs 1x 6-pin
DirectX: 10.0 vs 9.0c
OpenGL: 2.1 vs 2.0
Maximum RAM amount: 512 MB vs 512 MB
Memory bandwidth: 57.6 GB/s vs 64.0 GB/s
Memory bus width: 256 Bit vs 256 Bit
Memory clock speed: 900 MHz vs 2000 MHz
Memory type: GDDR3 vs GDDR4
3D Vision: n/a
CUDA: n/a
High Dynamic-Range Lighting (HDRR): 128-bit
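The memory-bandwidth rows above can be derived from effective memory clock and bus width. A minimal sketch, assuming the 8800 GT's 900 MHz GDDR3 doubles to an 1800 MHz effective transfer rate while the 2000 MHz quoted for the X1950 XTX is already its effective GDDR4 rate (the function name is ours):

```python
def bandwidth_gb_s(effective_mhz: float, bus_bits: int) -> float:
    """Peak memory bandwidth: transfers/s times bus width in bytes."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(bandwidth_gb_s(1800, 256))  # GeForce 8800 GT:  57.6 GB/s
print(bandwidth_gb_s(2000, 256))  # Radeon X1950 XTX: 64.0 GB/s
```

Both cards have the same 256-bit bus, so the X1950 XTX's higher effective memory clock is the whole reason its bandwidth figure is larger.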

GeForce 8800 Ultra vs Radeon HD 2900 XTX in 3DMark’06

Thanks to Dell, we already know the approximate frequencies of the GeForce 8800 Ultra (650/2160 MHz), whose announcement, according to updated data, should take place in mid-May. Because OEMs sometimes stick to more conservative frequencies for the sake of power control, Dell’s specifications for the GeForce 8800 Ultra are subject to change. For example, The Inquirer reports that the frequencies of the GeForce 8800 Ultra will be 675/2350 MHz; perhaps our colleagues are referring to an overclocked modification from one of NVIDIA’s partners. As for the shader domain frequency, it will not be increased as much as predicted — from 1350 to 1550 MHz. It is reported that the video card will be equipped with a cooler reminiscent of the GeForce 7800 GTX cooling system.

Despite the small predicted gap, it is the GeForce 8800 Ultra and Radeon HD 2900 XTX that will fight for their manufacturers’ honour in the upper price segment. According to the Fudzilla website, the Radeon HD 2900 XTX will operate at 800/2200 MHz; recall that the Radeon HD 2900 XT operates at 742/1650 MHz. The older video card has 1 GB of GDDR4 memory, while the younger one makes do with 512 MB of GDDR3. Although the same source previously reported that the Radeon HD 2900 XTX could be delayed until the third quarter, the presence of engineering samples in the hands of observers suggests that the formal announcement of the flagship version of the R600 will take place in mid-May. Bulk retail deliveries, however, may be delayed.

As for the notorious lag of the Radeon HD 2900 XTX behind the Radeon HD 2900 XT, it can be partially eliminated through driver optimisation. Fudzilla has performance data for the GeForce 8800 Ultra and Radeon HD 2900 XTX in 3DMark’06, obtained using a Core 2 Extreme QX6800 (2.93 GHz) and Catalyst 8.361 RC4 drivers.

  • GeForce 8800 Ultra (650/2160 MHz): approximately 14,000 points;
  • Radeon HD 2900 XTX (800/2200 MHz): approximately 13,500 points;
  • Radeon HD 2900 XT (845/1990 MHz, overclocked): 14,005 points.

The result for the overclocked Radeon HD 2900 XT was obtained by an authoritative American website yesterday. Taking overclocking into account, the R600 version with 512 MB of GDDR3 memory can actually be faster than the version with 1 GB of GDDR4. The price of the latter has not yet been determined, but hardly anyone will dare to overpay for the GeForce 8800 Ultra for the sake of an advantage of 500 3DMark’06 points. We should stress that the nominal frequencies of the GeForce 8800 Ultra may yet be increased, but no one is talking about lowering its $999 price.
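Putting the rumoured 3DMark'06 figures quoted above side by side (the scores are the ones reported in the text; the percentage arithmetic is ours):

```python
# Rumoured 3DMark'06 scores as quoted above.
scores = {
    "GeForce 8800 Ultra (650/2160 MHz)": 14000,
    "Radeon HD 2900 XTX (800/2200 MHz)": 13500,
    "Radeon HD 2900 XT (845/1990 MHz, overclocked)": 14005,
}

gap = (scores["GeForce 8800 Ultra (650/2160 MHz)"]
       - scores["Radeon HD 2900 XTX (800/2200 MHz)"])
print(gap)                          # 500 points
print(round(gap / 13500 * 100, 1))  # 3.7 (% advantage)
```

A gap of roughly 3.7% in a synthetic benchmark is well within what driver revisions routinely shift, which is why the article treats the ordering as provisional.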


Colleagues report that these tests were carried out using the latest revision of the Radeon HD 2900 XTX, which already has improved performance. As you can see, overclocking brings the Radeon HD 2900 XT closer to the Radeon HD 2900 XTX in this particular test. We hope the latter will overclock well too, which would restore a performance gap between them.

Detailed photos of the GeForce 8800 GTX: studying the anatomy

Collective efforts to discuss yesterday’s news have led us to identify the Asus video card pictured above as the reference-design GeForce 8800 GTS. It is shorter than the GeForce 8800 GTX, has one power connector instead of two, and has four slots in the cooler casing instead of five. These two video cards have other differences as well, which we will discuss below.

The pages of this Asian forum contain new photos of the GeForce 8800 GTX and GeForce 8800 GTS video cards, which served as a source of some unexpected revelations.


First of all, the dimensions of the G80 video chip are impressive! Yes, we knew that accommodating some 700 million transistors on a 0.09-micron process would require a sufficiently large chip, but seeing once is still better than hearing a hundred times :). The chip has a protective edging and a heat spreader.

Note that the first photo shows the GeForce 8800 GTS video card: it is shorter and has one power connector and one MIO connector for SLI operation.

By the way, forum participants have all sorts of guesses about the purpose of the chip with the green substrate to the left of the G80: a dedicated chip for calculating «physics», part of the memory controller, a bridge for AGP support, or simply a previous-generation video chip attached for comparison. What it really is, we will probably find out soon. In the GeForce 8800 GTX photo below, this chip is simply smeared over with black:


The GeForce 8800 GTX video card is about 280 mm long — this can be judged from the scale of the image. On it you can find two six-pin power connectors and two MIO connectors for SLI operation. Evidently, NVIDIA is preparing this video card to work in Quad SLI bundles of four single graphics cards.


This is what a GeForce 8800 GTX video card looks like with a reference design cooler — five slots in the casing. Power adapters with four Molex connectors dangle nearby.


There are no memory chips on the back of the printed circuit board. The aluminum heat sink has two heat pipes.


By the way, on the rear panel of the video card you can find a socket for connecting an external power supply. Evidently, it is provided in case the system unit’s power supply «cannot cope» with this video card.