
NVIDIA’s GeForce 6200 & 6600 non-GT: Affordable Gaming

by Anand Lal Shimpi on October 11, 2004 9:00 AM EST

Index: NV4x’s Video Processor – What Happened? | The Contenders | Power Consumption | The Test | Doom 3 Performance | Half Life 2 (Source) Visual Stress Test | Star Wars Battlefront Performance | The Sims 2 Performance | Unreal Tournament 2004 Performance | Battlefield Vietnam Performance | Halo Performance | FarCry Performance | Anti-Aliasing Performance | Anti-Aliasing Image Quality | Anisotropic Filtering Performance | Anisotropic Filtering Image Quality | Final Words

Special thanks to Newegg for supplying hardware for this comparison test.

Although it’s seeing very slow adoption among end users, PCI Express platforms are getting out there and the two graphics giants are wasting no time in shifting the competition for king of the hill over to the PCI Express realm.

ATI and NVIDIA have both traded shots in the mid-range with the release of the Radeon X700 and GeForce 6600. Today, the battle continues in the entry-level space with NVIDIA’s latest launch — the GeForce 6200.


The GeForce 6 series is now composed of 3 GPUs: the high end 6800, the mid-range 6600 and now the entry-level 6200. True to NVIDIA’s promise of one common feature set, all three of the aforementioned GPUs boast full DirectX 9 compliance, and thus, can all run the same games, just at different speeds.

What has NVIDIA done to make the 6200 slower than the 6600 and 6800?

For starters, the 6200 features half the pixel pipes of the 6600, and 1/4 that of the 6800. Next, the 6200 will be available in two versions: one with a 128-bit memory bus like the 6600 and one with a 64-bit memory bus, effectively cutting memory bandwidth in half. Finally, NVIDIA cut the core clock on the 6200 down to 300MHz as the final guarantee that it would not cannibalize sales of their more expensive cards.

The 6200 is an NV43 derivative, meaning it is built on the same 0.11-micron (110nm) process on which the 6600 is built. In fact, the two chips are virtually identical, with the 6200 having only 4 active pixel pipelines on its die. There is one other architectural difference between the 6200 and the rest of the GeForce 6 family, and that is the lack of any color or Z-compression support in the memory controller. Color and Z-compression are wonderful ways of reducing the memory bandwidth overhead of enabling technologies such as anti-aliasing. So, without support for that compression, we can expect the 6200 to take a bigger hit when turning on AA and anisotropic filtering. The saving grace is that the 6200 doesn’t have the fill rate or the memory bandwidth to run most games at higher resolutions anyway, so those who buy the 6200 won’t be playing at resolutions where the lack of color and Z-compression would really matter with AA enabled. We’ll investigate this a bit more in our performance tests.


Here’s a quick table summarizing what the 6200 is and how it compares to the rest of the GeForce 6 family:





 GPU  Manufacturing Process  Vertex Engines  Pixel Pipelines  Memory Bus Width
GeForce 6200 0.11-micron 3 4 64/128-bit
GeForce 6600 0.11-micron 3 8 128-bit
GeForce 6800 0.13-micron 6 16 256-bit


The first thing to notice here is that the 6200 supports either a 64-bit or 128-bit memory bus, and as far as NVIDIA is concerned, the company will not be distinguishing cards with the 64-bit configuration from those with the 128-bit one. While NVIDIA insists that they cannot force their board partners to label the two configurations differently, we’re more inclined to believe that NVIDIA would simply like all 6200-based cards to be known as a GeForce 6200, regardless of whether they have half the memory bandwidth. NVIDIA makes a "suggestion" to their card partners to add the 64-bit or 128-bit designation somewhere on their boxes, model numbers or websites, but the suggestion goes no further than just being a suggestion.

The next source of variability is clock speed. NVIDIA has "put a stake in the ground" at 300MHz as the desired clock speed for 6200 GPUs regardless of configuration, and add-in board vendors have little reason to clock their 6200s any differently, since they are all paying for a 300MHz part. The real variability comes with memory speeds. The 6200 only supports DDR1 memory and is spec’d to run it at 275MHz (an effective 550MHz). As we’ve seen in the past, however, this is only a suggestion; it is up to the manufacturers whether or not they will use cheaper memory.
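To put numbers on the difference between the two memory configurations, here is a quick back-of-the-envelope sketch (Python), assuming the reference 550MHz effective memory clock; as noted above, actual retail cards may ship with slower memory:

    # Rough theoretical memory bandwidth for the two GeForce 6200 configurations,
    # assuming the reference 275MHz (550MHz effective) DDR memory clock.
    def bandwidth_gb_s(effective_mhz, bus_bits):
        # bytes per second = transfers per second x bus width in bytes
        return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

    for bus_bits in (128, 64):
        print(f"{bus_bits}-bit bus: {bandwidth_gb_s(550, bus_bits):.1f} GB/s")
    # 128-bit bus: 8.8 GB/s
    # 64-bit bus: 4.4 GB/s

In other words, the 64-bit boards give up half of an already modest memory bandwidth budget, which is why clear labeling matters.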

NVIDIA is also only releasing the 6200 as a PCI Express product — there will be no AGP variant at this point in time. The problem is that the 6200 is a much improved architecture compared to the current entry-level NVIDIA card in the market (the FX 5200), yet the 5200 is still selling quite well as it is not really purchased as a hardcore gaming card. In order to avoid cannibalizing AGP FX 5200 sales, the 6200 is kept out of competition by being a strictly PCI Express product. While there is a PCI Express version of the FX 5200, its hold on the market is not nearly as strong as the AGP version, so losing some sales to the 6200 isn’t as big of a deal.

In talking about AGP versions of recently released cards, NVIDIA has given us an update on the status of the AGP version of the highly anticipated GeForce 6600GT. We should have samples by the end of this month and NVIDIA is looking to have them available for purchase before the end of November. There are currently no plans for retail availability of the PCI Express GeForce 6800 Ultras — those are mostly going to tier 1 OEMs.

The 6200 will be shipping in November and what’s interesting is that some of the very first 6200 cards to hit the street will most likely be bundled with PCI Express motherboards. It seems like ATI and NVIDIA are doing a better job of selling 925X motherboards than Intel these days.

The expected street price of the GeForce 6200 is between $129 and $149 for the 128-bit 128MB version. This price range is just under that of the vanilla ATI X700 and the regular GeForce 6600 (non-GT), both of which are included in our performance comparison — so in order for the 6200 to truly remain competitive, its street price will have to be closer to the $99 mark.

The direct competition to the 6200 from ATI comes from the PCI Express X300 and X300SE (128-bit and 64-bit versions, respectively). ATI is at a bit of a disadvantage here because the X300 and X300SE are still based on the old Radeon 9600 architecture rather than a derivative of the X800 and X700. ATI is undoubtedly working on a 4-pipe version of the X800, but for this review, the advantage definitely goes to NVIDIA.


NVIDIA’s GeForce 6200 graphics processor

When NVIDIA first announced the GeForce 6800 series, the company boasted that its new graphics architecture would scale down to mid-range and value markets by the end of the year. GeForce 6 trickle-down has already spawned the GeForce 6600 series, whose performance and feature set are a revelation for the mid-range market. Today, NVIDIA extends the GeForce 6 series even further into the value segment with the GeForce 6200. This four-pipe GeForce 6 brings Shader Model 3.0 support to graphics cards in and around the $129 mark, giving cash-strapped gamers an intriguing new low-end option.

How does the GeForce 6200 fare against competition that includes ATI’s budget Radeons and Intel’s Graphics Media Accelerator 900? Read on to find out.

The GeForce 6200
The GeForce 6200 graphics chip is a four-pipe derivative of the NV43 GPU that powers the GeForce 6600 series. Like the rest of the GeForce 6 line, the 6200 utilizes a fragment crossbar to link pixel shaders and raster operators (ROPs) within the pixel pipeline. Rather than being bound to a single pixel shader, ROPs are free to tackle output from any of the chip’s pixel shaders. This rather promiscuous arrangement allows NVIDIA to pair eight pixel shaders with only four ROPs on the GeForce 6600, saving transistors without catastrophically bottlenecking performance. With the GeForce 6200, NVIDIA pairs four pixel pipes with four ROPs. There’s no transistor savings, but the fragment crossbar may offer a clock-for-clock performance advantage over more traditional designs.

Like the GeForce 6600 series, the GeForce 6200 has full support for DirectX 9, Shader Model 3.0, and 32-bit floating point data types. The 6200 packs three vertex shader units, just like the 6600, as well. The two also share a programmable video processor that we’ll have more to tell you about soon. The GeForce 6200 differs from the rest of the GeForce 6 line when it comes to antialiasing, though: its Intellisample 3.0 implementation lacks color and Z-compression. Since low-end cards generally lack the pixel pushing horsepower to make antialiasing viable in games, the lack of Intellisample color and Z-compression isn’t a major flaw.

Looks like NV43 to me

The GeForce 6200 GPU is manufactured by TSMC on a 0.11-micron fabrication process. The die measures 12mm x 13mm according to my tape measure, making it identical in size to the NV43 GPU that powers the GeForce 6600. Isn’t that interesting? When we asked NVIDIA for the 6200’s code name according to the “NV4x” convention, the company would only say the chip was a “derivative” of the NV43. It’s entirely possible that the GeForce 6200 GPU is simply an NV43 with four pixel pipes and Intellisample color and Z-compression disabled. If this is the case, we may see enterprising enthusiasts attempt to unlock the extra pipelines with hardware or software modifications.

Unlike other members of the GeForce 6 line, there will only be one version of the GeForce 6200—no GT, XT, Ultra, or Turbo Golden Sample Special Edition. Clock speeds for the 6200 aren’t written in stone, though. NVIDIA recommends a core clock of 300MHz, but board vendors are free to run faster. There’s also flexibility on the memory clock front. Our GeForce 6200 reference card has DDR memory clocked at an effective 500MHz, which thanks to the 6200’s 128-bit memory bus, gives the card an even 8.0GB/sec of memory bandwidth. Board manufacturers will be free to run higher or lower memory clocks, and they’ll also be able to make cheaper cards that have a narrower 64-bit path to memory.

Our GeForce 6200 reference card

As you can see, the GeForce 6200 reference card is a PCI Express affair. NVIDIA doesn’t plan to make an AGP version of the GeForce 6200, leaving the existing GeForce FX products for AGP systems. Since PC builders are already producing lots of machines based on Intel’s 900-series chipsets and PCI Express chipsets are coming soon for the Athlon 64, there should be a burgeoning market for PCI Express graphics cards in the coming months.

The GeForce 6200 is primarily targeted at major OEMs and system integrators, so retail products may not make it to store shelves at places like Best Buy, CompUSA, or Fry’s any time soon. Cards should be available from online retailers for between $129 and $149, if not less. Expect 64-bit flavors of the GeForce 6200 to be even cheaper and, hopefully, clearly marked.

Finally, notice that the GeForce 6200 card lacks “golden fingers.” The 6200 doesn’t support SLI, so you won’t be able to team up two cards in a single system.

 

Our testing methods
All tests were run three times, and the results were averaged, using the following test systems.

Processor Intel Pentium 4 520 2.8GHz
Front-side bus 800MHz (200MHz quad pumped)
Motherboard Gigabyte GA-8I915G-MF
North bridge Intel 915G
South bridge Intel ICH6
Chipset drivers 6.0.1.1002
Memory size 1GB (2 DIMMs)
Memory type OCZ PC3200 EL Platinum Rev 2 DDR SDRAM at 400MHz
CAS latency 2
Cycle time 5
RAS to CAS delay 2
RAS precharge 2
Hard drives Western Digital Raptor WD360GD 37GB
Audio ICH6/ALC850
Graphics ATI Radeon X600 Pro, ATI Radeon X300, NVIDIA GeForce 6200, Intel GMA 900
Graphics drivers CATALYST 4.10 hotfix (ATI), ForceWare 66.81 (NVIDIA), 14.7 (Intel)
OS Microsoft Windows XP Professional with Service Pack 2

We’ll be comparing the GeForce 6200’s performance with a couple of Radeons and Intel’s Graphics Media Accelerator (GMA) 900. In lieu of a real Radeon X600 Pro, I used a Radeon X600 XT clocked at Pro speeds. I also had to underclock our Radeon X300 to get it running at the correct 325MHz core and effective 200MHz memory clock speeds. The reference card I received from ATI was running a 400MHz core and 290MHz memory clock—much faster than cards you can buy on the market.

I should also note that the X300 card has 256MB of memory. This is common practice for low-end cards as manufacturers try to dazzle less savvy buyers with higher numbers. However, we’ve found that low-end cards just don’t have the horsepower to take advantage of 256MB of memory, so the advantage is dubious at best.

We used the following versions of our test applications:

  • DOOM 3 with trhaze and trdelta1 demos
  • Unreal Tournament 2004 v3270 with trdemo1
  • Counter-Strike Source with trdemo1
  • Far Cry 1.2 with tr3-pier and tr1-volcano demos
  • FutureMark 3DMark05 v110
  • Xpand Rally single player demo with trtest
  • FRAPS 2.2.5

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests and drivers were left at their default image quality settings. Both ATI and NVIDIA’s default image quality driver settings use adaptive anisotropic filtering algorithms.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Pixel filling power
We’ll kick things off with a look at theoretical peak fill rates and memory bandwidth. Theoretical peaks don’t necessarily determine in-game performance, but they’re a good place to start sizing up the GeForce 6200’s capabilities. I’ve sorted the list below, which includes an array of low-end and mid-range PCI Express graphics options, according to multitextured fill rate.

  Core clock (MHz) Pixel pipelines  Peak fill rate (Mpixels/s) Texture units per pixel pipeline Peak fill rate (Mtexels/s) Memory clock (MHz) Memory bus width (bits) Peak memory bandwidth (GB/s)
GeForce 6200 300 4 1200 1 1200 500 128 8.0
Radeon X300 SE 325 4 1300 1 1300 400 64 3.2
Radeon X300 325 4 1300 1 1300 400 128 6.4
GMA 900 333 4 1333 1 1333 400 128 6.4
Radeon X600 Pro 400 4 1600 1 1600 600 128 9.6
Radeon X600 XT 500 4 2000 1 2000 740 128 11.8
GeForce 6600 300 8* 2400 1 2400 TBD 128 TBD
Radeon X700 400 8 3200 1 3200 600 128 9.6
Radeon X700 Pro 420 8 3360 1 3360 864 128 13.8
Radeon X700 XT 475 8 3800 1 3800 1050 128 16.8
GeForce 6600 GT 500 8* 2000 1 4000 1000 128 16.0

In terms of fill rate, the GeForce 6200 brings up the rear. With four pixel pipes and a 300MHz core clock, it can’t even match the peak theoretical fill rates of the Radeon X300 series. The low core clock speed means that the card’s shader units are going to be running slower than the competition, too.

In the memory bandwidth department, the GeForce 6200 looks a little more competitive. The card’s 128-bit memory bus and effective 500MHz memory clock yield 8GB/sec of bandwidth—better than the X300s but shy of the Radeon X600 Pro.
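Those theoretical peaks come from simple arithmetic: pixel fill rate is the core clock times the number of pixel pipelines, texel rate multiplies that by the texture units per pipe (one for every card here), and memory bandwidth is the effective memory clock times the bus width. A minimal sketch (Python) reproducing a few rows of the table:

    # Theoretical peaks as used in the table above (all cards here have 1 texture unit per pipe):
    #   fill rate (Mpixels/s) = core clock (MHz) x pixel pipelines
    #   bandwidth (GB/s)      = effective memory clock (MHz) x bus width (bits) / 8 / 1000
    cards = {
        # name: (core MHz, pixel pipes, effective memory MHz, bus width in bits)
        "GeForce 6200":    (300, 4, 500, 128),
        "Radeon X300":     (325, 4, 400, 128),
        "Radeon X600 Pro": (400, 4, 600, 128),
    }

    for name, (core, pipes, mem, bus) in cards.items():
        fill = core * pipes
        bandwidth = mem * bus / 8 / 1000
        print(f"{name:16s} {fill:4d} Mpixels/s  {bandwidth:4.1f} GB/s")
    # GeForce 6200     1200 Mpixels/s   8.0 GB/s
    # Radeon X300      1300 Mpixels/s   6.4 GB/s
    # Radeon X600 Pro  1600 Mpixels/s   9.6 GB/s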

To see how these theoretical peaks pan out in the real world, let’s have a look at 3DMark05’s synthetic fill rate tests. Note that the drivers we’re using for the 6200, X600 Pro and X300, and even the GMA 900 aren’t approved by FutureMark for use with 3DMark05.

The GeForce 6200’s single texture fill rate is just a hair behind the X600 Pro, but when we start multitexturing, the 6200 is relegated to the back of the pack. Given that the 6200 has the slowest clock speed of the lot, its relatively modest performance isn’t surprising. What is surprising, however, is how close the Intel GMA 900 integrated graphics core gets to its theoretical peak fill rates.

Shader performance
While we’re looking at synthetic tests, let’s have a peek at how the GeForce 6200 fares in 3DMark05’s shader tests. I ran all the cards using 3DMark05’s Shader Model 2.0 code path. Since the 6200 also supports Shader Model 3.0, I also ran it using the SM 3.0 code path.

The 6200’s shader power is impressive even when running the Shader Model 2.0 codepath. Based on these scores, I wouldn’t expect much from the Intel GMA 900 in our game tests. It might have fill rate to spare, but shader power is sorely lacking.

 

DOOM 3
I used a couple of DOOM 3 gameplay demos to test the 6200’s performance in what’s arguably the most visually stunning game around. I ran the game with the High Quality detail setting, which enables 8X anisotropic filtering. High Quality might seem a little excessive for a low-end graphics card, but the GeForce 6200 handled it with aplomb.

The first demo takes place in the Delta Labs and is representative of the dark, shadow-filled environments you’ll encounter in the bulk of the game’s levels.

The 6200 wipes the floor with the competition. It’s not even close.

Next, we move onto our heat haze demo. This demo takes place in one of DOOM 3’s hell levels, which are visually quite different from the rest of the game. These levels make liberal use of some snazzy pixel shader-powered heat shimmer effects.

Again, the 6200 dominates. The race is a little closer this time, but not by much. The GMA 900 continues to stumble through DOOM 3 and has some serious problems displaying the heat haze effect properly.

 

Far Cry
Before DOOM 3 hit, Far Cry was arguably the best looking first-person shooter around. The game is loaded with shader effects and, perhaps more importantly, diverse indoor and outdoor environments. We’ll be looking at two of those environments today, both with the game’s high detail image quality setting.

First up we have the Pier level. Welcome to the jungle, folks.

The GeForce 6200 isn’t nearly as dominant in Far Cry as it was in DOOM 3. In fact, the Radeon X600 Pro beats it this time around. The 6200 is clearly faster than the Radeon X300, though, and undoubtedly superior to the bottom-dwelling GMA 900.

From lush jungles to underground interiors, the next Far Cry environment we’ll be looking at is the Volcano level. Like DOOM 3, this level employs heat shimmer effects in a number of places.

Again, the GeForce 6200 plays second fiddle to the Radeon X600 Pro. It’s pretty close, though, and the 6200 definitely has an edge over the Radeon X300.

As we saw in DOOM 3, the GMA 900 is way off the pace in Far Cry. The GMA 900 makes a visual mess of Far Cry’s heat effects shaders, too. The GMA 900’s DirectX 9 compatibility is an, er, far cry from DX9 competence.

 

Counter-Strike: Source
Counter-Strike: Source has officially been released and should hold anxious gamers over until Half-Life 2 hits. CS: Source also includes Valve’s shader-filled video stress test, which showcases the material and shader effects found in Half-Life 2. We used CS: Source’s high detail image quality settings and DirectX 9 code path for all but the GMA 900. The game refused to run on the GMA 900 with anything but the DX 8.1 code path, so scores on that front aren’t directly comparable.

In the CS: Source video stress test, the GeForce 6200 is wedged between the Radeon X300 and X600 Pro. The GMA 900’s performance looks comparably better here.

Next, we’re looking at in-game Counter-Strike performance with a demo of online gameplay on the cs_italy map.

The GeForce 6200 is stuck between the Radeons again. The card isn’t much slower than the Radeon X600 Pro, although the game seems to be CPU-bound at lower resolutions.

 

Unreal Tournament 2004
Although DOOM 3’s and Far Cry’s visuals are far more impressive than Unreal Tournament 2004’s, the Unreal engine has been licensed by scores of developers. A number of other titles already make use of the Unreal engine and we’re likely to see more in the coming months. Since Unreal Tournament 2004 is a little older, I was able to max out the in-game detail levels and still get playable frame rates in our custom-recorded demo of Onslaught gameplay.

Notice a pattern yet? The GeForce 6200 is faster than the Radeon X300 in Unreal Tournament 2004, but slower than the Radeon X600 Pro. Even in this older game engine, the GMA 900 is no match for the low-end graphics cards we’ve assembled. There’s definitely something wrong with the GMA 900’s performance at 800×600, too.

 

Xpand Rally
The Xpand Rally single-player demo is a new addition to our graphics benchmark suite. To test this game, I used FRAPS to capture frame rates during the replay of a custom-recorded demo. I used the game’s “balanced” image quality settings to achieve playable frame rates. The game uses a particularly nice color glow shader effect that doesn’t appear to translate well to the screenshot below. It doesn’t translate well to the GMA 900, either. The GMA 900 doesn’t appear to be applying most of the game’s shader effects, for whatever reason.

Ouch. The GeForce 6200 takes a bit of a beating in Xpand Rally and is just barely able to keep up with the Radeon X300. Let’s have a look at frame rates across the length of our 180-second replay.

The GeForce 6200’s frame rates are reasonably consistent across the length of the demo, at least when compared with its competition. Still, it’s disappointing that the 6200 can’t even best the Radeon X300, especially since Xpand Rally carries NVIDIA’s “The way it’s meant to be played” logo.
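If you want to reproduce this kind of per-second breakdown yourself, the sketch below (Python) shows one way to reduce a FRAPS-style frametimes log to an average frame rate and a per-second FPS series. The file name and column layout are assumptions for illustration, not a description of the exact logs used here.

    # Minimal sketch: reduce a FRAPS-style frametimes log (header row, then one
    # frame time in milliseconds per line) to average FPS and per-second FPS.
    # File name and layout are assumed for illustration only.
    import csv

    def fps_from_frametimes(path):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            next(reader)                                            # skip header row
            times_ms = [float(row[0]) for row in reader if row]     # frame times in ms
        avg_fps = len(times_ms) / (sum(times_ms) / 1000.0)
        per_second = {}                                             # second index -> frames rendered
        elapsed = 0.0
        for t in times_ms:
            per_second[int(elapsed)] = per_second.get(int(elapsed), 0) + 1
            elapsed += t / 1000.0
        return avg_fps, per_second

    avg, series = fps_from_frametimes("frametimes.csv")
    print(f"average: {avg:.1f} fps over {len(series)} seconds")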

 

Antialiasing
To test the GeForce 6200’s antialiasing performance, I used the same Unreal Tournament 2004 demo as in our previous tests. I kept the same in-game detail levels and used a display resolution of 1024×768. The GeForce 6200 was tested with 2X, 4X, and 8X antialiasing while the Radeons were tested with 2X, 4X, and 6X AA. The GMA 900 can’t do antialiasing, so I’ve only included its score without AA.

The Radeons’ antialiasing performance scales much better than the GeForce 6200’s, perhaps in part because the 6200 lacks color and Z-compression. The difference in performance is especially glaring with 4X antialiasing, where the GeForce 6200 is trounced by the Radeon X600 Pro. Even the Radeon X300 squeaks ahead by a few frames per second.

That’s how the GeForce 6200’s antialiasing performs and here’s how it looks. Click on the images below to open an uncompressed PNG in a new window.

No antialiasing, 2X, 4X, 8X – GeForce 6200

No antialiasing, 2X, 4X, 6X – Radeon X600 Pro

For 2X and 4X antialiasing, the Radeon X600 Pro’s gamma-corrected antialiasing looks better than the GeForce 6200’s output. Comparing the GeForce 6200 at 8X to the X600 at 6X is a little more complicated because NVIDIA’s 8X antialiasing algorithm combines both multi and supersampling and affects more than just jagged edges.

 

Anisotropic filtering
For our anisotropic texture filtering tests, I used the same Unreal Tournament 2004 demo once more. Again, I left in-game detail levels at their highest setting and used a display resolution of 1024×768.

There isn’t much difference in aniso scaling between the GeForce 6200, Radeon X600 Pro, and Radeon X300. The 6200 takes a slightly smaller performance hit between 4X and 16X, but the gap is nothing like as dramatic as what we saw in our antialiasing results.

Moving from performance to quality, here’s how the GeForce 6200’s anisotropic filtering looks up to 8X. Click on the images below to open an uncompressed PNG in a new window.

No aniso, 2X, 4X, 8X – GeForce 6200

No aniso, 2X, 4X, 8X – Radeon X600 Pro

Anisotropic filtering levels are comparable between the GeForce 6200 and Radeon X600 Pro, at least to my eye.

 

3DMark05 image quality – Game test 1
When NVIDIA extended its GeForce FX line down to the low end with the GeForce FX 5200, they resorted to all sorts of partial precision tricks to improve performance. Unfortunately, dropping precision degraded image quality. At launch, the FX 5200’s DirectX 9 image quality was noticeably inferior to that of high-end GeForce FX cards. To make sure that NVIDIA isn’t doing the same thing again with the GeForce 6200, I compared its output to a GeForce 6800 GT in 3DMark05’s game tests using the Shader Model 3.0 path. The X600 Pro can only use 3DMark05’s Shader Model 2.0 path, which will produce slightly different images by default, so I haven’t included it here.

Click on the images below to open an uncompressed PNG in a new window.

Game test 1 – GeForce 6200

Game test 1 – GeForce 6800 GT

Everything looks good in game test 1. At least to my eye, the 6200’s output doesn’t look any better or worse than the 6800 GT’s.

 

3DMark05 image quality – Game test 2

Game test 2 – GeForce 6200

Game test 2 – GeForce 6800 GT

The same goes for game test 2…

 

3DMark05 image quality – Game test 3

Game test 3 – GeForce 6200

Game test 3 – GeForce 6800 GT

Game test 3 also shows no apparent differences between the cards. If NVIDIA’s cutting precision, it’s not having a detrimental impact on image quality.

 

Power consumption
I broke out our trusty watt meter to measure overall system power consumption, sans monitor, at the outlet. Power consumption was measured at idle and under a 3DMark05 Return to Proxycon game test load.

System power consumption with the GeForce 6200 closely mirrors the Radeon X300. It’s particularly interesting to note that system power consumption with the GMA 900 integrated graphics isn’t much lower than with our discrete graphics cards.

 

Conclusions
When NVIDIA launched the GeForce 6800, they talked up the new architecture as a scalable design that would power a top-to-bottom line of graphics cards. With the GeForce 6200, that top-to-bottom line is complete, at least as far as add-in desktop graphics cards are concerned. The GeForce 6200 brings the impressive rendering capabilities of the GeForce 6 series to the budget space with fewer compromises than one might expect. The GeForce 6200’s strong performance in DOOM 3 shows that this budget card isn’t too short on pixel processing power.

Priced between $129 and $149, the GeForce 6200 will do battle with ATI’s Radeon X300 and Radeon X600 Pro. Against the X300, the GeForce 6200 is clearly superior. However, with the exception of DOOM 3, the X600 Pro’s performance is tough to match. The GeForce 6200 performs exceptionally well given its relatively low clock speeds, but against a card with a 100MHz core and memory clock advantage, there’s only so much it can do.

In some circles, the GeForce 6200 will also compete with Intel’s Graphics Media Accelerator 900. Those who purchase PCs from the big PC makers will likely have the option of going with integrated graphics or trading up to something like the GeForce 6200. Based on the performance of the GMA 900, trading up looks like the only viable option for gaming, at least with newer titles. The GMA 900’s lack of shader power has a devastating impact on performance, and it’s rare that pixel shader effects are even displayed correctly. To make matters worse, the GMA isn’t detected as a DirectX 9 graphics option by some games.

As I wrap things up, I can’t help but be struck by the GeForce 6200 graphics chip’s relatively large die size. It appears to be an NV43 with some of its pixel shaders disabled, and it’s a big chip for what should be a very high volume part. I wouldn’t be surprised to see a GeForce 6100 or 6300 emerge at some point down the road with a smaller die size and perhaps four pixel pipes bound to two ROPs using that fancy fragment crossbar.

Whatever happens in the future, right now the GeForce 6200 is a pretty compelling graphics card for budget-minded gamers. It’s clearly a better option than the Radeon X300, but add-in board manufacturers are going to have to break out some Turbo Golden Sample Special Editions with higher clock speeds to catch the Radeon X600 Pro.

GeForce 6200 AGP — Technical City


NVIDIA
GeForce 6200 AGP


  • Interface AGP 8x
  • Core clock speed 230 MHz
  • Max video memory 128 MB
  • Memory type DDR
  • Memory clock speed 132 MHz
  • Maximum resolution

Summary

NVIDIA started GeForce 6200 AGP sales on 14 December 2003. This is a Celsius-architecture desktop card built on a 150 nm manufacturing process and aimed primarily at gamers. It carries 128 MB of DDR memory clocked at 0.13 GHz, which together with the 64-bit memory interface yields 1.056 GB/s of bandwidth.
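For reference, the quoted bandwidth is simply the listed memory clock multiplied by the bus width: 132 MHz x 64 bit / 8 = 1.056 GB/s. Note that this treats 132 MHz as the transfer rate without applying any DDR doubling; whether that matches the card's real effective memory speed is an assumption of the database figures, not something verified here.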

Compatibility-wise, this is a single-slot card attached via the AGP 8x interface. Its manufacturer default version has a length of 168 mm.

We have no data on GeForce 6200 AGP benchmark results.

General info


Of GeForce 6200 AGP’s architecture, market segment and release date.

Place in performance rating not rated
Architecture Celsius (1999−2005)
GPU code name NV18 C1
Market segment Desktop
Release date 14 December 2003
Current price $162

Technical specs


GeForce 6200 AGP’s general performance parameters such as number of shaders, GPU base clock, manufacturing process, texturing and calculation speed. These parameters indirectly speak of GeForce 6200 AGP’s performance, but for precise assessment you have to consider its benchmark and gaming test results.

Core clock speed 230 MHz
Number of transistors 29 million
Manufacturing process technology 150 nm
Texture fill rate 0.92 GTexel/s

Compatibility, dimensions and requirements


Information on GeForce 6200 AGP’s compatibility with other computer components. Useful when choosing a future computer configuration or upgrading an existing one. For desktop video cards it’s interface and bus (motherboard compatibility), additional power connectors (power supply compatibility).

Interface AGP 8x
Length 168 mm
Width 1-slot
Supplementary power connectors None

Memory


Parameters of memory installed on GeForce 6200 AGP: its type, size, bus, clock and resulting bandwidth. Note that GPUs integrated into processors don’t have dedicated memory and use a shared part of system RAM.

Memory type DDR
Maximum RAM amount 128 MB
Memory bus width 64 Bit
Memory clock speed 132 MHz
Memory bandwidth 1.056 GB/s

Video outputs and ports


Types and number of video connectors present on GeForce 6200 AGP. As a rule, this section is relevant only for desktop reference video cards, since for notebook ones the availability of certain video outputs depends on the laptop model.

Display Connectors 1x VGA, 1x S-Video

API support


APIs supported by GeForce 6200 AGP, sometimes including their particular versions.

DirectX 8.0
OpenGL 1.3
OpenCL N/A
Vulkan N/A

Benchmark performance


Non-gaming benchmark performance of GeForce 6200 AGP. Note that overall benchmark performance is measured in points in 0-100 range.


Similar GPUs

Here is our recommendation of several graphics cards that are more or less close in performance to the one reviewed.

Recommended processors

These processors are most commonly used with GeForce 6200 AGP according to our statistics.


  • Pentium 4 3.0 (13%)
  • Core 2 Duo E4500 (8.7%)
  • Athlon XP 3200+ (4.3%)
  • Pentium 4 630 (4.3%)
  • Athlon XP 1700+ (4.3%)
  • Pentium III 1133 (4.3%)
  • Pentium E5400 (4.3%)
  • Core 2 Duo E7300 (4.3%)
  • Athlon II X2 250 (4.3%)
  • Pentium D 945 (4.3%)


XFX nVIDIA GeForce 6200 AGP Display Card PVT44AWANG B&H Photo

BH #XFGF6200256M • MFR #PVT44AWANG

Key Features

  • 256MB DDR2 RAM
  • DVI + VGA
  • S-Video

The nVIDIA GeForce 6200 AGP Display Card from XFX Force delivers powerful 3D graphics to your desktop computer. The card installs in an available AGP slot and features DVI, VGA, and TV outputs. You can even connect a TV via S-Video. The card is fully compatible with such standards as DirectX 9 and OpenGL 2.0, making it an excellent choice for your computer.


XFX PVT44AWANG Overview


The video card has full support for the DirectX 9 Shader Model 3.0. This brings stunning graphics capabilities to gaming, geometry, vertex, physics, and pixel shading operations.

The card installs in an available AGP 8x slot. This interface is specifically designed with graphics applications in mind.

The video card can drive dual monitors. It features a DVI-I port and a VGA port; each port is capable of driving a 1600 x 1200 display.

This card is fully compatible with Microsoft’s Windows Vista operating system. It provides more than enough horsepower to drive Vista’s Aero graphics.

In the Box
  • XFX nVIDIA GeForce 6200 AGP Display Card
  • Software CD-ROM with Drivers
  • Double Lifetime Protection Warranty

    XFX PVT44AWANG Specs

    Hardware
    Stream Processors nVIDIA GeForce 6200
    GPU Clock Core: 350MHz
    Memory Amount 256MB
    Memory Clock 533MHz
    Memory Type DDR2
    Memory Interface 128-bit
    Bus Type AGP
    Bus Speed 8X
    Rendering Pipelines 1
    Geometry Engines None
    Geometry Rate None
    Pixel Fill Rate 1.4Gpixels/sec
    RAMDAC 2x 400MHz
    Multimedia Support
    FM Tuner No
    TV Tuner No
    DTV Tuner No
    HDTV Capable No
    Hardware MPEG None
    Display Support
    Computer Analog 1x VGA
    Computer Digital 1x DVI-I
    Video 1x S-Video
    Multiple Display Configuration Dual Display
    Display Resolutions
    Analog 1600 x 1200
    Digital 1600 x 1200
    Max Resolution 1600 x 1200
    I/O Connections
    Analog (PC) 1x VGA
    Digital (PC) 1x DVI-I
    Video 1x S-Video
    System Requirements
    Software Operating System: Windows XP, Vista
    Hardware AGP slot
    300W power supply


    Review: NVIDIA’s GeForce 6200 Launch — Graphics




    At the end of my recent preview of NVIDIA's GeForce 6600 GT, powered by their NV43 GPU, I remarked, "Now there's only one piece of the pie left", in reference to NVIDIA's claim that they'd offer top-to-bottom GeForce 6-series models before the end of 2004, based on the same technology that debuted this April with NV40 and 6800 Ultra.

    With the mid-range 6600 and high-end 6800 range of hardware covered, the bit that’s left is 6200, which makes its initial mark today. With Euro-zone review samples held up at customs for the majority of last week and only just released to NVIDIA in Europe today, from what we can make out, it’s up to our American and Canadian counterparts to provide you with reviews. However, I can still go over the basics with you before our own sample arrives in due course.

    PCI Express once again



    The push to get everyone using PCI Express graphics cards on the desktop marches on at full speed. 6200 assists in that endeavour by being PCI Express only, including for the foreseeable future. NVIDIA have no concrete plans for an AGP-bridged version, especially since that requires a new GPU package and added cost, something that's not attractive at the 6200's price point.

    NV43’s basic building blocks



    Like GeForce 6600, 6200s use NV43 GPUs. In 6600 that’s eight fragment pipelines, three vertex shaders and four ROPs fed by a fragment crossbar. That means up to eight pixel fragments output per clock, into four output units, with three of NV40’s six vertex shaders. In 6200 everything’s the same as far as basic architecture is concerned, just four of the fragment pipelines are disabled, presumably after failing QA testing at the fab.

    4 pipes, 4 ROPs, three vertex units. Easy peasy.

    But there’s more



    But on top of those basic building blocks, 6200 has other differentiating features that mark it out as more than just a four-pipe 6600. 6200 has no support for NVIDIA's SLI, meaning you won't be able to combine a pair of 6200s in a dual PCI Express system for more graphics power. It also has none of NV43's colour or Z-buffer optimisations enabled. So there's no colour compression on pixel fragments, no Z-buffer compression, no fast Z-clear. You could argue that at this end of the market, where the GPU is less capable and clocked slower than its mid-range counterparts, the colour and Z optimisations under NVIDIA's Intellisample technology umbrella are needed more to aid performance than in the higher-performing parts.

    6200 also has no support for 64-bit texture formats, meaning that it can't do 64-bit texture blending, combining or rendering of any kind where a 64-bit precision surface is required, integer or otherwise. That has knock-on effects for certain kinds of rendering techniques and means 6200 is even less capable than a four-pipe 6600 would suggest.

    It’s not all bad



    NVIDIA UltraShadow II technology is complete and working in 6200, offering rendering speedups for graphics techniques that make heavy use of stencil shadows. You've also got full support for DirectX Shader Model 3.0, NVIDIA's newest 4X RGMS multi-sampling anti-aliasing, 16X angle-adaptive anisotropic filtering and the rest of what defines NV4x GPUs.

    Cost



    NVIDIA’s price points for 6200 boards sit at $129 for a 128MB version (all 6200 variants will have a 128-bit memory bus initially, with AIBs able to implement even lower cost 64-bit versions if they choose, at a later date) and $149 for a 256MB version. NVIDIA in the UK fully expect that to translate to

    To sum up



    Think 6600 and NV43. Think the same 0.11µ process production at TSMC. Think half the pixel power at the same clocks, without some optimisations before pixels are generated (Z opts) and after they’re written to their render buffers (colour opts), no SLI, but complete SM3.0. If that makes sense, you’re most of the way there.

    With NVIDIA’s basic reference clocks for 6200 at 300MHz core and 275MHz (550MHz DDR) memory, it’s an X300/X600 competitor, and I’d expect 6600 GT and ATI’s X700 XT to be at least twice as fast in most situations. That pitches 6200 as the step up from integrated graphics that most people look at when they’re considering low-cost systems. It’ll be interesting to see how that works out.

    Pretty pictures



    While we didn’t get actual boards for today, we did get pretty pictures. So here’s some geek porn to satisfy that low-end graphics lust. Kleenex at the ready, budget fans!




    It’s a single slot board using a small PCB and the cooler many of you might remember from some GeForce3 boards back in the day. Equipped with S-Video input and output, DVI-I and VGA, the pictured reference board doesn’t need any external power source and should run very cool and quiet, to steal a phrase from AMD.

    We’ll have our own review up as soon as possible. In the meantime, The Tech Report were kind enough to let me preview their article, full of performance numbers, an hour or so before you’ll get to read this. It’s great stuff as always from those guys, so check it out here.

    Their reference board has 128MB of memory clocked at 500MHz DDR, with the core at 300MHz, and Geoff pits it against GMA900 and a couple of PCI Express Radeons from the X300 and X600 camps. Performance sits just shy of X600 Pro in most cases. Be sure and read their review for full details.

    GeForce 6 Tech Specs|NVIDIA

    CineFX 3.0 Shading Architecture

    • Vertex Shaders
      • Support for Microsoft DirectX 9.0 Vertex Shader 3.0
      • Displacement mapping
      • Geometry instancing
      • Infinite length vertex programs
    • Pixel Shaders
      • Support for DirectX 9.0 Pixel Shader 3.0
      • Full pixel branching support
      • Support for Multiple Render Targets (MRTs)
      • Infinite length pixel programs
    • Next-Generation Texture Engine
      • Up to 16 textures per rendering pass
      • Support for 16-bit floating point format and 32-bit floating point format
      • Support for non-power of two textures
      • Support for sRGB texture format for gamma textures
      • DirectX and S3TC texture compression
      • Full 128-bit studio-quality floating point precision through the entire rendering pipeline with native hardware support for 32bpp, 64bpp, and 128bpp rendering modes

    64-Bit Texture Filtering and Blending

    • Full floating point support throughout entire pipeline
    • Floating point filtering improves the quality of images in motion
    • Floating point texturing drives new levels of clarity and image detail
    • Floating point frame buffer blending gives detail to special effects like motion blur and explosions

    Intellisample 3.0 Technology

    • Advanced 16x anisotropic filtering
    • Blistering-fast antialiasing and compression performance
    • New rotated-grid antialiasing removes jagged edges for incredible edge quality
    • Support for advanced lossless compression algorithms for color, texture, and z-data at even higher resolutions and frame rates
    • Fast z-clear
    • High-resolution compression technology (HCT) increases performance at higher resolutions through advances in compression technology

    UltraShadow II Technology

    • Designed to enhance the performance of shadow-intensive games, like id Software’s Doom 3

    TurboCache Technology

    • Shares the capacity and bandwidth of dedicated video memory and dynamically available system memory for optimal system performance.

    PureVideo Technology

    • Adaptable programmable video processor
    • MPEG video encode and decode
    • High-definition MPEG-2 hardware acceleration
    • High-quality video scaling and filtering
    • DVD and HDTV-ready MPEG-2 decoding up to 1920x1080i resolutions
    • Dual integrated 400 MHz RAMDACs for display resolutions up to and including 2048 × 1536 at 85Hz
    • Display gamma correction
    • Microsoft® Video Mixing Renderer (VMR) supports multiple video windows with full video quality and features in each window

    Advanced Display Functionality

    • Dual integrated 400MHz RAMDACs for display resolutions up to and including 2048×1536 at 85hz
    • Dual DVO ports for interfacing to external TMDS transmitters and external TV encoders
    • Full NVIDIA® nView™ multi-display technology capability

    Advanced Engineering

    • Designed for PCI Express x16
    • Support for AGP 8X including Fast Writes and sideband addressing
    • Designed for high-speed GDDR3 memory
    • Advanced thermal management and thermal monitoring

    NVIDIA® Digital Vibrance Control™ (DVC) 3.0

    • DVC color controls
    • DVC image sharpening controls

    Operating Systems

    • Windows XP
    • Windows ME
    • Windows 2000
    • Windows 9X
    • Macintosh OS, including OS X
    • Linux

    API Support

    • Complete DirectX support, including the latest version of Microsoft DirectX 9.0 Shader Model 3.0
    • Full OpenGL support, including OpenGL 2.0

    GeForce 6 Series GPUs Features Comparison:

    Feature                                 GeForce 6800         GeForce 6600         GeForce 6200         GeForce 6200 with TurboCache
    Microsoft® DirectX® 9.0                 SM 3.0               SM 3.0               SM 3.0               SM 3.0
    Graphics Bus Technology                 AGP 8X/PCI Express   AGP 8X/PCI Express   AGP 8X/PCI Express   PCI Express
    NVIDIA® Intellisample™ Technology       3.0                  3.0                  3.0 (1)              3.0 (1)
    NVIDIA® SLI™ Technology                 Yes (2)              Yes (2)              n/a                  n/a
    NVIDIA® PureVideo™ Technology           Yes (3)              Yes (3)              Yes (3)              Yes (3)
    NVIDIA® TurboCache™ Technology          n/a                  n/a                  n/a                  Yes (4)
    64-bit Texture Filtering and Blending   Yes                  Yes                  n/a                  n/a
    Effective Memory Interface              256-bit              128-bit              128-bit              128/64-bit
    Memory                                  GDDR3 (5) and DDR    GDDR3 (5) and DDR    DDR                  DDR
    Process                                 0.13 micron          0.11 micron          0.11 micron          0.11 micron
    RAMDACs                                 400 MHz              400 MHz              400 MHz              400 MHz

    1. GeForce 6200 models do not include compression technology.
    2. NVIDIA SLI: Available on SLI-certified versions of GeForce 6800 Ultra, 6800 GT, 6800 GS, 6800, 6800 XT, 6800 LE, 6600 GT, 6600, and 6600 LE PCI Express GPUs only.
    3. Features may vary by product.  Some features may require additional software.
    4. Available on select GeForce 6200 models only.
    5. GDDR3 — GeForce 6800 Ultra, 6800 GT, 6800 GS, and 6600 GT only.

    Turning NVIDIA GeForce 6200 into 6600

    The PCI-Express x16 interface is gradually becoming popular. There are already a huge number of PCI-E video cards from both graphics giants, ATI and NVIDIA. It is also worth noting that PCI-Express x16 cards of the GeForce 6200/6600/6800 and Radeon X800 series are, as a rule, cheaper than their AGP counterparts: being designed natively for the new interface, they do without PCI-E to AGP bridges and therefore use a simpler board design.

    In today's review, we look at a video card from NVIDIA's lower price range: the Gigabyte GeForce 6200. The 6200 is offered in two versions, one for the AGP interface and, of course, one for PCI-Express x16. When the GeForce 6200 was being prepared for its official announcement, it was already known that it would be based on the NV43 chip (with half of its pixel pipelines disabled), the same chip used in GeForce 6600 series cards. In theory, then, a card based on the GeForce 6200 should convert painlessly into a GeForce 6600 without any "surgical" intervention. Now that the 6200 has flooded retail, we have the opportunity to test that assumption in practice.


    Specifications NVIDIA NV43

    Manufacturer NVIDIA
    Core name NV43
    Manufacturing process, microns 0.11
    Core frequency, MHz 300
    Number of pixel pipelines 4/8
    Number of texture units per pipeline 1
    Number of textures per pixel per pass 16
    Hardware T&L +
    Memory type DDR
    Memory bus width, bit 128
    Memory frequency, MHz 250
    Memory bandwidth, GB/s 8
    Anisotropic filtering level 16x
    FSAA Rotated-grid multisampling, combinations of MSAA and SSAA
    SSAA coefficients 2x
    MSAA sample counts 2x-8x
    Vertex shader version 3.0
    Pixel shader version 3.0
    1st RAMDAC frequency, MHz 400
    2nd RAMDAC frequency, MHz 400
    Dual-output technology nView
    Number of vertex pipelines 3

    As you can see, the number of pixel pipelines depends on whether the NV43 chip is used in the GeForce 6200 or the GeForce 6600: the GeForce 6200 uses a 4x1 configuration and the 6600 an 8x1 configuration, while the number of vertex pipelines is always three. It goes without saying that enabling all eight pipelines gives the GF6200 a significant jump in performance and brings it level with the results shown by the GF6600.

    Now let's move on to our specific sample: the Gigabyte NVIDIA GeForce 6200.

    The card is built on Gigabyte's standard blue PCB and uses the PCI-Express x16 interface. Stock frequencies are 300/500MHz, and onboard is 128MB of Hynix DDR memory rated at 4ns.

    The stock cooling is a fairly nice copper cooler which, in terms of noise, resembles an Intel boxed cooler under load.


    To be honest, an ear spoiled by Zalman-grade silence finds it very hard to endure that much noise, so I decided to replace the stock cooling with Zalman's video card cooling system, the ZM80C-HP with the optional ZM-OP1 fan.

    The Zalman ZM80C-HP (with the ZM-OP1) is not a powerful overclocking tool; it is simply a set of two heatsinks, a heat pipe and a quiet fan that lets you hold on to the overclocks achievable with the efficient stock cooling while keeping the system silent overall. The price of this little marvel is around $30.

    But back to our topic. After lengthy testing, the card ultimately overclocked to a 550MHz core and 670MHz memory. Truth be told, it has been a long time since I have seen 4ns memory that, already running near its rated maximum (500MHz), could add another 170MHz. A brilliant result.

    Enabling all eight pixel pipelines

    To spare beginners the question of how, exactly, to enable these pipelines, I will briefly describe the process.

    First of all, you need the latest version of RivaTuner (v2.0 RC 15.4 at the time of writing). After starting the program, open the "Low-Level system tweaks" window by clicking the "Low-Level system setting" button in the Customize section.


    In the lower part of the window (Graphics processor configuration), enable the "Allow enabling hardware masked units" parameter, set "Custom" in the "Active pixel/vertex units configuration" item and click the Customize button.


    In the window that appears, check one of the first two lines (Pixel unit 0 or Pixel unit 1, depending on which four pipelines are disabled).


    After pressing the OK button and rebooting, we get a full-fledged GeForce 6600.


    EVEREST Home Edition also confirms that all eight pipelines are active.


    Testing

    After the miraculous transformation of the GeForce 6200 into a GeForce 6600 and overclocking to 550/670MHz, let's compare the results.
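    Before the benchmarks, here is a rough estimate of what the modification does to the theoretical peaks. This is a sketch only: it assumes the quoted 670MHz is the effective (DDR) memory rate, matching how the stock 500MHz is quoted, and it says nothing about the colour/Z compression differences between the 6200 and 6600.

        # Rough theoretical peaks before and after the modification, assuming the
        # quoted 670MHz is the effective (DDR) memory rate like the stock 500MHz.
        def peaks(core_mhz, pipes, mem_mhz_eff, bus_bits=128):
            fill = core_mhz * pipes                            # Mpixels/s (1 texture unit per pipe)
            bandwidth = mem_mhz_eff * bus_bits / 8 / 1000      # GB/s
            return fill, bandwidth

        print("stock 6200 (300MHz, 4 pipes):          %d Mpixels/s, %.1f GB/s" % peaks(300, 4, 500))
        print("unlocked + overclocked (550MHz, 8 pipes): %d Mpixels/s, %.1f GB/s" % peaks(550, 8, 670))
        # stock 6200 (300MHz, 4 pipes):          1200 Mpixels/s, 8.0 GB/s
        # unlocked + overclocked (550MHz, 8 pipes): 4400 Mpixels/s, 10.7 GB/s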

    Test stand:

    • CPU: Intel Celeron D @ 3.61GHz (190×19).
    • MoBo: ASUS P5GD1 (LGA775; i915P/ICH6R).
    • Memory: 2x256MB PC3200 Kingston KVR400X64C3A/256 (2.5/3/3/6).
    • HDD: Samsung 160GB S-ATA.
    • Video: Gigabyte (GV-NX62128D) NVIDIA GeForce 6200.
    • MS Windows XP Service Pack 2.
    • NVIDIA ForceWare 71.24.
    • Intel Inf Update v.6.3.0.1007.

    Synthetic tests:




    Real gaming applications:

    For comparison, the results were obtained at a resolution of 1280x1024x32 with graphics settings at maximum; anti-aliasing was not used, and 16x anisotropic filtering was enabled in Half-Life 2. I want to note that we deliberately used such modes in order to test the card's prospects in future games.




    Conclusions

    Summing up, the NVIDIA GeForce 6200 did not repeat the history of its predecessor, the GeForce FX 5200, and proved to be much better. Despite its low cost (less than $100 in Moscow retail), the card's performance is up to the mark, and given its good overclockability and, moreover, the ability to enable all eight pipelines, it will let you play modern games comfortably, even in heavy modes.

    Attention! According to Unwinder, the author of RivaTuner, unlocking the pipelines is physically impossible on NV43 revision A4 (the card does not react at all to attempts to change the pipeline configuration). The card unlocked in this article used an A2 revision chip.


    We await your comments in the dedicated forum thread.