
AMD ATI Radeon HD 5770 Review

Verdict

Key Specifications

  • Review Price: £125.00

Last month AMD unveiled the world's first DirectX 11 graphics cards in the form of the HD 5870 and HD 5850, and mighty fine cards they were too. However, at £340 for the former and £240 for the latter, they weren't exactly what you'd call priced for the masses. So today we're looking at the HD 5770, which at £120 should fall within many more people's budgets. Has performance been compromised too much to reach this price point, or will this be the pick of the current crop of cards? Read on to find out.


The HD 5770 is a relatively small card at just 220mm long. This means it should have no problem fitting in the vast majority of PC cases, though the double-width cooler will limit those with particularly small cases. A major plus point of the card's double width is that AMD has retained the quad-output configuration of its more expensive cards. As such, you get two dual-link DVI ports, a DisplayPort, and an HDMI output, but as with the higher-end cards you are limited to three outputs at once.


AMD’s new black and red livery is continued on this card along with the superfluous extended faux exhausts on the trailing edge of the cooler shroud. We do like the look but still fail to see why these little extended bits are needed – that extra bit of length they add could just make the difference between successfully fitting this card in a case and not.


Just one six-pin PCI-Express power connection is required to power this card and AMD’s quoted TDP is 108W, while idle power is just 18W. We will of course be putting these figures to the test later on. Two Crossfire connectors protrude from the top edge of the card where we’ve come to expect them. As always, we’re inclined to recommend you go for a more expensive single-card option than for Crossfire or SLI, due to compatibility issues, but at least the option is there.


The HD 5770 is based on a chip (codenamed Juniper) derived from the one (codenamed Cypress) that powers the HD 58x0 series of cards. As such it uses the same basic architecture and technology, just with less of everything. In particular, where Cypress has 1,600 stream processors, 80 texture units, 32 ROPs, and a 256-bit memory interface, Juniper has 800 stream processors, 40 texture units, 16 ROPs, and a 128-bit memory interface. Clock speeds, however, haven't changed, so the HD 5770 really is almost exactly half of an HD 5870 – both cards run at 850MHz and use 1GB of GDDR5 memory clocked at 1.2GHz.
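As a rough sanity check on that "half of Cypress" claim, the theoretical throughput figures follow directly from the unit counts and clock speed quoted above. A minimal sketch (the numbers come from the paragraph above; the helper functions and names are our own, not AMD's):

```python
# Peak per-clock throughput scales linearly with unit count, so at the
# same 850MHz clock Juniper should deliver exactly half of Cypress.

def pixel_fill_rate(clock_mhz, rops):
    """Peak pixel fill rate in gigapixels/s."""
    return clock_mhz * rops / 1000

def texel_rate(clock_mhz, texture_units):
    """Peak texture sampling rate in gigatexels/s."""
    return clock_mhz * texture_units / 1000

cypress = {"clock": 850, "rops": 32, "tex": 80, "stream_procs": 1600}
juniper = {"clock": 850, "rops": 16, "tex": 40, "stream_procs": 800}

# Juniper: 850 * 16 / 1000 = 13.6 Gpix/s, half of Cypress's 27.2 Gpix/s.
half = pixel_fill_rate(**{"clock_mhz": juniper["clock"], "rops": juniper["rops"]})
full = pixel_fill_rate(cypress["clock"], cypress["rops"])
```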


A second card based on Juniper is also available under the HD 5750 name. We're not looking at that card today but we'll get one in soon. Essentially it has had one SIMD engine removed, resulting in a total of 720 stream processors and 36 texture units. Its clock speeds have also been reduced, so the core runs at 700MHz while the memory ticks along at 1.15GHz.


So that's the card, but for the real meat and potatoes of this review we need to start gathering some numbers, so let's get testing.

We tested this card on our usual test bench, the details of which are below. We ran five games to test performance, then checked total system power consumption and noise levels both when idling and when gaming. As always, we ran our tests multiple times to ensure consistent figures, then recorded the average.
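The averaging step is trivial but worth being explicit about. A sketch of how repeated runs might be reduced to one reported figure (illustrative only; the function, tolerance, and run data are our own, not part of the actual test procedure):

```python
def average_fps(runs, tolerance=0.05):
    """Average repeated benchmark runs, flagging inconsistent sets.

    Returns the mean fps rounded to 2 decimals; raises if any run
    deviates from the mean by more than `tolerance` (fractional),
    which in practice would prompt a re-test.
    """
    mean = sum(runs) / len(runs)
    for fps in runs:
        if abs(fps - mean) / mean > tolerance:
            raise ValueError("inconsistent runs, re-test needed")
    return round(mean, 2)

# Three hypothetical FRAPS passes over the same section of a game:
print(average_fps([28.4, 28.9, 28.7]))  # 28.67
```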


With the exception of Counter-Strike: Source (CSS) and Crysis, the gaming results are recorded manually using FRAPS while we repeatedly play the same section of the game. For CSS and Crysis we use timedemos and framerate recording is automated. For Crysis, all in-game detail settings are set to High, while all the other games are run at their highest possible graphical settings.


Meanwhile, power draw testing is done using an inline mains power meter. The idle figure is taken when the system is simply sitting at the Windows desktop with no obvious background activity. The load figure is taken while running our Crysis timedemo at 1,920 x 1,200 with 2xAA. Noise levels are tested using the same idle and load scenarios: we isolate the test bed in a soundproofed box and record noise levels from about 30cm away.


We find that below about 40dB is indicative of a card that, when housed in a conventional case, will go unnoticed in a relatively quiet home office environment. For use in a 'silent' PC (for the bedroom or cinema room, say) without resorting to soundproofing your case, you'll want a card that stays below 35dB at idle. As for under load, above about 50dB is a constantly noticeable level that would disturb you if using speakers or open-back headphones while gaming.
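Those thresholds amount to a simple classification of any card's two readings. Encoded as a sketch (the category labels are our own shorthand for the descriptions above):

```python
def classify_noise(idle_db, load_db):
    """Map idle/load dB(A) readings onto the rough categories
    described above (thresholds as stated in the text)."""
    if idle_db < 35:
        idle = "silent-PC suitable"
    elif idle_db < 40:
        idle = "unnoticed in a quiet office"
    else:
        idle = "audible at idle"
    load = "disturbing" if load_db > 50 else "acceptable under load"
    return idle, load
```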


'''Test System'''

* Intel Core i7 965 Extreme Edition

* Asus P6T motherboard

* 3 x 1GB Qimonda IMSh2GU03A1F1C-10F PC3-8500 DDR3 RAM

* 150GB Western Digital Raptor

* Microsoft Windows Vista Home Premium 64-bit


'''Cards Tested'''

* AMD ATI HD 5770

* AMD ATI HD 5850

* AMD ATI HD 4890

* AMD ATI HD 4870

* AMD ATI HD 4850

* AMD ATI HD 4770

* nVidia GeForce GTX 275

* nVidia GeForce GTX 260

* nVidia GeForce 9800 GT


'''Drivers'''

* AMD ATI HD 5000 series driver

* Other ATI cards – Catalyst 9.9

* nVidia cards – 190.02


'''Games Tested'''

* Far Cry 2

* Crysis

* Race Driver: GRID

* Call of Duty 4

* Counter-Strike: Source

Starting, as always, with Crysis, the most obvious result is that the HD 5770 pretty much holds parity with its key competitors, the HD 4870, GTX 275, and GTX 260. Considering these were last year's high-end cards, this is certainly a good start, though of course prices for those cards have dropped significantly since then. Sadly this still doesn't equate to amazing performance, with the HD 5770 achieving just 28.68fps at 1,920 x 1,200 with 2xAA. Thankfully Crysis is not representative of the vast majority of games and can be considered a worst-case scenario.


Far Cry 2 is a game we've only recently brought into our testing procedure, so we don't have results for some of the older cards, but we can still draw some conclusions. The main thing to note is that the HD 5770 again tallies almost exactly with the performance of the GTX 275. As such, it's fair to conclude that the GTX 260 and HD 4870 would have fallen into similar positions and that the HD 5770 competes well.


With the latest Call Of Duty game now available, we’ll probably transition our testing to that game in the near future, but for now we’ve stuck to the old game. The HD 5770 performs very well in this game giving completely playable framerates even at 1,920 x 1,200 with 4xAA. Compared to other cards, it also holds up well, again almost exactly equalling the three price-competitive cards previously mentioned.


Our penultimate game, Race Driver: GRID, sees a change in gaming style from running and gunning to driving and accompanying this is a change in fortunes for the HD 5770. Here, despite still competing with the GTX 260, it consistently falls behind the GTX 275 and HD 4870. Nevertheless, it still delivers a very playable experience at all the settings tested.


Finally we have Counter-Strike: Source and, as we've come to expect, even this relatively modest card absolutely breezes through this title – at 1,920 x 1,200 with 4xAA, it still manages an average of 149.27fps. As such, we've only reported the highest settings we tested at.


Looking next at power consumption, the case for the HD 5770 becomes even clearer. It has the lowest idle power consumption on test and draws significantly less power under load than all the cards with which it competes on price and gaming performance.


As for the noise these cards produce, this again is a test we've only introduced recently, so we're missing some cards for comparison. However, we can draw some conclusions. In particular, this card falls within our 'you wouldn't really notice it in normal use' level at idle but isn't quiet enough for a 'silent' PC. Under load it is just on the cusp of being annoyingly loud, but kept in a well-ventilated case it shouldn't be too disturbing.


All told then, the HD 5770 has an impressive level of performance for its price, draws relatively little power, and is perfectly acceptable when it comes to noise levels. It also compares well to its price/performance competitors. The GTX 275 is simply overpriced and not worth considering while the HD 4890 is a little bit faster but does cost a little bit more. As for the HD 4870 and GTX 260, they offer almost identical performance to the HD 5770 and cost about the same but they consume more power and have fewer features, so here the HD 5770 is the clear graphics card of choice.


'''Verdict'''


As a mid-range card, the AMD ATI Radeon HD 5770 was never going to set new records but it doesn’t skimp on features and delivers as much performance as we’d expect at its price point.



Score in detail

  • Value 9

  • Features 10

  • Performance 8

ATI Radeon 32MB SDR

by Anand Lal Shimpi on October 13, 2000 4:46 AM EST


For years ATI has been considered a leader in multimedia and video features, courtesy of products like its TV tuner and All-in-Wonder lines; however, it has rarely been considered one of the industry's high-end performance leaders.

Like Matrox and S3, ATI was caught completely off guard by the 3D revolution of a few years ago, which left the market dominated by companies like the now-defunct Rendition as well as a much more familiar name, 3dfx. ATI managed to stay alive courtesy of its extremely strong hold on many OEM markets, places 3dfx and NVIDIA had a hard time breaking into at first.

The Rage 128 was originally intended to mark ATI's return to the market as a serious contender in terms of performance, not only in terms of OEM acceptance. Unfortunately, a horrendously delayed Rage 128 chip gave ATI little in the way of credibility, and when it eventually did make it out, the solution wasn't powerful enough to put ATI's name anywhere on the performance map.

What ATI needed was a new core: as we noticed with the Rage 128 Pro, increasing the clock speed of the Rage 128 only bought ATI performance that NVIDIA's TNT2 had been delivering for months. The introduction of the Rage Fury MAXX shortly thereafter gave ATI a strong taste of what competing in the high-end gaming market was like, and it only foreshadowed what was to come once ATI finally got a new core to work with.

Enter the Radeon. Previewed in April of this year and released and shipped just two months later, the Radeon turned quite a few heads. For the first time, ATI was not only playing catch-up to NVIDIA but even outperforming the king of the market.

It didn't take long for the market to respond. Since the Radeon's introduction the pressure has been on NVIDIA, and now, with the possibility of no new NVIDIA product until next year, it is ATI's chance to shine. ATI has clearly seen this opportunity and, just as AMD did following the introduction of the Athlon, is attempting to take as much advantage of the situation as possible.

Recent reports have indicated that ATI plans to dramatically phase out its Rage 128 line and replace it with a more cost-effective solution based on the Radeon core. Just as NVIDIA did with its GeForce2 MX, ATI is releasing a low-cost version of the Radeon.

While continuing to promote the still relatively new Radeon brand name, ATI is introducing the next incarnation of the line: the Radeon SDR.


Review and testing Radeon HD 3850 256Mb / Overclockers.ua

No less anticipated than the GeForce 8800GT were the new video cards from AMD. It is no secret that things have not been going well for the company lately. In the processor segment Intel retains its lead, and in the video card market things are even more dire: Nvidia is mounting an offensive in every market segment, and the only option left to AMD is an aggressive pricing policy. The release of the new Radeon HD 3870 and HD 3850 came a little late (again, Nvidia was first), and prices for the new cards were adjusted repeatedly, always downwards, because preliminary tests and statements by company representatives suggested the new cards would be weaker than Nvidia's newcomer.

But now the new video adapters have reached our market, and we can compare them with their opponents in real gaming applications. And of course we will test the new card's overclocking capabilities. The subject of this review is the younger model, the Radeon HD 3850.

Architectural features

First, it is worth mentioning some features of the architecture of the card in question. The new products are based on the RV670 graphics chip. It is no accident that the new GPU keeps the series number of the old R600 family, since it is almost a complete copy of the R600.

The Radeon HD 2900 XT has already been reviewed on our site, and that article covered the architectural features of the R600 and its functional blocks; all of it applies to this updated version, so I will repeat only the main characteristics. The GPU contains 64 stream processors of 5 ALUs each, which is why the specifications quote a total of 320. There are four texture units, each with 4 sampling units, so the specifications quote 16 texture units.

The main differences in the new GPU are the move to a 256-bit memory bus and support for DirectX 10.1. The first simplification is no great loss: many analysts have already said that the 512-bit bus of the Radeon HD 2900 was never fully exploited, and AMD representatives say the new, optimised 256-bit controller is as fast as the old one. That rather confirms, if indirectly, that the bandwidth of a 256-bit bus is enough for a chip of this power.

The new RV670 is built on a fine 55 nm process, which significantly reduces its power consumption and heat output.

Support for the new Microsoft API should be the main trump card, but unfortunately we will only see it in the first half of 2008: the new DirectX is due to arrive as part of the first SP1 update for Windows Vista. The important changes in DirectX 10.1 are Shader Model 4.1 and cube map arrays. The latter promises a full implementation of global illumination, calculating reflected and refracted lighting, blurry reflections, soft shadows, and colour bleeding that depends on the lighting. In a game this looks like correct shadows from all objects, dependent on the position and distance of the light source, with reflected light and other details taken into account; for example, an object surrounded on all sides by other objects will be darker. These seemingly small additions make it possible to render more natural, lifelike lighting. Technically this is achieved by dividing the scene into cubes and computing the lighting on each of these elements.

The updated version of DirectX also brings small improvements to blending and anti-aliasing. It is now possible to apply custom anti-aliasing filters from pixel shaders, which in particular should improve anti-aliasing under HDR rendering.

RV670 gains support for the PCI Express 2.0 bus, which doubles the per-lane bandwidth of PCI Express 1.1 to 5 GT/s. Support for PCI Express 1.0 and 1.1 is retained, but as the experience of the GeForce 8800GT shows, compatibility problems with some motherboards are still possible.

ATI PowerPlay technology, previously found only in mobile video chips, is now supported. It adjusts the operating frequency, GPU voltage, and cooler speed according to the load on the graphics core. In theory, at moderate load the frequency should settle at an intermediate value, but during testing only two working GPU frequencies were observed.
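In behavioural terms, then, PowerPlay on this card acted as a load-driven two-state clock switch rather than the continuously scaled version the theory promises. A toy model of what we observed (the threshold and function name are our own guesses; the two frequencies are the ones we measured):

```python
# Observed PowerPlay behaviour on this sample: only two GPU clock
# states, despite the theoretical promise of intermediate ones.
IDLE_CLOCK_MHZ = 297   # measured 2D clock
LOAD_CLOCK_MHZ = 668   # measured 3D clock

def powerplay_clock(gpu_load_percent, threshold=10):
    """Pick a clock state from GPU load; the threshold is a guess,
    as the real switching criterion is internal to the driver."""
    return LOAD_CLOCK_MHZ if gpu_load_percent > threshold else IDLE_CLOCK_MHZ
```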

Multi-GPU operation has become popular not only among CPU makers but also among the graphics giants. The development of SLI and CrossFire has led to support for more than two video cards in one system: RV670-based boards can form CrossFireX arrays of four cards, or of two dual-chip adapters. The card carries two connectors for attaching further boards, and four boards are linked in a staggered pattern. Support for this feature arrived in the Catalyst 7.10 drivers.

Despite the respectable age of multi-card technologies, they have never caught on with the general public, remaining the preserve of ardent enthusiasts and benchers. The gains from SLI and CrossFire are mainly realised in synthetic applications (3DMark); in real games the increase from a second video card rarely exceeds 50%, and sometimes amounts to only a couple of percent.

Radeon HD 3850 256MB specifications:

  • Core codename: RV670
  • Process technology: 55 nm
  • Core clock: 670 MHz
  • Universal (stream) processors: 320
  • Texture units: 16
  • Blending units: 16
  • Memory bus: 256 bits
  • GDDR3 memory frequency: 1660 MHz (2 x 830 MHz)
  • Memory capacity: 256 MB
  • Memory bandwidth: 53 GB/s
  • Theoretical maximum fill rate: 10.7 gigapixels per second
  • Theoretical texture sampling rate: up to 10.7 gigatexels per second
  • Power consumption: up to 95 W
  • CrossFireX support
  • Two DVI-I Dual Link connectors
  • TV-Out, HDTV-Out, HDCP support
  • PCI Express 2.0 bus
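The headline bandwidth and fill-rate figures in that list follow directly from the clocks and unit counts it quotes. A quick check (values taken from the specifications above; variable names are ours):

```python
# Derive the quoted theoretical figures from the raw specifications.
core_mhz = 670
effective_mem_mhz = 1660      # GDDR3, 2 x 830 MHz
bus_bits = 256
rops = 16
tex_units = 16

bandwidth_gb_s = effective_mem_mhz * (bus_bits // 8) / 1000   # ~53.1 GB/s
fill_gpix_s = core_mhz * rops / 1000                          # ~10.7 Gpix/s
texel_gtex_s = core_mhz * tex_units / 1000                    # ~10.7 Gtex/s
```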

Sapphire Radeon HD3850 256Mb

The subject of this test is a SAPPHIRE video card. The well-known company was among the first to introduce the new Radeon products, and its cards were the first to reach our market.

Admittedly, the first batches arrived as bare OEM kits: instead of a big box, a modest antistatic bag. The bundle is not completely empty, though; it also contains:

  • HDTV Adapter
  • S-Video to RCA adapter
  • Two molex to 6-pin graphics card power adapters
  • Catalyst 7.10 driver disc

Compared to its Radeon HD 2900 series predecessors, the card has become more compact.

The 256-bit bus simplifies the board layout, and the component count is much lower than on the Radeon HD 2900. In size and design the board is more reminiscent of the GDDR4 version of the Radeon HD 2600XT.

The video card is equipped with a single-slot cooling system made in the form of a turbine.

The copper heatsink covers almost the entire board. It also cools the memory chips and the power-circuit components, with special thermal pads serving as the thermal interface. On the front side the heatsink carries small, thin fins blown by the fan; in the area of the power circuitry it has spikes instead, again to increase the effective surface area. The exposed lower part of the heatsink plate is covered with a dielectric film, which prevents short circuits from accidental contact with board components.

With the cooling system removed, the board looks like this.

As you can see, the chip has no protective frame. Its die is quite small despite packing 666 million transistors. Our sample was manufactured in the 43rd week of this year.

The Samsung memory chips have an access time of 1.1 ns, which corresponds to a rated frequency of 1800 MHz.

There is therefore a small memory frequency margin; we will find out below whether it can be realised.

Monitoring and overclocking

RivaTuner 2.06 and ATITool 0.27b3 both work with the new card, detecting it immediately without problems. The frequencies of this sample are 668/828 (1656 effective) MHz. In 2D the chip runs at 297 MHz; under load its frequency immediately rises to 668 MHz. Interestingly, ATITool always shows the maximum chip frequency, while RivaTuner records the different values.

The idle temperature stays around 50°C, at which point the cooler can stop completely. At maximum load the fan speed rises to 45% and the chip heats to 90°C; the maximum temperature recorded during testing was 91°C.

ATI/AMD cards have always run hot, but for a processor built on 55 nm technology this temperature is still high. The problem is easily addressed, since RivaTuner and ATITool both allow fan-speed control, though the noise grows accordingly. We raised the speed to 60%: the fan hum becomes noticeable but tolerable and not too intrusive, and the change reduced the load temperature by 20°C, to 69°C.

Naturally, an overclocking attempt was made. Since ATITool had some problems in this regard, overclocking was carried out with RivaTuner. The chip could be pushed to 800 MHz and beyond, but the card was unstable there, so we had to settle for 750 MHz. The memory began to work unstably and produce artifacts at around 2050 MHz; at 2000 MHz it ran without failures.

That is a 12% overclock on the graphics core and 20% on the video memory, and the card clearly has more potential, so overclocking enthusiasts will not be disappointed. Stable operation at higher frequencies would require raising the voltage, which can be achieved with a volt mod. Our overclock is of the exploratory, everyday kind: it is entirely safe, and most users are limited to software overclocking anyway. It will also give an idea of how RV670 performance scales with frequency.
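The quoted percentages follow from the stock and overclocked frequencies mentioned above. A one-line check (the helper is ours; note the memory figure actually rounds to 21%, which the text rounds down to 20%):

```python
def overclock_percent(stock_mhz, oc_mhz):
    """Overclock headroom as a whole-number percentage over stock."""
    return round((oc_mhz - stock_mhz) / stock_mhz * 100)

core_gain = overclock_percent(668, 750)      # 12% on the core
memory_gain = overclock_percent(1656, 2000)  # ~21%, quoted as 20% above
```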

Test participants

For comparison with the Radeon HD 3850 256MB, the following video cards were selected:

Leadtek GeForce 8800GT 512MB

Comparison with this competitor is the most interesting, because it too is a new video card and is positioned by Nvidia for the same segment. So far, though, its street price is higher than even that of the senior model, the Radeon HD 3870. But a 256MB version of this card should appear soon, priced approximately at the level of the Radeon HD 3850.

Leadtek GeForce 8800GTS 640Mb

This card is inferior to the GeForce 8800GT: although more expensive than the newer card, it is slower. But can it keep its lead over the Radeon? After all, the former AMD flagship, the Radeon HD 2900XT, competed successfully with this card, and the Radeon HD 38xx shares the same architecture, so there is reason to hope for some parity in the tests.

XpertVision GeForce 8600GTS 256Mb

This opponent is the closest in price to the Radeon HD 3850, even a little cheaper, but it has much more modest specifications. That it is slower than the Radeon is clear in advance; the only question is by how much.

ASUS Radeon X1950 PRO 256Mb

This opponent was chosen as the most powerful mid-range Radeon of the previous two generations. It is no secret that the Radeon HD 2600XT could not outperform this card, losing to the GeForce 8600GTS as well, and the Radeon X1950 PRO was in its day a 'people's' video card. Most of our tests use DirectX 10 games, so this card appears only in the older DirectX 9 titles. Such a small selection of games will not give a precise picture of the performance gap between this card and the next generation, but it will allow a general impression of the new architecture's advantage.

Test configuration and test features

Test stand:

  • Processor: Core 2 Duo E4400 2 GHz (overclocked to 3.25 GHz, 325 MHz FSB)
  • Cooler: Zalman CNPS 7000B Al-Cu
  • Motherboard: Gigabyte P35-S3
  • Memory: 2x1Gb GOODRAM PC6400 (813 MHz with 5-4-4-12 timings)
  • Hard drive: 320Gb Hitachi T7K250
  • Power Supply: CoolerMaster eXtreme Power 500-PCAP
  • Operating systems: Windows XP SP2, Windows Vista Ultimate 32-bit
  • GeForce video card drivers: ForceWare 163.71 (WinXP), 163.75 (WinVista)
  • GeForce 8600GTS 256MB drivers for WinVista: ForceWare 162.22
  • GeForce 8800GT 512MB drivers: ForceWare 169.02
  • Radeon graphics drivers: Catalyst 7.11

In testing we used a set of applications similar to the recent GeForce 8800GT 512MB review. That card used its own drivers, because the latest official 163-series drivers do not yet support it. While testing the GeForce 8600GTS in DirectX 10 we found that the latest ForceWare 163.75 drivers significantly reduce performance on this particular card. The situation is paradoxical, but installing the older ForceWare 162.22 improved this card's results by 10-20% in some tests, so that version was used for the testing. Under Windows XP the latest drivers showed no such degradation and even slightly improved performance over older versions, so for the DirectX 9 tests the GeForce 8600GTS used ForceWare 163.71.

3DMark 2006

Traditionally, we start with synthetic tests, namely the latest version of 3DMark.

The GeForce 8800GT confidently takes first place. The Radeon trails it by 21% but is almost 2% ahead of the GeForce 8800GTS. The gap to the weaker cards is simply huge, 50-80%. Overclocking raises the Radeon's result by another 10%; that is not enough to catch the leader, but the advantage over the GeForce 8800GTS becomes impressive.

F.E.A.R.

The game is not new, but still quite popular. The benchmark built into the game was used for the test.

Given the age of the game, there is no sense in testing it in light modes. The game was set to the highest graphics quality with AF16x, AA4x, and soft shadows enabled.

The balance of power has not changed much: first place belongs to the GeForce 8800GT, followed by the GeForce 8800GTS and the Radeon HD 3850. Overclocking brings a very high performance gain of 23-27% in this game, almost catching the GeForce 8800GTS. Interestingly, the Radeon X1950 PRO's lag behind the newcomer (at stock clocks) is no longer so large, at 25%. The outsider in this test is the GeForce 8600GTS, which trails the Radeon HD 3850 by 40-50%; overclocking the Radeon stretches that to 75-90%.

S.T.A.L.K.E.R.

One of the best games of this year, from the Ukrainian studio GSC.

Maximum graphics settings with full dynamic lighting and anisotropic filtering; anti-aliasing is not available in this mode.

Nvidia's cards feel right at home in this game. The senior models lead confidently, with the gap between the Radeon HD 3850 and GeForce 8800GT at 70%; overclocking claws back only an additional 10%. The outsider here, though, is the Radeon X1950 PRO, trailing the junior Nvidia card by 5-9% and the RV670-based opponent by 66-97%.

PT Boats: Knights of the Sea DX10

We test the first few DirectX 10 games in modes with and without anti-aliasing, starting with the PT Boats demo. The Radeon X1950 PRO sits this one out, as it does not support DirectX 10.

Graphics settings are maximum. Since anti-aliasing causes a significant drop in performance in this game, only AA2x is used in the test.

Apart from the two permanent leaders from Nvidia, the rest of the cards show an incredibly low level of performance; the leaders' superiority here is measured not in percentages but in multiples, with the Radeon a full 8 times slower! Such a dire situation is most likely down to the Catalyst drivers: new graphics cards sometimes have problems with support for some new games, and this is a prime example. Even the GeForce 8600GTS 256MB managed a higher frame count, though it too is utterly unplayable.

World in Conflict DX10

A popular strategy title with excellent graphics for its genre, but high system requirements. The GeForce 8800GT performed quite well in this game; let's see how the Radeon fares.

Graphics settings at maximum quality, which automatically enables AF16x filtering and AA4x anti-aliasing. The test was run both with these parameters and with anti-aliasing disabled.

For the Radeon HD 3850 this game is a tough nut to crack. It trails the leaders by 30-50%, overclocking recovers no more than 10%, and even at 1024x768 the average fps cannot be called comfortable. Against its closest price competitor, the GeForce 8600GTS 256MB, however, the newcomer demonstrates a twofold advantage. Enabling anti-aliasing causes a significant performance drop, larger in percentage terms than on the Nvidia cards: at high resolution performance halves. The small amount of video memory tells here, yet even at the low resolution of 1024x768 the GeForce 8600GTS loses only 10%, while the Radeon HD 3850 loses 40% with the same 256MB. A large performance drop in 'heavy' modes was also characteristic of the Radeon HD 2900 series predecessors; apparently ATI/AMD never solved this problem.

Call of Juarez DX10

The next test is the demo version of the popular western Call of Juarez.

Graphics and shadow settings at maximum; anti-aliasing was not enabled for the test in this game.

In this game the result differs sharply from all that came before: suddenly the Radeon HD 3850 comes out on top, 17% ahead of its main competitor, the GeForce 8800GT. As the resolution increases, the Nvidia cards close the gap, but the results of a card with half the video memory and a much lower price are still very impressive. Overclocking adds a further 15-20%, and the advantage over the GeForce 8600GTS 256MB is almost twofold. One can only hope that Nvidia will gradually improve performance in this game through drivers.

Crysis DX10

The most anticipated game, one that made many think about a total upgrade. The phenomenon is significant less for the gaming industry than for the computer market: thanks to Crytek, computer manufacturers' sales will certainly climb.

The test was run only at the maximum 'very high' settings. Anti-aliasing was not enabled, because performance in this game already leaves much to be desired.

The GeForce 8800GT stays confidently on the podium. The Radeon HD 3850 is almost on a par with the GeForce 8800GTS, though the lag becomes noticeable as resolution increases. Overclocking adds 12% of performance, and the advantage over the GeForce 8600GTS 256MB is again huge, nearly twofold.

Conclusions

Before drawing conclusions it is worth looking at the prices of the cards in question. At the time of its appearance on our market the Radeon HD 3850 costs about $200. The closest card by price is the GeForce 8600GTS, but for a 20% price difference the performance difference is much larger: the Radeon wins by 50-100%, the strange result in PT Boats aside. The GeForce 8800GT 512MB and GeForce 8800GTS 640MB cost $300 and $380 respectively, and the 8800GT is still in short supply. The new Nvidia card's advantage is significant, of course, but the Radeon HD 3850 simply has no competitor in its price segment; as the tests showed, it is not even worth comparing with the GeForce 8600GTS, which sits in a different weight category. So if you are shopping with a budget of up to $200, you can take the Radeon HD 3850 without hesitation.

The Radeon HD 3850 has every chance of repeating the success of the Radeon X1950 PRO and becoming a bestseller. These cards are already widely available and offer the best price/performance ratio in this category. Perhaps the situation will change with the release of the GeForce 8800GT 256MB, but AMD is already promising price cuts, and the new GeForce cards still exist more on paper than on store shelves.

Apart from the poor performance in PT Boats, no problems were noted in games: not a single crash or error. Even the refresh-rate issue that affected the Radeon HD 2600 was absent. This stability was a pleasant surprise, especially considering that when new Radeon models first appeared they constantly had problems running under Vista in DirectX 10. Only Nvidia left a slightly unpleasant aftertaste, with the strange behavior of its latest drivers on the GeForce 8600GTS.

The stock cooling system is not the most efficient, but it does its job perfectly well if you raise the fan speed a little. The card has overclocking headroom, and overclocking gives a noticeable return: raising the graphics core by 12% and the memory by 20% yields a stable 10-20% performance gain, and extreme overclocking will let you squeeze out considerably more.

The weak point of the card under consideration is its small amount of memory; even cards of the previous generation were often equipped with twice as much. So far only the Radeon HD 3870 is manufactured in a 512MB version. We will try to test that card and compare it with its Nvidia rivals as soon as possible.

ATI Radeon SDR 32 video card review

What drives manufacturers to sell quality products at affordable prices is healthy competition, so when one very strong player is taken over or bought by another, larger company, it is a little sad. Without going into the reasons for the triumph and subsequent collapse of 3dfx, we can only state that the highly respected NVidia now has just one serious competitor left in the graphics market: ATI. This Canadian company's policy is not to sell its chips to third parties, preferring to produce its own graphics cards. Today we look at one such card, the ATI Radeon SDR 32 ($135), based on the chip of the same name. This model is positioned as a capable 2D/3D accelerator for midrange systems and a direct competitor to video cards based on the NVidia GeForce2 MX chip.

Specifications:

  • Chip: Radeon;
  • AGP 2x/4x interface;
  • 160 MHz core, 160 MHz SDR SDRAM;
  • Render pipelines: 2;
  • Fillrate: 320 million pixels per second;
  • Setup engine and T&L speed: 30 million polygons per second;
  • RAMDAC frequency: 350 MHz;
  • Maximum resolution: 2048×1536 at 60 Hz
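The quoted fillrate follows directly from the clock speed and pipeline count. As a quick sanity check (a minimal sketch of the usual theoretical-fillrate relationship, not a figure from ATI's documentation):

```python
# Theoretical pixel fillrate = core clock (MHz) * number of pixel pipelines.
core_mhz = 160        # Radeon SDR core clock, from the spec list above
pipelines = 2         # render pipelines, from the spec list above

fillrate_mpix = core_mhz * pipelines
print(fillrate_mpix)  # 320, i.e. 320 million pixels per second, matching the spec
```

Real-world fillrate is of course lower, since memory bandwidth and overdraw get in the way; the spec is a theoretical peak.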

There is a very interesting point here: the GeForce2 MX is a cut-down version of the GeForce2 GTS chip (half as many rendering pipelines, a memory bus half as wide, and lower clocks), while the ATI Radeon SDR card uses a fully functional version of the Radeon chip, which means its T&L core performs the same as that of the more expensive model. Let's see what this means in practice...

How we tested

  • Intel Celeron 700 processor
  • Asus CUSL2 motherboard (i815E)
  • 128MB PC-133 SDRAM memory
  • SB Live! Value
  • Fujitsu MPF 3204AT HDD
  • ATI Radeon SDR 32MB video card (driver D714-0831a-62B-SPD), $135
  • Asus AGP V7100 (GeForce2 MX) 32MB video card (driver 6.31), $130
  • 17" ViewSonic PF775 monitor

The set of settings provided by the Radeon drivers is not very large. However, the most important functions are present, such as the ability to enable full-screen anti-aliasing (FSAA). We began our examination of the Radeon SDR with an assessment of its anti-aliasing quality.

Anti-aliasing can significantly improve the quality of 3D rendering. Consider the OpenGL game Heavy Metal: FAKK2 with ATI's FSAA enabled. The anti-aliasing quality is quite good, though not as high as with the most resource-hungry 2x2 FSAA on GeForce cards.
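To illustrate the supersampling idea behind this kind of FSAA (a minimal sketch of the general 2x2 box-filter technique, not ATI's actual implementation): the scene is rendered at twice the target resolution in each dimension, then every 2x2 block of samples is averaged into one output pixel, turning hard edges into intermediate shades.

```python
def downsample_2x2(img):
    """Average each 2x2 block of a supersampled image into one pixel.

    `img` is a list of rows of grayscale values with even dimensions.
    This mimics the box filter used by simple 2x2 supersampling FSAA.
    """
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            s = img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
            row.append(s / 4)
        out.append(row)
    return out

# A jagged black/white edge rendered at 2x resolution:
hi = [[0, 0, 255, 255],
      [0, 255, 255, 255],
      [0, 0, 255, 255],
      [0, 255, 255, 255]]
print(downsample_2x2(hi))  # the edge pixels come out as intermediate grays
```

The cost is obvious from the sketch: the card shades four samples for every pixel on screen, which is why FSAA is so expensive on this class of hardware.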

Now let's look at screenshots that show the Radeon's sorest spot: dithering in high color.

640×480, 16-bit

640×480, 32-bit

As we can see, the picture in high color is unattractive: the annoying dithering artifacts are especially noticeable on translucent textures. Fortunately, the speed difference between the 16-bit and 32-bit modes on the Radeon is quite small, so we decided to run the game tests in true color.

Let's start with the 3DMark 2000 v1.1 test suite, which uses the same graphics engine as the still-unfinished game Max Payne. In that sense the test is not a pure synthetic benchmark, and its results reflect how a system behaves in real Direct3D games.

While the GeForce2 MX leads in high color, in true color the Radeon pulls ahead, which may be a consequence of the higher speed of its hardware transform and lighting (T&L) unit.

Now for our traditional test, Heavy Metal: FAKK2. The MX leads at low resolutions, but the Radeon begins to outperform its competitor from 1024×768 in 32-bit color. Playability remains at a fairly good level throughout.

There is nothing special to say about the Unreal Tournament results: both cards posted very similar numbers. The game itself is too CPU-bound to show a noticeable difference between two modern graphics chips.

And finally, the Dagoth Moor Zoological Garden technology demo, which makes heavy use of hardware T&L.
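For context, the "T&L" this demo stresses is the per-vertex transform-and-lighting stage that a chip like the Radeon performs in hardware instead of on the CPU. A minimal sketch of the transform half (illustrative only; the function and matrix here are our own, not from any driver API):

```python
def transform(vertex, matrix):
    """Apply a 4x4 row-major transform matrix to a homogeneous vertex.

    This is the kind of per-vertex arithmetic a hardware T&L unit
    offloads from the CPU, repeated for every vertex in the scene.
    """
    x, y, z, w = vertex
    return tuple(
        matrix[r][0] * x + matrix[r][1] * y + matrix[r][2] * z + matrix[r][3] * w
        for r in range(4)
    )

# Translate the point (1, 2, 3) by 10 units along X:
m = [[1, 0, 0, 10],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
print(transform((1, 2, 3, 1), m))  # (11, 2, 3, 1)
```

Doing this for tens of thousands of polygons per frame is exactly where a dedicated T&L unit, rated here at 30 million polygons per second, pays off.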