DVM's GeForce FX 5600 Ultra - Radeon 9600 Pro Showdown
HOW DO NVIDIA'S & ATI'S LATEST MID-RANGE CARDS STACK UP?
BENCHMARKING SCHEME... To provide a solid basis for comparing these four video cards, I selected a total of five benchmarks, a mix of "synthetic" multi-test video benchmarks and actual games, covering both OpenGL and Direct3D rendering modes:
- 3DMark03 Build 320 Multi-test Video Benchmark (DirectX 9)
- 3DMark2001 SE Build 330 Multi-test Video Benchmark (DirectX 8)
- GL Excess v.1.2b Multi-test Video Benchmark (OpenGL)
- Quake 3 Arena "Demo002" Timedemo (OpenGL game)
- Unreal Tournament 2003 Demo v.226 Benchmark Test (DirectX 8 game)
For each of these tests, the benchmark was run under four different conditions on each video card:
- Default core & memory clockspeeds, antialiasing & anisotropic filtering off
- Default core & memory clockspeeds, 4x antialiasing & 8x anisotropic filtering
- Overclocked core & memory settings, antialiasing & anisotropic filtering off
- Overclocked core & memory settings, 4x antialiasing & 8x anisotropic filtering
Considering the relative ease with which today's video cards can be overclocked, and the free speed boost that results, the inclusion of overclocked data has become "standard operating procedure" in studies of this sort. Including data to quantify the impact of antialiasing (AA) and anisotropic filtering (AF) has also become de rigueur, since these visual quality enhancement techniques are now so widely used. And, despite the advances in 3D technology, they can still exact a very significant performance penalty, especially AA.
Additionally, as mentioned above, the Radeon 9500 was subjected to two full sets of tests, one running with the stock card configuration, and the second using "hacked" drivers to enable the additional 4 pixel pipelines of the card, i.e. in "Radeon 9700" mode. So the benchmarking plan worked out to a total of 5 cards x 5 benchmarks x 4 conditions = 100 total tests.
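Just to spell that arithmetic out, here is a purely illustrative snippet (not a tool I actually ran, just bookkeeping) that enumerates the full test matrix; the card, benchmark, and condition names are simply the ones listed above:

    # Illustrative only: enumerate the benchmarking matrix described above.
    from itertools import product

    cards = [
        "Radeon 9500 (stock)",
        "Radeon 9500 (9700 mode)",   # hacked driver enables the extra 4 pixel pipelines
        "GeForce4 Ti4200",
        "GeForce FX 5600 Ultra",
        "Radeon 9600 Pro",
    ]
    benchmarks = [
        "3DMark03",
        "3DMark2001 SE",
        "GL Excess",
        "Quake 3 Arena timedemo",
        "UT2003 demo benchmark",
    ]
    conditions = [
        "default clocks, AA/AF off",
        "default clocks, 4xAA/8xAF",
        "overclocked, AA/AF off",
        "overclocked, 4xAA/8xAF",
    ]

    runs = list(product(cards, benchmarks, conditions))
    print(len(runs))   # 5 x 5 x 4 = 100 individual benchmark runs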
All of the testing was done at a screen resolution of 1024x768 and 32-bit color depth, with the exception of the Quake 3 Arena timedemo, which was run at 1600x1200x32. This higher resolution setting was used because this older game just doesn't "push" a video card hard enough at 1024x768.
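For anyone who wants to script a similar Quake 3 run rather than typing at the console, something along these lines should work. Treat it strictly as a sketch: the install path is a placeholder, and the r_mode value for 1600x1200 can vary by version, so double-check it against your own setup before trusting the numbers.

    # Rough sketch: launch Quake 3 Arena and play back the "demo002" timedemo.
    import subprocess

    QUAKE3_EXE = r"C:\Games\Quake3\quake3.exe"   # hypothetical install path

    subprocess.run([
        QUAKE3_EXE,
        "+set", "r_mode", "9",        # assumption: mode 9 = 1600x1200
        "+set", "r_colorbits", "32",  # 32-bit color
        "+set", "timedemo", "1",      # run the demo as a timed benchmark
        "+demo", "demo002",           # play back the standard demo002 recording
    ])
    # The average framerate is reported on the in-game console when playback ends.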
Regarding the overclocked core and memory speed settings, I used the maximum stable speeds that I have determined in the past by trial & error for my particular Radeon 9500 (351/580) and GF4 Ti4200 (300/550) cards. For the Gainward FX 5600, I simply utilized the "enhanced mode" settings (450/900) that are available through Gainward's bundled EXPERTool utility. For the Powercolor 9600 Pro, I used the empirical approach again, attaining maximum stable clockspeeds of 515MHz core and 720MHz memory.
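Collected in one place for reference, those overclocked settings look like this; the values are taken straight from the paragraph above (core/memory in MHz), and the little table itself is purely illustrative:

    # Maximum stable / "enhanced mode" overclocks used in this study (MHz).
    # Default runs simply used each card's stock clocks.
    overclocks = {
        "Radeon 9500":         {"core": 351, "memory": 580},
        "GeForce4 Ti4200":     {"core": 300, "memory": 550},
        "Gainward FX 5600":    {"core": 450, "memory": 900},  # EXPERTool "enhanced mode"
        "Powercolor 9600 Pro": {"core": 515, "memory": 720},
    }

    for card, clocks in overclocks.items():
        print(f"{card}: {clocks['core']}/{clocks['memory']}")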
BENCHMARK TEST PLATFORM... I carried out the benchmarking on my Dell Dimension 8100 (see configuration details at right), which has been upgraded to a 2.6GHz Northwood Pentium 4 using the PowerLeap P4/N adapter, with Windows XP Pro as the operating system. I would consider this a fairly typical gaming rig by current standards: it has a good amount of processing horsepower, but is still well shy of the current state of the art. Though the CPU is quite powerful, the system is somewhat limited by its older Intel 850 motherboard, with its 400MHz front-side bus and PC800 RDRAM.
Swapping video cards in and out of a PC naturally involves changing video drivers, too. For the Radeon 9500, the ATI Catalyst 3.5 driver set was used, with W1zzard's hacked ATI2MTAG.SYS file (available HERE) utilized for running the card in 9700 mode. Core and memory clockspeed adjustments were handled with Rage3D's Radeon Tweak Utility, v.3.9.
For the Ti4200 and FX 5600 cards, the Nvidia Detonator XP v.44.03 drivers were used. Overclocking of the Ti4200 was done via the popular Riva Tuner tweaking utility, while Gainward's EXPERTool utility (included on the driver CD with the card) was employed on the FX 5600.
Testing for the Powercolor 9600 Pro was done using the Catalyst 3.7 drivers. This was possible because the Powercolor card was purchased & tested about a month after the original benchmarking, and the newer drivers had become available in the meantime. It should be noted, however, that the Catalyst 3.7's are largely a "bug fix" driver release, and there is no significant performance difference between them and the 3.5 version used earlier in this study. Overclocking was accomplished with the Rage3D Radeon Tweak Utility.
Just a quick note on changing drivers. I've found that with the drivers available these days from ATI and Nvidia, this process is much easier and safer than it used to be. For all the swapping of cards that I did for this study, I never had to do anything more complicated than uninstall the previous drivers from Add/Remove Programs, power down, switch cards, boot back up and install the new drivers. No complicated driver clean out procedure or deleting of Registry keys was necessary at any time in the process, nor did I encounter the dreaded "black screen" or get locked up in 16-color mode. That's good news for anyone who needs to change a video card and remembers what a messy process it often used to be.
So, let's move on to the benchmark results and their interpretation....