
The Buying Guide: 6600GT - GTX280 / HD4870X2 (24.11)


guezz


I have noticed that numerous people have questions about the performance of today’s high-end graphics cards. I am therefore making this thread to explain the performance of the various cards and their positive and negative sides.

 

Useful link(s)

The Introduction Guide - Advices, Technologies, Drivers & Utilities

Technical vocabulary

 

Content Overview

Bang for buck cards

• Performance ranking by brand and generation

- NV4X

- G7X

- G8X

- G9X

- GT200

- R4X0

- R5X0

- R6X0

- R7X0

• Technologies by brand and generation

- NV4X

- G7X

- G8X

- G9X

- GT200

- R4X0

- R5X0

- R6X0

- R7X0

 

__________________________________________________

 

Update: 24 November 08

__________________________________________________

 

Disclaimer

This guide is meant as a quick overview of performance and technology. I have tried to be as objective as possible. The performance ranking gives a general idea of average performance in games and is collected from more than one review (most used are: AnandTech, The Tech Report, X-bit Labs, FiringSquad and Beyond3D). If you disagree about something or find factual errors, please reply to this thread or preferably send a PM, although I will ignore anything that is not backed up by facts (sod off, fanboys).

 

Important information

Today it’s in my opinion a bit obsolete to talk about ”pixel pipelines” for the latest generations of video cards (R5X0 and G7X). This is why I have decided to list: pixel processors, TMUs, ROPs (z compare units are part of a ROP) and vertex processors.

 

”A pixel processor calculates different effects for a displayed pixel to create realistic materials and surfaces. In general, the more pixel processors a card has, the better the performance. Not all games use the same number of shaders, so the performance will fluctuate between them.”

 

”The most basic explanation of the function of a Texture Mapping Unit (TMU) is that it uses a 2D image to “clothe” a 3D figure.”

 

”A Render Output Unit or Raster Operations Pipeline (ROP) performs one of the final steps in producing the final image. The ROP takes care of transforming information stored in the GPU’s memory into the pixels which are displayed on a monitor. The basic tasks for a ROP are: AA, blending and z-buffer de-/compression.”

 

”The basic task for a vertex processor is to control the behaviour and appearance of the corners (vertices) of the triangles which 3D objects are built upon.”

 

”A unified processor can do pixel, vertex and, in time, geometry (a DX10 feature) shading. ATI and nVidia have different types of processors!”

 

__________________________________________________

 

Bang for buck cards (24 November)

<1000 = 9600 GT 512MB (700)

Pretty high IQ requirements. Typical: 1680x1050 high settings 16xAF.

<1500 = HD 4850 512MB (1150)

High IQ requirements. Typical: 1680x1050 high settings 2xMSAA 16xAF.

<5000 = HD 4870 512MB (1900)

Very high IQ requirements. Typical: 1680x1050 high settings 4xMSAA 16xAF.

 

__________________________________________________

 

The video cards are ranked by their performance

 

How to read each entry, using the X1800 XT as an example: card name and memory configurations (core frequency MHz / memory frequency MHz) (fill rate (pixel output) Gpixel/s / memory bandwidth GB/s) (interfaces available)
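
If you want to check the two derived numbers yourself, here is a minimal sketch of the arithmetic (plain host C++, compilable with any C++ or CUDA compiler), using the 6800 GT entry below as the example. One assumption to note: the pixel-output figure is taken as pixels written per clock (ROPs) times the core clock, although a few entries in this guide count pixel processors instead.

```cuda
#include <cstdio>

int main()
{
    // Example figures from the 6800 GT entry: (350/1000) (5.6/32.0) 256-bit
    const double core_mhz      = 350.0;   // core frequency in MHz
    const double mem_mhz       = 1000.0;  // effective (DDR) memory frequency in MHz
    const int    pixels_per_clk = 16;     // pixels written per clock (16 ROPs)
    const int    bus_bits      = 256;     // memory interface width in bits

    // Fill rate (pixel output): pixels per clock times core clock.
    const double fill_gpix = pixels_per_clk * core_mhz / 1000.0;   // -> 5.6 Gpixel/s

    // Memory bandwidth: effective memory clock times bus width in bytes.
    const double bw_gbs = mem_mhz * 1e6 * (bus_bits / 8.0) / 1e9;  // -> 32.0 GB/s

    printf("%.1f Gpixel/s, %.1f GB/s\n", fill_gpix, bw_gbs);
    return 0;
}
```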

 

 

nVidia

 

6800 LE 128/256MB (320/700) (2.56/22.4) (AGP/PCIe) 256-bit

- 8 pixel processors – 8 TMUs - 8 ROPs - 4 vertex processors

It can often be softmodded into a 6800. If you are especially lucky, 16 pixel processors can be within reach. The 6800 LE has GDDR memory. Out of the box it is a bit faster than a 9800 Pro, but if the softmod succeeds a major performance boost can be expected.

 

6800 XT 128/256/512MB (325/700) (2.6/22.4) (AGP/PCIe) 256-bit

- 8 pixel processors – 8 TMUs – 8 ROPs - 4 vertex processors

Performance-wise this card is pretty much identical to a 6800 LE. The AGP version can be unlocked to 12x1 and 6 vertex processors. It’s advisable to choose models with GDDR3 memory rather than GDDR if overclocking is important to you.

 

6600 GT 128/256MB (500/900 (1000 PCIe)) (2.0/16.0) (AGP/PCIe) 128-bit

- 8 pixel processors - 8 TMUs - 4 ROPs - 3 vertex processors

The high core and memory (GDDR3) frequencies mostly eliminate the architectural deficiencies (4 ROPs, 3 vertex shader units and 128-bit memory). It’s about 20% faster than a 9800. The performance difference between the AGP and PCIe versions is negligible. It can’t be softmodded. The 6600 GT is a better deal than a 6800 LE if you don’t want to mod a card.

 

6800 128/256MB (325/700) (3.9/22.4) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 8 ROPs - 5 vertex processors

16 pixel processors and 6 vertex processors can be achieved through a softmod if you’re successful. Slow GDDR memory prevents high frequencies, crippling performance compared to a 6800 GT. It’s about 15% faster than a 6600 GT, but struggles to keep up with faster cards like the X800 XL and 6800 GT.

 

6800 GS 256/512MB (350 (425 PCIe)/1000) (5.1/32.0) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 8 ROPs - 5 vertex processors

The PCIe version is marginally slower than a 6800 GT, and the performance hit is a bit larger if AA and AF are used. The AGP version is even slower due to the lower core frequency. The performance is quite an achievement considering the apparently weaker architecture compared to a 6800 GT. 16x1 and 6 vertex processors can be achieved for the AGP version through a softmod.

 

6800 GT 256MB (350/1000) (5.6/32.0) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

The card is substantially faster than a 6800. Its GDDR3 memory can achieve higher frequencies than the slower GDDR that cards like the LE and NU are equipped with. An unmodded X800 Pro is a bit slower.

 

6800 Ultra 256MB (400/1100) (6.4/35.2) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

The only differences between an Ultra and a GT are the higher frequencies and a marginally higher voltage (0.1 V).

 

These cards are a bit special

- Asus V9999 6800 GT 128MB (350/700) (5.6/22.4) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

The core is based upon the one used in a 6800 GT, so it has 16 pixel processors, but the performance is crippled by the slow GDDR memory. It’s substantially faster (~15%) than a 6800 for obvious reasons.

 

- MSI NX6800-TD 128MB (350/700) (5.6/22.4) (AGP) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

The core is based upon the one used in a 6800 GT. It’s very important to note that this card must be softmodded before it gets 16x1 and 6x1. The performance is crippled by the slow GDDR memory. Softmodded, it’s substantially faster (~15%) than a 6800 for obvious reasons. It runs cooler thanks to a good copper cooler and therefore often overclocks better.

 

- Asus V9999 6800 256MB Gamer Edition (350/1000) (2.8/32.0) 256-bit

- 12 pixel processors – 12 TMUs – 8 ROPs – 5 vertex processors

The GE has identical frequencies to a 6800 GT, but is crippled by having only 12 pixel processors. You can achieve 16 pixel processors through a softmod. It’s a bit slower than an X800 Pro.

 

G7X

 

7300 GT 256MB (350/667) (1.4/10.7) (PCIe) 128-bit DLDVI

- 8 pixel processors – 8 TMUs – 4 ROPs - 5 vertex processors

It’s marginally slower than an X1600 Pro, and thus a bit slower than a 6600 GT. Slow GDDR2 memory cripples performance.

 

7600 GS 256/512MB (400/800) (3.2/12.8) (AGP/PCIe) 128-bit DLDVI

- 12 pixel processors – 12 TMUs – 8 ROPs - 5 vertex processors

Performance-wise the card is pretty much identical to a 6800, which means ~15% faster than a 6600 GT.

 

7600 GT 256MB (560/1400) (4.48/22.4) (AGP*/PCIe) 128-bit DLDVI

- 12 pixel processors – 12 TMUs – 8 ROPs - 5 vertex processors

It’s a bit faster than a 6800 Ultra, so it’s natural to compare it to a 7800 GS. The X1800 GTO has similar performance. Good overclocker.

* Leadtek WinFast A7600 GT TDH and XFX 7600 GT

 

7800 GS 256MB (375/1200) (3.0/38.4) (AGP) 256-bit DLDVI

- 16 pixel processors – 16 TMUs – 8 ROPs - 6 vertex processors

This video card is a bit faster than a 6800 Ultra, and the performance is equal to an X800 XT PE. Games using HDR with FP blending (like Far Cry) can increase the performance gap over a 6800 Ultra to 50%. The card can’t be softmodded.

 

7800 GT 256MB (400/1000) (6.4/32.0) (PCIe) 256-bit DLDVI

- 20 pixel processors – 20 TMUs – 16 ROPs - 7 vertex processors

The card can’t be softmodded to a GTX. The performance ranges from 10% slower to 40% faster (1600x1200) than a 6800 Ultra. It’s natural to see this card as the replacement for the 6800 Ultra on the PCIe market.

 

7800 GTX 256MB (430/1200) (6.88/38.4) (PCIe) 256-bit DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

Performance-wise it’s a bit weaker than 6800 Ultra SLI. In pixel-shader-intensive games it truly shines and is then marginally faster. The X1800 XT 256MB is a bit faster.

 

7900 GS 256MB (450/1320) (7.2/42.2) (PCIe) 256-bit D-DLDVI

- 20 pixel processors – 20 TMUs – 16 ROPs - 7 vertex processors

The performance is pretty much equal to a 7800 GTX.

 

7900 GT 256MB (450/1320) (7.2/41.6) (PCIe) 256-bit D-DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

It’s a bit faster than a 7800 GTX. An X1800 XT 256MB is marginally faster. Good overclocker.

 

7950 GT 256/512MB (550/1400) (8.8/48.8) (AGP*/PCIe) 256-bit HDCP D-DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

The card is about 20% faster than a 7900 GT and a bit weaker than the X1900 XT 256MB.

* XFX

 

7900 GTO 512MB (650/1320) (10.4/42.2) (PCIe) 256-bit DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

It’s a bit weaker than an X1900 XT 256MB in performance.

 

7800 GTX 512MB (550/1700) (8.8/54.4) (PCIe) 256-bit DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

A 7900 GTX is a bit faster. It’s marginally slower than an X1900 XT 512MB.

 

7900 GTX 512MB (650/1600) (10.4/51.2) (PCIe) 256-bit D-DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

Performance-wise it’s substantially (~30%) faster than a 7900 GT. ATI matches it with the X1900 XTX, which is marginally faster.

 

7950 GX2 512MB (500/1200) (8.0/38.4) (PCIe) 256-bit HDCP D-DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors (per GPU)

It’s important to emphasize that this ”card” uses SLI technology with its ups and downs (e.g. 512MB, not 1024MB). 7900 GT SLI is a bit weaker, and the gap usually widens even further at extreme resolutions and/or settings. Check motherboard and BIOS compatibility.

 

These cards are a bit special

- Galaxy 7300 GT 256MB (500/1400) (2.0/22.4) (PCIe) 128-bit DLDVI

- 8 pixel processors – 8 TMUs – 4 ROPs - 5 vertex processors

The performance is similar to 7600 GS / X1600 XT. GDDR3 memory and a good core make it possible to achieve such high frequencies.

 

- Gainward BLISS 7800 GS SILENT 512MB (425/1200) (6.8/38.4) (AGP) 256-bit DLDVI

- 20 pixel processors – 20 TMUs – 16 ROPs - 7 vertex processors

This is essentially a 7800 GT disguised as a 7800 GS. A 7800 GTX (reference frequencies) is about 15 % faster.

 

- Gainward BLISS 7800 GS+ SILENT 512MB (450/1250) (7.2/40.0) (AGP) 256-bit DLDVI

- 24 pixel processors – 24 TMUs – 16 ROPs - 8 vertex processors

This is essentially a 7900 GT disguised as a 7800 GS. Because of the slower memory frequency it's marginally slower.

 

G8X

 

8500 GT 256/512MB (450/900/800) (1.8/12.8) (PCIe) 128-bit DLDVI

- 16 unified processors – 8 TMUs – 4 ROPs

It’s about as fast as an X1600 Pro and thus a bit slower than the 6600 GT.

 

8600 GT 256MB (540/1180/1400) (4.32/22.4) (PCIe) 128-bit D-DLDVI

- 32 unified processors – 16 TMUs – 8 ROPs

The performance is about the same as a 7800 GT which in turn puts it ~15% above a 7600 GT. An 8600 GTS is about 15% faster.

 

8600 GTS 256MB (675/1450/2000) (5.4/32.0) (PCIe) 128-bit DL-HDCP D-DLDVI

- 32 unified processors – 16 TMUs – 8 ROPs

The performance is similar to a 7900 GS.

 

8800 GTS 320MB (500/1200/1600) (10.0/64.0) (PCIe) 320-bit HDCP D-DLDVI

- 96 unified processors – 24 TMUs – 20 ROPs

The only thing separating it from a regular GTS is the lower amount of memory. The memory bottleneck is most apparent at 1680x1050 or higher resolutions in 2007+ games, ranging from no difference at all to a large one. Overall it isn’t currently a major bottleneck (at 1680x1050 or less with a bit of AA), but more complex titles later on are bound to make the memory bottleneck worse.

 

8800 GTS 640MB (500/1200/1600) (10.0/64.0) (PCIe) 320-bit HDCP D-DLDVI

- 96 unified processors – 24 TMUs – 20 ROPs

It’s 35% faster than the X1950 XTX. Good overclocker. Great potential (read the ”Unified Shader Architecture” section).

 

8800 GTS SSC 640MB (500/1200/1600) (10.0/64.0) (PCIe) 320-bit HDCP D-DLDVI

- 112 unified processors – 24 TMUs – 20 ROPs

It’s marginally faster than an 8800 GT.

 

8800 GTX 768MB (575/1350/1800) (13.8/86.4) (PCIe) 384-bit HDCP D-DLDVI

- 128 unified processors – 32 TMUs – 24 ROPs

The performance is about 75% better than an X1950 XTX. Great potential (read the ”Unified Shader Architecture” section).

 

8800 Ultra 768MB (612/1500/2160) (14.7/103.7) (PCIe) 384-bit HDCP D-DLDVI

- 128 unified processors – 32 TMUs – 24 ROPs

It’s about 10% faster than the GTX.

 

G9X

 

9500 GT 512MB (550/1400/1600) (4.4/25.6) (PCIe 2.0) 128-bit DL-HDCP D-DLDVI

- 32 unified processors – 16 TMUs – 8 ROPs

It’s about 5% slower than an 8600 GTS.

 

8800 GS 384MB (550/1650/1600) (9.6/38.4) (PCIe 2.0) 192-bit DL-HDCP D-DLDVI

- 96 unified processors – 48 TMUs – 12 ROPs

Performance is similar to an 8800 GT 256MB, which positions it between the HD 3850 and HD 3870.

 

9600 GSO 384MB (550/1650/1600) (9.6/38.4) (PCIe 2.0) 192-bit DL-HDCP D-DLDVI

- 96 unified processors – 48 TMUs – 12 ROPs

It’s a rebadged 8800 GS.

 

9600 GT 512MB (650/1625/1800) (10.4/57.6) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 64 unified processors – 32 TMUs – 16 ROPs

Performance is similar to an HD 3870.

 

8800 GT 256/512MB (600/1500/1800) (9.6/57.6) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 112 unified processors – 56 TMUs – 16 ROPs

An 8800 GTX is roughly 15% faster, or put another way, the 8800 GT is 15% faster than the 8800 GTS 640MB. The 256MB version (600/1500/1400) is about 10% faster than the HD 3850 256MB, or 10% slower than the HD 3870 512MB.

 

9800 GT 512MB (600/1500/1800) (9.6/57.6) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 112 unified processors – 56 TMUs – 16 ROPs

It’s an 8800 GT featuring a die shrink and HybridPower.

 

8800 GTS 512MB (650/1625/1940) (10.4/62.1) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 128 unified processors – 64 TMUs – 16 ROPs

It’s about 5% slower than an 8800 GTX.

 

9800 GTX 512MB (675/1688/2200) (10.8/70.4) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 128 unified processors – 64 TMUs – 16 ROPs

Performance is equal to an 8800 GTX.

 

9800 GTX+ 512MB (738/1836/2200) (11.8/70.4) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 128 unified processors – 64 TMUs – 16 ROPs

It performs similarly to an HD 4850, i.e. about 5% faster than the 9800 GTX.

 

9800 GX2 512MB (600/1500/2000) (9.6/64.0) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI HDMI

- 128 unified processors – 56 TMUs – 16 ROPs (per GPU)

It’s about 45% faster than an 8800 Ultra.

 

GT200

 

GTX 260 896MB (576/1242/1998) (16.1/111.9) (PCIe 2.0) 448-bit DL-HDCP D-DLDVI

- 192 unified processors – 64 TMUs – 28 ROPs

Performance is about 30% better than a 9800 GTX, while the HD 4870 is 5% faster still.

 

GTX 260 216 896MB (576/1242/1998) (16.1/111.9) (PCIe 2.0) 448-bit DL-HDCP D-DLDVI

- 216 unified processors – 72 TMUs – 28 ROPs

This revision matches HD 4870 512MB’s performance - making it 5% faster than its predecessor.

 

GTX 280 1GB (602/1296/2214) (19.3/141.7) (PCIe 2.0) 512-bit DL-HDCP D-DLDVI

- 240 unified processors – 80 TMUs – 32 ROPs

It’s 5% slower than a 9800 GX2 or, put another way, 50% faster than the 9800 GTX.

 

ATI

 

X800 SE 128/256MB (425/800) (3.4/22.4) (AGP/PCIe) 256-bit

- 8 pixel processors – 8 TMUs – 8 ROPs - 6 vertex processors

This card has the same number of pixel processors as a 6800 LE, but the performance is pretty much equal to a 6600 GT. A 6800 is a better choice.

 

X800 GT 128/256MB (475/980) (3.8/24.96) (AGP/PCIe) 256-bit

- 8 pixel processors – 8 TMUs – 8 ROPs - 6 vertex processors

This card can have cores based upon the R420 (X800 series), R423 (X800 XT PCIe) or R480 (X850 series) which failed to reach high enough frequencies or have defective shader units; ATI sells these as an X800 GT. The card is pretty much equal to a 6600 GT in performance. R480 cores are most often found among the 256MB variants – some of these can be softmodded.

 

X800 256MB (400/700) (4.8/22.4) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 12 ROPs - 6 vertex processors

It has the same number of pixel processors as a 6800 and the performance is also very similar.

 

X800 GTO 128MB/256MB (400/980) (4.8/31.36) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 12 ROPs - 6 vertex processors

The performance is a bit better than cards like the X800 and 6800. It is quite unlikely today that you will be able to softmod the Connect3D version into an X850 XT.

 

X800 Pro 256MB (475/900) (5.7/28.8) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 12 ROPs - 6 vertex processors

You can’t softmod it to 16 pixel processors like the VIVO edition. There is a chance of success if a hardmod is performed. An X800 Pro is much faster than a 6800 LE and a fair bit faster than a 6800. The 6800 GT is a bit faster.

 

X800 Pro VIVO 256MB (475/900) (5.7/28.8) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 12 ROPs - 6 vertex processors

There is a good chance of successfully softmodding it to an XT PE. It then has no problems keeping up with either a 6800 GT or a 6800 Ultra.

NB! The PCIe version is supposedly laser-cut. If you are lucky a hardmod might be successful.

 

X850 Pro 256MB (520/1080) (6.24/34.56) (AGP/PCIe) 256-bit

- 12 pixel processors – 12 TMUs – 12 ROPs - 6 vertex processors

This is an X800 Pro on steroids. The performance is about 5-10% better than an X800 Pro’s but still a bit weak compared to a 6800 GT. An X850 Pro usually overclocks a bit better than an X800 XL, nullifying the latter’s lead. The VIVO version can be softmodded to an X850 XT PE.

 

X800 GTO² 256MB (400/980) (6.4/31.36) (PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

This card has an R480 core which was not found suitable to become an X850 XT/XT PE. The performance is about the same as an X800 XL. It now ships with 16 pixel processors.

 

X800 XL 256/512MB (400/1000) (6.4/32.0) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

It performs a bit better than an X850 Pro. The card has pretty similar performance to a 6800 GT.

 

X800 XT 256MB (500/1000) (8.0/32.0) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

The X800 XT performs similar to a 6800 Ultra.

 

X800 XT PE 256MB (520/1120) (8.32/35.85) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

An XT PE is a bit faster than a 6800 Ultra.

 

X850 XT 256MB (520/1080) (8.32/34.56) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

This is the steroid version of the X800 XT. It uses the R480 core, which is clocked slightly higher. The X850 XT performs about 3% faster than its predecessor and is therefore marginally faster than a 6800 Ultra.

 

X850 XT PE 256MB (560/1180) (8.64/37.76) (AGP/PCIe) 256-bit

- 16 pixel processors – 16 TMUs – 16 ROPs - 6 vertex processors

This is the steroid version of the X800 XT PE. It uses the R480 core, which is clocked slightly higher.

 

R5X0

 

X1600 Pro 128/256MB (500/780) (2.0/12.5) (AGP/PCIe) 128-bit DLDVI

- 12 pixel processors – 4 TMUs – 4 ROPs – 8 z compare units - 5 vertex processors

A 6600 GT is a bit faster.

 

X1300 XT 128-512MB (500/800) (2.0/12.8) (AGP/PCIe) 128-bit DLDVI

- 12 pixel processors – 4 TMUs – 4 ROPs – 8 z compare units - 5 vertex processors

Essentially an X1600 Pro using a new core.

 

X1600 XT 128/256MB (590/1380) (2.36/22.1) (PCIe) 128-bit DLDVI

- 12 pixel processors – 4 TMUs – 4 ROPs – 8 z compare units - 5 vertex processors

The performance is similar to a 6800. It’s most likely crippled by only having 4 TMUs and 4 ROPs. When more pixel-shader-intensive games are released, an increase in relative performance is expected.

 

X1650 Pro 256MB (600/1400) (2.4/22.4) (AGP/PCIe) 128-bit DLDVI

- 12 pixel processors – 4 TMUs – 4 ROPs – 8 z compare units - 5 vertex processors

It’s essentially an X1600 XT using a new core.

 

X1650 XT 256MB (575/1350) (4.6/21.6) (AGP/PCIe) 128-bit DLDVI Native CF

- 24 pixel processors – 8 TMUs – 8 ROPs - 8 vertex processors

The performance is pretty much equal to a 7600 GT.

 

X1800 GTO 256MB (500/1000) (6.0/32.0) (PCIe) 256-bit D-DLDVI

- 12 pixel processors – 12 TMUs – 8 ROPs – 8 vertex processors

The card is a bit faster than a 6800 Ultra and performs similarly to a 7600 GT. Some cards can be softmodded.

 

X1800 XL 256MB (500/1000) (8.0/32.0) (PCIe) 256-bit D-DLDVI

- 16 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

It’s a bit slower than a 7800 GT.

 

X1950 GT 256MB (500/1200) (6.0/38.4) (AGP*/PCIe) 256-bit HDCP D-DLDVI Native CF

- 36 pixel processors – 12 TMUs – 12 ROPs - 8 vertex processors

A bit faster than a 7900 GS, while in newer (2007+) shader-intensive games it usually takes a significant lead.

* Sapphire and Palit

 

X1900 GT 256MB (575/1200) (6.9/38.4) (PCIe) 256-bit D-DLDVI

- 36 pixel processors – 12 TMUs – 12 ROPs - 8 vertex processors

An X1800 XT 256MB is marginally faster and the 7900 GT has about the same speed, while in newer (2007+) shader-intensive games it usually takes a significant lead. Some cards can be softmodded.

 

X1900 GT 256MB (512/1315) (6.1/42.1) (PCIe) 256-bit HDCP D-DLDVI

- 36 pixel processors – 12 TMUs – 12 ROPs - 8 vertex processors

This is the new revision of the X1900 GT, which features a better cooler and HDCP support. The changed frequencies make little difference to its performance.

 

X1950 Pro 256/512MB (580/1400) (6.9/44.2) (AGP/PCIe) 256-bit HDCP* D-DLDVI Native CF

- 36 pixel processors – 12 TMUs – 12 ROPs - 8 vertex processors

The performance is marginally better than an X1900 GT’s, which puts it side by side with an X1800 XT 256MB, while in newer (2007+) shader-intensive games it usually takes a significant lead.

* So far every vendor includes it, despite the fact that it isn’t obligatory.

 

X1800 XT 256/512MB (625/1500) (10.0/48.0) (PCIe) 256-bit D-DLDVI

- 16 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

In general it is a bit faster than a 7800 GTX and marginally faster than a 7900 GT. When equipped with 512MB of VRAM it often pulls further ahead, especially at high resolutions.

 

X1900 XT 256MB (625/1450) (10.0/46.4) (PCIe) 256-bit D-DLDVI

- 48 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

The only thing separating it from a regular X1900 XT is the smaller amount of memory. The performance is usually affected – revealing that 256MB is often a bottleneck for such a fast card. The severity will of course vary with the game and settings used, but a ~10% hit isn’t unusual in newer games. To summarize the performance: about 5% faster than a 7950 GT, while in newer (2007+) shader-intensive games it usually takes a significant lead.

 

X1950 XT 256/512MB (625/1800) (10.0/57.6) (AGP*/PCIe) 256-bit D-DLDVI

- 48 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

Speed-wise it’s pretty much equal to an X1900 XT 512MB, which again shows the bottleneck caused by a low amount of memory (read also the X1900 XT 256MB entry). It’s the fastest card using the AGP interface.

* GeCube

 

X1900 XT 512MB (625/1450) (10.0/46.4) (PCIe) 256-bit D-DLDVI

- 48 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

The card is substantially (~25%) faster than an X1800 XT 512MB, although it can’t entirely match a 7900 GTX. The performance is a bit better than a 7800 GTX 512MB’s, while in newer (2007+) shader-intensive games it usually takes a significant lead.

 

X1900 XTX 512MB (650/1550) (10.4/49.6) (PCIe) 256-bit D-DLDVI

- 48 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

It’s a bit faster than an X1900 XT and marginally faster than a 7900 GTX, while in newer (2007+) shader-intensive games it usually takes a significant lead.

 

X1950 XTX 512MB (650/2000) (10.4/64.0) (PCIe) 256-bit HDCP D-DLDVI

- 48 pixel processors – 16 TMUs – 16 ROPs - 8 vertex processors

Even though the GDDR4 memory offers unparalleled memory bandwidth for this generation, it still isn’t all that much faster than an X1900 XTX – only about 5% on average.

 

R6X0

 

HD 2400 XT 256MB (700/1400) (2.8/11.2) (PCIe) 64-bit DL-HDCP DLDVI

- 40 unified processors – 4 TMUs – 4 ROPs

8500 GT is marginally faster.

 

HD 2600 Pro 256/512MB (600/1000) (2.4/16.0) (PCIe) 128-bit DL-HDCP D-DLDVI

- 120 unified processors – 8 TMUs – 4 ROPs

7600 GT is about 5% faster.

 

HD 3650 256/512MB (725/1600) (2.9/25.6) (PCIe 2.0) 128-bit DL-HDCP D-DLDVI

- 120 unified processors – 8 TMUs – 4 ROPs

It’s marginally slower than a 2600 XT GDDR3.

 

HD 2600 XT 256/512MB GDDR3 (800/1600) (3.2/25.6) (PCIe) 128-bit DL-HDCP D-DLDVI

- 120 unified processors – 8 TMUs – 4 ROPs

8600 GT is roughly 5-10% faster.

 

HD 2600 XT 256/512MB GDDR4 (800/2200) (3.2/35.2) (PCIe) 128-bit DL-HDCP D-DLDVI

- 120 unified processors – 8 TMUs – 4 ROPs

8600 GT is about 5% faster.

 

HD 2900 GT 256MB (600/1600) (9.6/51.2) (PCIe) 256-bit DL-HDCP D-DLDVI

- 240 unified processors – 16 TMUs – 16 ROPs

It’s about 10% faster than an X1950 Pro.

 

HD 3850 256/512MB (670/1660) (10.7/53.1) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 320 unified processors – 16 TMUs – 16 ROPs

It’s about 20% faster than an X1950 XTX. Memory is often a bottleneck over 1280x1024 for the 256MB version. Supports DX10.1.

 

HD 2900 Pro 512/1024MB (600/1600) (9.6/102.4) (PCIe) 512-bit DL-HDCP D-DLDVI

- 320 unified processors – 16 TMUs – 16 ROPs

8800 GTS 320MB is about 10% faster.

 

HD 2900 XT 512MB (742/1650) (11.9/105.6) (PCIe) 512-bit DL-HDCP D-DLDVI

- 320 unified processors – 16 TMUs – 16 ROPs

8800 GTS 640MB has about the same performance.

 

HD 3870 512MB (775/2250) (12.4/72.0) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 320 unified processors – 16 TMUs – 16 ROPs

It’s marginally faster than an HD 2900 XT 512MB. Supports DX10.1.

 

HD 2900 XT 1GB (745/2000) (11.9/128.0) (PCIe) 512-bit DL-HDCP D-DLDVI

- 320 unified processors – 16 TMUs – 16 ROPs

Even though the GDDR4 memory offers unparalleled memory bandwidth and the card is equipped with an additional 512MB, it’s still only marginally faster.

 

HD 3870 X2 512MB (825/1800) (13.2/57.6) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 320 unified processors – 16 TMUs – 16 ROPs (per GPU)

Performance is about 5% faster than an 8800 Ultra.

 

R7X0

 

HD 4550 256/512MB (600/1600) (2.4/12.8) (PCIe 2.0) 64-bit DL-HDCP DLDVI

- 80 unified processors – 8 TMUs – 4 ROPs

It matches the HD 3650 in performance or, put another way, it’s roughly 5-10% slower than the 8600 GT.

 

HD 4670 512MB (750/2000) (6.0/32.0) (PCIe 2.0) 128-bit DL-HDCP D-DLDVI

- 320 unified processors – 32 TMUs – 8 ROPs

It’s about 5% slower than a 9600 GSO while marginally faster than the HD 3850.

 

HD 4830 512MB (575/1800) (9.2/57.6) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 640 unified processors – 32 TMUs – 16 ROPs

The performance is about 5% faster than the 9800 GT 512MB or 15% slower than the HD 4850.

 

HD 4850 512MB (625/1986) (10.0/63.6) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 800 unified processors – 40 TMUs – 16 ROPs

It’s about 5% faster than a 9800 GTX.

 

HD 4870 512MB (750/3600) (12.0/115.2) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 800 unified processors – 40 TMUs – 16 ROPs

Its performance exceeds the GTX 260 by 5%.

 

HD 4870 1GB (750/3600) (12.0/115.2) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 800 unified processors – 40 TMUs – 16 ROPs

The performance is generally 5% higher than the 512MB version’s – making it also 5% faster than the GTX 260 216.

 

HD 4850 X2 1GB (625/1986) (10.0/63.5) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 800 unified processors – 40 TMUs – 16 ROPs (per GPU)

It’s about equal to a GTX 280 in performance.

 

HD 4870 X2 1GB (750/3600) (12.0/115.2) (PCIe 2.0) 256-bit DL-HDCP D-DLDVI

- 800 unified processors – 40 TMUs – 16 ROPs (per GPU)

It’s about 30% faster than a GTX 280.

 

__________________________________________________

 

A bit of advice

It might be wise to include technology aspects in the decision process.

__________________________________________________

 

 

Technologies

 

nVidia

 

SM3.0 (PS3.0 and VS3.0)

There is currently a significant number of released games which support SM3.0. The performance increases over SM2.0 are often not all that great. R4X0-based cards don’t feature SM3.0 (their maximum is SM2.0b (PS2.0b/VS2.0): X700-X850), which often performs similarly to SM3.0 but has marginal support from game developers. SM3.0 will become more common in the future, and the result is mostly a performance increase. It’s easier for game developers to support SM3.0 and SM2.0 and therefore disregard SM2.0b. SM3.0 was expected to get its breakthrough in 2006, and as of late 2007 there are several games which require SM3.0 (e.g. BioShock, Rainbow Six Vegas, SC Double Agent, DiRT, Stranglehold, Medal of Honor Airborne, etc.).

 

HDR (High Dynamic Range Rendering)

With the release of NV40 we saw a new type of HDR which uses FP blending. This type of HDR is used in games like Far Cry and Splinter Cell: Chaos Theory. It’s important to emphasize that HDR with FP blending has nothing to do with SM3.0; SM3.0 and HDR with FP blending simply arrived at the same time. There exists a different kind of HDR which utilizes INT10 or better. This method is supported by all ATI cards from the release of R300 (9700). Dark Messiah and HL2 EP1 are current examples of this approach to HDR. HDR with FP blending will most likely be the most supported approach and will also provide the best precision.

[Image comparison: no HDR vs. HDR with FP16 blending]

 

8xS Anti-aliasing

It provides a very good result because it uses both supersampling (2x) and multisampling (4x). The performance hit is large, which mostly restricts its usage to older games.

 

Faster in OpenGL

nVidia has long dominated the OpenGL API. In games like Doom 3, nVidia will often perform 10% better than a comparable card from ATI. R5X0/R6X0 have good OpenGL performance and will in many cases perform similarly to their competitors, although their overall OpenGL performance is still somewhat lacking.

 

Better Linux drivers

Good support for AMD64/IA64 and FreeBSD. ATI might release somewhat decent drivers in the future; their AIGLX performance is in general not all that great, but on the upside their OpenGL performance is great. Linux users are still advised to choose a card from nVidia. More information (13 November 2007)

 

Digital Vibrance Control (DVC)

It makes colours more vibrant. This is especially an advantage for those who have a cheap monitor with bland colours. This is mainly a love/hate feature.


 

SLI

If you buy the PCIe version of one of these cards you can connect two of them together. SLI can achieve up to 90% extra performance compared to a single card. Games which are limited by the CPU will not show a large performance increase. You need an SLI-capable motherboard to utilize SLI. ATI’s response to SLI is CrossFire.


 

Positive things about SLI

* It can give a large boost in performance, especially if you use high resolutions along with AA and AF.

* You can add another card later

* Up to 16xAA (8xS + 8xS -> 4xMSAA and 4xSSAA) is supported from 77.76

* Mixed vendors are supported from 8x.xx

* Supports all games from 75.xx (global settings: AFR, AFR2 or SFR)

* You can have one dedicated card for PhysX acceleration from the 180 drivers

 

Negative things about SLI

* A substantial increase in power consumption and heat output.

* It most often costs twice as much, and a large performance increase is not guaranteed even if an SLI profile exists

* It can reduce the number of free PCI and PCIe slots for later hardware purchases, especially if the cards are fitted with large coolers

* Games can depend on profiles for optimal performance. This is why profiles are still supported despite all games being supported from 75.xx

* Memory is mirrored, not pooled: 128MB + 128MB = 128MB. This can result in a bottleneck despite plenty of raw processing power. A good example is 6600 GT SLI at higher resolutions with AA and AF enabled

* No multi-monitor support while in SLI mode with pre-180 drivers

* It’s more prone to tearing (often worse) than a single card

* 1080i doesn’t work

* Micro stuttering

 

If you would like a much more in-depth overview of SLI, please visit this page.

 

 

G7X

 

Transparency AA

This is a technology which smooths out the edges of AA-prone objects like partially transparent 2D textures which don’t use normal geometry. These objects can be leaves, vegetation or chain-link fences. The method provides a very large increase in IQ on the affected objects because MSAA doesn’t work on them. You have two choices: multisampling (light on resources but most often only improves IQ marginally) and supersampling (more demanding but provides excellent IQ). AFAIK SS TAA provides slightly better IQ than ATI’s Quality Adaptive AA. NV4X is supported from 91.45.

[Image comparison: 4xMSAA vs. MS TAA vs. SS TAA, plus a performance graph at 1600x1200]

The pictures are taken from Beyond3D

 

Gamma corrected AA

nVidia has finally introduced gamma-corrected MSAA (it must be turned on manually but is very light on resources), which ATI introduced with the R300. It’s a good decision by nVidia to make the feature optional, since the desired effect may vary from display to display.

 

 

G8X

 

DX10 (SM4.0)

Microsoft has written this version from scratch; it’s not compatible with OSs older than Vista and has no support for older versions of DX – which is the reason for the Vista-only DX9.0Ex. Some key features are: less overhead (time when nothing is done), more registers, geometry shading, obligatory feature support for graphics vendors and a larger texture limit – you can add more ”stuff” without a performance penalty. More about DX10


 

Unified Shader Architecture

It’s important to emphasize that this isn’t a DX10 requirement. A shader unit can now process vertex, pixel or geometry work – great potential for dynamic allocation of shader processing power.

 

HDR FP blending + MSAA

The first generation from nVidia which supports it.

 

”Optimal” AF

Currently the best there is. Let the pictures speak for themselves:

[Image: HQ AF quality comparison]

The pictures are taken from ”The Tech Report” (http://techreport.com/reviews/2007q2/radeo...xt/index.x?pg=5)

 

CSAA

This is a new method which reduces the number of stored colour/Z samples compared to regular MSAA. The result is better performance than e.g. 16xMSAA while providing similar IQ on polygon edges; the available options are 8xCSAA and 16xCSAA (based on 4xMSAA) and 16xQCSAA (based on 8xMSAA). Stencil shadows and Transparency AA will only use an AA level equal to the Z samples (e.g. 8xCSAA gives 4xAA on stencil shadow edges). What is CSAA?


 

 

GT200

 

CUDA

A general-purpose GPU computing platform for G8X and later generations, used to hardware-accelerate various tasks, e.g. physics (PhysX), video transcoding, etc.
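
To give a feel for what CUDA code looks like, here is a minimal sketch of a data-parallel kernel (the names are illustrative, not from any real application). Each GPU thread handles one array element – the same basic pattern used for physics or transcoding workloads.

```cuda
#include <cstdio>

// Each GPU thread scales one array element - the basic data-parallel
// pattern CUDA uses to spread work over the GPU's unified processors.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                    // one million elements
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));   // allocate on the GPU
    // (in a real program: cudaMemcpy input data into d_data here)
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);  // 256 threads per block
    cudaDeviceSynchronize();                  // wait for the kernel to finish
    cudaFree(d_data);
    return 0;
}
```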

 

HybridPower/SLI

It’s a standard feature for this generation and allows a discrete GPU and an IGP to cooperate. In practice the potential benefits are:

- Only the IGP is used for desktop rendering which means the GPU is shut off

- Performance increase since both the GPU (low-end) and IGP are working on game rendering

- Extra connectivity for displays

 

 

 

ATI

 

Temporal FSAA (TAA)

This is a very useful AA method. When used it doubles the effective AA by alternating the sample pattern between frames, making 4xAA out of 2xAA. V-sync must be turned on when TAA is in use. TAA is only enabled when the fps is over 60, regardless of the current refresh rate; without such a limit you could experience flickering along edges when the fps is too low. You can change both the fps limit and the multiplier with ATI Tool and ATI Tray Tools.


 

3Dc Normal Map Compression

3Dc can be useful if games begin to use the technology. The purpose is to compress normal maps, allowing game developers to use four times the detail (nVidia’s DXTC5 support is limited to 2:1) without sacrificing more VRAM. When used it can increase performance compared to not utilizing 3Dc. Today only a few games support this technology: Far Cry 1.3, Pirates! and Tribes Vengeance. Lack of support has made it a failure.


 

Faster in D3D based games

When the D3D API is used ATI normally performs a bit better than nVidia.

 

Better performance scaling when using AA and/or very high resolutions

ATI is normally less penalized in performance when AA and/or very high resolutions are used. This can turn a tie into an overall victory. G8X has better performance scaling under such conditions than R6X0.

 

A bit faster in most games

This is the natural outcome when the great majority of games use the D3D API.

 

Crossfire

This is ATI’s answer to nVidia’s SLI. CF is supported from R4X0. It can achieve up to 90% extra performance compared to a single card. Games which are limited by the CPU will not show a large performance increase. You need a motherboard which supports CrossFire.


 

Positive things about Crossfire

* It can give a large boost in performance, especially if you use high resolutions along with AA and AF

* You can add another card later

* SuperAA supports up to 14xAA

* Mixed vendors and cards (within the same series, e.g. an X800 XL with an X800 CE) are supported

* Supports Intel’s i975X and i955X chipsets from Catalyst 6.5

* Supports all games. If the game title has no profile in ATI’s driver database, AFR rendering will be enabled as the default mode for OpenGL-based games and SuperTiling for Direct3D-based games

* Software CrossFire (with a ~10% performance hit) for X1900 and X1950 cards from Catalyst 6.11 (http://www.computerbase.de/news/hardware/g...fire_x1950_xtx/). You don’t need a master card, and X1900 and X1950 cards can be mixed together.

* X1300, X1600, X1800 GTO and X1800 XL don’t require a master card or an external connection cable from Catalyst 6.5.

* R6X0, X1950 Pro, X1950 GT and X1650 XT have a CrossFire solution similar to SLI!

* CrossFire X enables three or four HD 3800 cards to work together (from Catalyst 8.3)

* Hybrid CrossFire enables low idle power consumption by using only an RS780-based IGP, and increases performance by combining the IGP and an HD 2400 series card (from Catalyst 8.3)

 

Negative things about Crossfire

* Normally requires a master card and an external connection cable

* CF pre-R5X0 is limited to a maximum of 60 Hz at 1600x1200 (a limitation of the compositing engine and single-link DVI). R5X0 cards can display a maximum of 60 Hz at 2560x1600 thanks to a better compositing engine and dual-link DVI

* Games can depend on profiles for optimal performance. This is why profiles are still supported despite all games being supported.

* Severely limited control over the game profiles. There is still no native control over the rendering modes (AFR, SFR or SuperTiling) besides setting Catalyst AI to off to force AFR. You can rename the exe file to that of a game which uses the rendering mode of your choice (e.g. sam2.exe for AFR).

* A substantial increase in power consumption and heat output

* It most often costs twice as much and a large performance increase is not guaranteed

* It can reduce the number of free PCI and PCIe slots for later hardware purchases, especially if the cards are fitted with large coolers

* Memory is mirrored, not pooled (i.e. 128MB + 128MB = 128MB). This can result in a bottleneck despite plenty of raw processing power

* No multi-monitor support while in CrossFire mode for pre-R6X0 cards (R6X0 supports it from Catalyst 8.1).

* Micro stuttering

 

If you would like a much more in-depth overview of CrossFire, please visit this page.

 

 

R5X0

 

Adaptive AA

ATI’s equivalent to nVidia’s Transparency AA: ”This is a technology which smooths out the edges of AA-prone objects like partially transparent 2D textures which don’t use normal geometry. These objects can be leaves, vegetation or chain-link fences. The method provides a very large increase in IQ on the affected objects because MSAA doesn’t work on them.” You can choose between Performance (much better IQ than nVidia’s MS TAA but with a greater performance hit) and Quality. AFAIK SS TAA provides slightly better IQ than ATI’s Quality Adaptive AA. The Performance setting halves the effective AAA when using 4x or more, while 2x doesn’t work (i.e. 2xMSAA with 0xAAA). It’s important to emphasize that this technology has been unofficially supported by ATI since the release of R300 (9700). R5X0 hardware, on the other hand, takes a much smaller performance hit than pre-R5X0 cards.


 

Better Anisotropic Filtering (AF)

R5X0 can use angle-independent AF, which results in improved IQ compared to its competitor (G7X). See the G8X section for more ”up to date” information.


 

Shader heavy games

Cards with a high ALU:TEX ratio, like the X1900 XT, normally show a performance gain in these games compared to their nVidia counterparts.

 

HDR FP blending + MSAA

Only R5X0 and G8X can use MSAA simultaneously with HDR FP blending. The ”Chuck” hotfix (Oblivion) from ATI has shown us that HDR FP blending + MSAA isn’t solely dependent on support from the game developers.

 

Colour saturation

The feature is very similar to nVidia’s Digital Vibrance Control: “It makes colours more vibrant. This is especially an advantage for those who have a cheap monitor with bland colours. This is mainly a love/hate feature.” ATI’s approach is hardware based while nVidia does it via software.


 

 

R6X0

 

CFAA

This is ATI’s answer to nVidia’s CSAA. The main downside is texture blur – the image becomes less sharp, similar to nVidia’s Quincunx. Normal AA (inc. CSAA) will only take its samples from within the pixel, while CFAA will in addition take one (narrow tent) or two (wide tent) samples from neighbouring pixels. CFAA is better at reducing aliasing than CSAA, although the edges are blurrier.


 

5.1 sound

Even though it’s just a pass-through implementation – not a dedicated decoder, since the CPU does all the work:

The HDMI solution is a novel one, the board working with an active DVI-to-HDMI adapter to transmit audio out over the spare pins in a dual-link DVI port (the current revision of HDMI that R600 supports -- 1.2 -- is analogous to single-link DVI), audio provided by an on-chip HD Audio controller. Yes, the new Radeon's are also rudimentary HD Audio controllers, too, and all audio processing for that controller is currently done on the CPU. That makes it no less a solution to provide protected path audio out via encrypted HDMI, though, using one of the DVI ports on the board.

It supports up to 16-bit / 48 kHz LPCM (2.0) and regular Dolby Digital and DTS (5.1).

 

Poor AF performance

R6X0 probably has too little texture fill rate – this is especially true for Z fill rate – which can represent a severe bottleneck for AF performance. It’s worth noting that RV7X0 has the same 4:1 ALU:TEX ratio as R6X0 but a more efficient architecture (e.g. memory bandwidth, cache coherency, directly linked MC/ROP, etc.), so it’s more about RV7X0 better meeting the general texture fill-rate requirement in games’ increasingly ALU-limited situations. R6X0’s AA performance is on the other hand pretty good.

 

DX10.1

This new revision adds several performance and IQ enhancements: indexable cube map arrays (for real-time Global Illumination), improved AA control (e.g. custom anti-aliasing filters), mandatory FP32 filtering, mandatory 4xMSAA support, render target output parallelisation (to enable AA in games using deferred rendering), new texture formats, etc.

More information (http://www.bit-tech.net/hardware/2007/11/3...adeon_hd_3870/8)

 

 

R7X0

 

7.1 LPCM

This is a major upgrade from R6X0’s 2.0 LPCM, although it should be noted that this time, too, neither DTS-HD MA nor Dolby TrueHD is supported.

 

Excellent AA scaling

R7X0 has significantly better AA performance scaling than nVidia’s GT200; this is especially apparent when going from 4xMSAA to 8xMSAA.

 

_________________________________________________

 

 

Media

 

Video Decoding

The cards listed all have video acceleration in one form or another. The purpose is to ensure fluid playback (by reducing the CPU workload), which is especially important when playing HD content.

 

- R4X0

* Supports decoding of major formats like: MPEG1/2/4, Real Media, DivX and WMV9. It’s very important to note that only VIVO cards can encode content from external devices (video camera, etc).

* Supports techniques which improve IQ through the use of deinterlacing and shaders.

 

- NV4X

* Supports decoding of major formats like: MPEG1/2/4 and WMV9 (PCIe; not 6800GT/Ultra).

* Supports techniques similar to ATI’s to improve IQ.

* The complete list of supported formats

 

- R5X0

* Excellent decoding, while decent in the new formats: VC-1 and H.264.

* Excellent Standard Definition IQ.

* Appalling HD IQ

* Avivo Video Converter looks very promising.

 

- G7X

* Very good SD decoding performance, also pretty good in the new formats: VC-1 and H.264

* Excellent Standard Definition IQ.

* Appalling HD IQ

* PureVideo costs $20-50; video acceleration and IQ enhancements are now included in popular playback software (PowerDVD, etc.).

 

- R6X0

* Full VC-1/H.264 decode acceleration for RV610/630/670!

* Very good HD IQ, although the 2400 series is only mediocre.

* The HD 2900 XT does not have the new UVD engine. It has an ALU-based video processor and should perform similarly to an R5X0.

 

- G8X

* Full H.264 decode acceleration for G84/G86/G92!

* Good HD performance for G80

* Excellent Standard Definition IQ.

* Excellent HD IQ, although 8500 and below are only mediocre.

* The complete list of supported formats

 

- GT200

* VP2: Full H.264 decode acceleration like G84/G86/G92!

 

Dual-Link DVI (DLDVI)

This is an essential feature for those wanting to run monitors at 2560x1600@60Hz (2560 x 1600 x 60 ≈ 246 MHz raw pixel rate, well above the 165 MHz single-link DVI limit). Single-link DVI only has enough bandwidth to run up to 1920x1200@60Hz.
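
A quick sanity check of those numbers, as a minimal sketch (plain host C++; note that real DVI timings add blanking overhead on top of the raw pixel rate, so the actual required pixel clock is somewhat higher):

```cuda
#include <cstdio>

int main()
{
    // Raw pixel rate for 2560x1600 at 60 Hz, ignoring blanking overhead.
    const double needed_mhz = 2560.0 * 1600.0 * 60.0 / 1e6;  // ~245.8 MHz
    const double single_link_mhz = 165.0;  // single-link DVI TMDS limit
    printf("need ~%.0f MHz, single-link DVI tops out at %.0f MHz\n",
           needed_mhz, single_link_mhz);
    return 0;
}
```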

 

HDCP (High-Bandwidth Digital Content Protection)

This is a DRM (Digital Rights Management) technology which is currently used by HD DVD and Blu-ray movies. The movie industry thinks this will reduce piracy, but it makes it a lot harder for people to back up their originals.

 

You need a video card and monitor/HDTV which are both HDCP ready if you want to use digital connections (HDMI, DVI).

 

Analogue connections currently have no such limitations since the ICT (Image Constraint Token) is not yet implemented – when it is (2009, or even as late as 2012), the resolution will be reduced to 540p or output will not be allowed at all. It should be noted that AACS (Advanced Access Content System) doesn’t allow component to carry more than a maximum of 1080i – VGA on the other hand has no such restrictions.


 

HDTV Output (YPrPb component)

All cards listed support it.

 

 

Misc

 

Power

 

The video cards by themselves

 

This is a decent Internet based PSU calculator

 

PCIe cards use a six-pin power connector, while the great majority of AGP cards use a four-pin one. Adapters from four-pin to six-pin connectors exist.

__________________________________________________

 

Final words

I hope this thread can ease the decision making and result in the right card for you. :)

 

Remember that softmodding is done at your own risk and can void the warranty if discovered. If this makes you uncomfortable, please disregard softmodding in your decision making.

__________________________________________________

 

18 February 07: Added: 8800 GTS 320MB, X1950 GT and provided more information about G80 performance. Update: video decoding (e.g. added G8X).

19 April 07: Added: 8500 GT, 8600 GT, 8600 GTS, 7950 GT AGP and X1950 XT AGP. Corrected: small performance adjustments for 8800 GTS and GTX.

4 May 07: Added: 8800 Ultra, 8500 GT performance, X1950 GT AGP and G84/G86 in the Media section. Updated: bang for buck cards

14 May 07: Added: HD 2900 XT + technology and media.

10 July 07: Added: 2400 XT, 2600 Pro, 2600 XT GDDR3/GDDR4 and 2900 XT 1GB. Improvements: Linux and HDCP sections.

24 July 07: Updated: 8800 GTS 320MB performance + decoding section.

15 September 07: More correctly specified performance of 8800 GTS 320MB and R5X0s which are ALU heavy (e.g. X1900 XT). Added: “Shader heavy games” section under R5X0 and some games which require SM3.0 under NV4X’s SM3.0 section. Updated: “Bang for Buck” cards.

29 October 07: Added: X2900 Pro and 8800 GT. Updated: “Bang for Buck” cards.

10 November 07: Added: HD 2900 GT.

03 December 07: Added: HD 3850, HD 3870, 8800 GTS SSC and 8800 GTS 512MB. Updated: performance of 8800 GTS and 8800 GTX to better reflect the influence of newer games when compared to previous generation and adjusted a bit upwards for HD 2900 GT. Updated: “Bang for Buck” cards. Corrected: 8800 GT supports DL-HDCP.

08 December 07: Added: 8800 GT 256MB and to whom the "Bang for Buck" cards might be suitable for.

29 January 08: Added: HD 3870 X2. Updated: “Bang for Buck”.

21 February 08: Added: 8800 GS, 9600 GT and HD 3650. Corrected: HD 3870 X2 performance. Updated: “Bang for Buck”.

23 March 08: Added: 9800 GX2, CrossFire X and Hybrid CrossFire. Updated: “Bang for Buck”.

1 April 08: Added: 9800 GTX.

22 June 08: Added: 9800 GTX+, GTX 260, GTX 280 and HD 4850. Updated: “Bang for Buck”. New: 7.1 LPCM and CUDA sections.

25 June 08: Added: HD 4870. Updated: “Bang for Buck”.

17 August 08: Added: 9500 GT, 9800 GT, 9600 GSO, HD 4870 X2 and HybridPower. Updated: “Bang for Buck”.

13 September 08 Added: HD 4670. Updated: “Bang for Buck”.

23 September 08 Added: GTX 260 216. Updated: “Bang for Buck”.

24 November: Added: HD 4550, HD 4830, HD 4870 1GB and HD 4850 X2. Added: AF performance and DX10.1 sections under R6X0. PhysX and multi-monitor support for SLI. Updated: “Bang for Buck”.

Edited by guezz
I’m going to buy a new motherboard and graphics card. The choice is between a GeForce 6800 GT or 2x 6600 256MB (at 1600 kr each) for an SLI motherboard. Which of these two alternatives gives the best performance?  :hmm:

That answers itself.. 24x3.2 or 16x1.6 pipes? :p

or clocks... 600/unknown or 350/1000 MHz ;)

Edited by Vizla
Do SLI motherboards have 2x PCIe x16 ports?

yep

No, that’s not quite right. They have 1x PCIe 16X and 1x PCIe 8X if I’m not mistaken. And it doesn’t make much difference either way.

EDIT: I was a bit late, but that’s what happens when you reply while doing many things at once. :) And it looks like I was wrong, so that’s fine. 1x PCIe 16X becomes 2x PCIe 8X

Edited by bOMS
I’m going to buy a new motherboard and graphics card. The choice is between a GeForce 6800 GT or 2x 6600 256MB (at 1600 kr each) for an SLI motherboard. Which of these two alternatives gives the best performance?  :hmm:

Please read the test under the SLI section. To answer your question: a 6800 GT is better than 6600 GT in SLI.

Edit:

Besides, you can later buy another 6800 GT card for a bit of extra performance....

Edited by guezz
