
State of 3D: Q4 2007

 

8800 GT, 55nm, and Other Surprises

 

By Josh Walrath

 

            NVIDIA had previously been very aggressive in adopting advanced process nodes.  The TNT2 was one of the first chips to utilize 250 nm, and the original GeForce 256 used the brand new 220 nm node.  Things worked out very well for NVIDIA through the GeForce 2 (180 nm) and GeForce 3 (150 nm) series.  The jump to 130 nm was not so good for NVIDIA, though.  Between design issues with the NV30 chip and the troubles TSMC experienced on its 130 nm process (like the famous migrating-void issue), the GeForce FX 5800 was essentially a disaster for NVIDIA.  ATI, on the other hand, took a more conservative approach by using the 150 nm process, and though the chip was large compared to the competition, the well-known process allowed predictable yields and speed bins.

            The overall design of the R300 architecture was also a win for ATI, as it handled DX8 games with fantastic speed and new DX9 content with good speed.  NVIDIA could not claim the same with its NV3x parts.  ATI also did not overdesign its first parts by going beyond the initial SM 2.0 specification.

            We then saw the two companies’ design philosophies swap with the next round.  The NV40 was released on a basic 130 nm process, while ATI took the more aggressive approach and went with the new 130 nm Low-K process.  Unfortunately, ATI was not able to get enough wafers delivered from TSMC on this process to meet demand, and the X800 XT PE became widely known as the “Phantom Edition” because of the lack of parts on the market.  NVIDIA, while slower overall with its 6800 Ultra, was able to suck up a lot of market share with the 6800 GT and Ultra products, mainly because they were available and because both cards were much faster than the competing X800 Pro, which actually was in good supply.

            ATI was again bitten by being more aggressive than NVIDIA on process advancement with the introduction of the R520.  The Radeon X1800 was supposed to be one of the first 90 nm parts on the market, but due to immature design software and the newness of the 90 nm node, it was delayed by about six months.  The card that was supposed to meet and beat NVIDIA’s 110 nm G70 missed its window of opportunity.  By the time ATI released the X1900 (R580), NVIDIA was able to answer with the 90 nm 7900 GTX.

            The R600 on TSMC’s 80 nm HS line was also disappointing, mainly due to overall yields and speed bins (and the lack of polish on the design), all while NVIDIA stuck with 90 nm HS for the G80 and the stock 80 nm Low-K process for the G84/G86.  NVIDIA has been very conservative about relying on advanced process nodes for its upcoming products, but that conservatism could lead to problems, since it seems that ATI’s choice of TSMC’s 55 nm HS line could be a home run.

The Rise of the RV670

            As I mentioned earlier, the R600 architecture is not a bad one; it is just unfortunate that the first few iterations of it were underwhelming.  The HD 2900 XT suffered from the AA performance bug, and it did not feature the high-quality filtering that the G8x series did.  The HD 2600 and 2400 parts had good shader power, but their fillrates and lack of AA performance made them less popular than the 8600/8500/8400 series from NVIDIA.  The superscalar architecture also required a lot of driver-level work to make it efficient.

            It appears as though AMD has hit the problem head-on with the RV670 chip.  People both inside and outside of AMD are very excited about this product, and it should fix most of the issues that users have been complaining about.  It will also add a few new wrinkles that could rain on NVIDIA’s parade.

            The big announcement with the RV670 is that it supports DirectX 10.1; NVIDIA’s G92 does not.  AMD spent a lot of time optimizing the architecture, as well as porting it over to the new 55 nm HS line at TSMC.  All indications point to a faster and more flexible solution, one which will not show the previous performance hit when AA is enabled.  It can also do more with AA thanks to the expanded programmability that DX 10.1 requires of the AA unit.
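
            To give a sense of what DX 10.1 support means on the software side, here is a minimal C++ sketch (my own illustration, not something from AMD or NVIDIA) of how an application might probe for it: try to create a Direct3D 10.1 device at feature level 10.1 and fall back to 10.0 if the GPU or driver does not expose it.  It assumes a Windows build with the DirectX SDK headers; the program itself is purely hypothetical.

// Probe for Direct3D 10.1 support, falling back to plain Direct3D 10.0.
// Build on Windows with the DirectX SDK; link against d3d10_1.lib.
#include <d3d10_1.h>
#include <cstdio>

int main()
{
    // Feature levels to try, from most to least capable.
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,   // DX 10.1-class hardware (e.g., RV670) exposes this
        D3D10_FEATURE_LEVEL_10_0    // DX 10-only hardware falls back to this
    };

    for (int i = 0; i < int(sizeof(levels) / sizeof(levels[0])); ++i)
    {
        ID3D10Device1* device = NULL;
        HRESULT hr = D3D10CreateDevice1(
            NULL,                         // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,   // ask for a hardware device
            NULL,                         // no software rasterizer module
            0,                            // no creation flags
            levels[i],
            D3D10_1_SDK_VERSION,
            &device);
        if (SUCCEEDED(hr))
        {
            std::printf("Created a device at feature level 0x%x\n", unsigned(levels[i]));
            device->Release();
            return 0;
        }
    }

    std::printf("No Direct3D 10.x hardware device could be created.\n");
    return 1;
}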

            Being the first with a DX 10.1 part will be a fine feather in AMD’s cap, and the situation is very reminiscent of the Radeon X800 vs. GeForce 6800 introduction.  At that time ATI called NVIDIA’s pursuit of SM 3.0 and HDR in its GeForce 6 series a “waste of transistors”.  Considering how SM 3.0 and HDR have been picked up over the years, NVIDIA’s decision was a good one in terms of performance and marketing value.  Now we are seeing the exact opposite situation, with AMD/ATI pursuing full DX 10.1 functionality while NVIDIA, at this point, is not.  As an aside, we have yet to hear NVIDIA comment on AMD’s inclusion of DX 10.1 functionality.  I am guessing NVIDIA has learned from ATI’s mistake of downplaying such functionality, which should save it from looking the fool.  The company line will likely be, “DX 10.1 makes good sense, and we will be delivering a DX 10.1 product when there are applications about to be released utilizing those features.”

            The Universal Video Decoder will also be added to the chip, and one can argue that it is a better overall unit than the competing PV2 from NVIDIA.  The output quality and noise reduction issues we see with the NVIDIA part are not a problem for AMD’s UVD.  This will be a big selling point for the AMD card, as its price and single-slot design make it worth a second look for home theater setups.  Add in full HDMI support without the use of internal bridge cables, and it could be one of the best HTPC cards available throughout the holiday season.

            It does not look as though AMD has pursued the texture filtering quality that NVIDIA currently implements; rather, it is hoping the new AA schemes will further help texture filtering.  In titles that utilize these DX 10.1 features, image quality between NVIDIA and AMD should be comparable.  Unfortunately for AMD, DX 10.1 games are still well out into the future, so when playing World of Warcraft, as well as many current and older games, the NVIDIA cards will have better overall image quality and performance.

 

Next:  More RV670 and "The Game"

 

