Once again 128 stronger California shooters, but with cut-down spears (512 MB and 256-bit)

Part 1: Theory and architecture

In the previous material, dedicated to the release of Nvidia's new mid-range solution, the GeForce 8800 GT based on the G92 chip, we mentioned that that card uses a chip in which not all ALU and TMU execution units are unlocked; some of them were waiting in the wings to be enabled in a video card at a different price level. That moment has now come: Nvidia has announced an updated version of the GeForce 8800 GTS, which keeps the name of the previous solution based on the G80. The easiest way to tell it apart is by the amount of installed video memory: 512 megabytes, in contrast to the previous 320 MB and 640 MB options. Accordingly, the model was named GeForce 8800 GTS 512MB.

The new version of the GeForce 8800 GTS is based on the G92 chip, already used in the GeForce 8800 GT, a video card of the so-called upper mid-price level, so we already know its main features and characteristics. Unlike the two GeForce 8800 GT models with recommended prices of $200 to $250 (which do not correlate well with real prices at the moment, by the way), the new solution carries a manufacturer's recommended price of $349-399. The chip as configured here supports only a 256-bit memory bus, but has a larger number of unlocked universal execution units. Let's take a closer look at Nvidia's new lower high-end solution...

Before reading this material, we recommend that you carefully read the basic theoretical materials DX Current, DX Next and Longhorn, which describe various aspects of modern hardware graphics accelerators and architectural features of Nvidia and AMD products.

Those materials predicted the current situation with video chip architectures quite accurately, and many assumptions about future solutions proved justified. Detailed information about the Nvidia G8x/G9x unified architecture, using previous chips as examples, can be found in our earlier articles.

As we mentioned in the previous material, the G92 chip inherits all the advantages of the G8x: a unified shader architecture, full support for DirectX 10, high-quality anisotropic filtering methods, and the CSAA antialiasing algorithm with up to sixteen samples. Some chip blocks differ slightly from those in the G80, but the main change is the 65 nm manufacturing process, which reduces production costs. Let's look at the characteristics of the GPU and of the new video solutions based on it:

Graphics accelerator Geforce 8800 GTS 512MB

  • Chip codename G92
  • 65 nm technology
  • 754 million transistors (more than G80)
  • Unified architecture with an array of shared processors for stream processing of vertices and pixels, as well as other types of data
  • Hardware support for DirectX 10, including shader model Shader Model 4.0, geometry generation and recording intermediate data from shaders (stream output)
  • 256-bit memory bus, four independent 64-bit wide controllers
  • Core frequency 650 MHz (Geforce 8800 GTS 512MB)
  • ALUs operate at more than double the frequency (1.625 GHz for GeForce 8800 GTS 512MB)
  • 128 scalar floating-point ALUs (integer and floating formats, IEEE 754 32-bit precision FP support, MAD+MUL without clock loss)
  • 64 texture addressing units with support for FP16 and FP32 components in textures
  • 64 bilinear filtering units (as with the G84 and G86, there is no free trilinear filtering and no more-efficient anisotropic filtering)
  • Possibility of dynamic branches in pixel and vertex shaders
  • 4 wide ROP blocks (16 pixels) with support for antialiasing modes up to 16 samples per pixel, including with FP16 or FP32 frame buffer formats. Each block consists of an array of flexibly configurable ALUs and is responsible for Z generation and comparison, MSAA, and blending. Peak performance of the whole subsystem: up to 64 MSAA samples (+ 64 Z) per clock; in Z-only mode, 128 samples per clock
  • Record results from up to 8 frame buffers simultaneously (MRT)
  • All interfaces (two RAMDACs, two Dual DVI, HDMI, HDTV) are integrated on the chip (unlike the GeForce 8800 GTX/Ultra, where they reside on the external NVIO chip)

GeForce 8800 GTS 512MB reference card specifications

  • Core frequency 650 MHz
  • Universal processor frequency 1625 MHz
  • Number of universal processors 128
  • Number of texture blocks 64, blending blocks 16
  • Effective memory frequency 1.94 GHz (2*970 MHz)
  • Memory type GDDR3
  • Memory capacity 512 megabytes
  • Memory bandwidth 62.1 gigabytes per second.
  • Theoretical maximum fill rate is 10.4 gigapixels per second.
  • Theoretical texture sampling speed up to 41.6 gigatexels per second.
  • Two DVI-I Dual Link connectors, supports output resolutions up to 2560x1600
  • SLI connector
  • PCI Express 2.0 bus
  • TV-Out, HDTV-Out, HDCP support
  • Recommended price $349-399
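
For readers who want to check these figures, the headline numbers follow directly from the clocks and unit counts listed above. Below is a minimal illustration in Python (our own sketch, not Nvidia's methodology); the variable names are ours, and the GFLOPS line assumes the optimistic MAD+MUL count of three operations per clock:

    # Theoretical peaks for the GeForce 8800 GTS 512MB, from the specs above
    core_mhz, shader_mhz, mem_mhz = 650, 1625, 970
    rops, tmus, alus, bus_bits = 16, 64, 128, 256

    fill_gpix = core_mhz * rops / 1000                # 10.4 Gpix/s
    texel_gtex = core_mhz * tmus / 1000               # 41.6 Gtex/s
    bw_gbytes = mem_mhz * 2 * (bus_bits // 8) / 1000  # 62.1 GB/s (DDR doubles the rate)
    gflops = alus * shader_mhz * 3 / 1000             # 624 GFLOPS, if MAD+MUL counts as 3 ops
    print(fill_gpix, texel_gtex, bw_gbytes, gflops)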

As the specifications show, the new GeForce 8800 GTS 512MB differs considerably from the old models. The number of execution units has grown (ALUs and TMUs), and the GPU frequency has risen significantly, including the frequency of the shader units. Despite the narrower memory bus (256-bit versus 320-bit in the older versions), memory bandwidth remains practically the same, since the memory operating frequency was raised correspondingly. As a result, the new GTS has significantly more shader execution power and a higher texture fetch speed, while fill rate and memory bandwidth are essentially unchanged.

Because of the changed memory bus width, the memory capacity can no longer be 320 MB or 640 MB; the options are 256 MB, 512 MB or 1 GB. The first value is clearly too small for a card of this class, and the last is excessive: the slight performance gain would hardly justify the increased price of such options (which may well appear in the future). Nvidia therefore chose the middle option, 512 MB, which, as our recent research has shown, is the golden mean for modern games, which are very demanding on video memory and use up to 500-600 megabytes. We never tire of repeating that this does not mean all game resources must reside in the video card's local memory; resource management can be left to the API, especially in Direct3D 10 with its video memory virtualization.
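
The capacity steps themselves follow from the bus organization. Assuming the usual layout of one 32-bit GDDR3 chip per 32 bits of bus (a simplified sketch; the chip densities are the standard GDDR3 options of that time):

    # Capacities available on a 256-bit bus with one 32-bit chip per channel
    chips = 256 // 32                       # 8 chips
    for density_mbit in (256, 512, 1024):   # common GDDR3 chip densities
        print(chips * density_mbit // 8, "MB")   # -> 256, 512, 1024 MB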

Architecture

As we wrote in the previous article on the GeForce 8800 GT, the G92 is essentially the previous flagship G80 transferred to a new process technology, with some changes. The new chip has 8 large shader units and 64 texture units, as well as four wide ROPs. Despite all the changes for the better, the chip's transistor count seems too large. The increased complexity is probably explained by the inclusion of the previously separate NVIO chip and of the new-generation video processor; the count is also affected by the more complex TMUs, and larger caches are likely present to keep the 256-bit memory bus efficient.

There are very few architectural changes in the G92 chip; we covered them all in the previous article and will not repeat ourselves. Everything said in the reviews of previous solutions remains valid; we will only present the main diagram of the G92 chip, now with all 128 universal processors:

The only changes relative to the G80 are the reduced number of ROP units and the TMU modifications described in our previous material. Let us point out once again that the 64 texture units of the GeForce 8800 GTS 512MB will in most real applications NOT be stronger than the 32 units of the GeForce 8800 GTX. With trilinear and/or anisotropic filtering enabled, their performance is approximately equal, since they have the same number of texture data filtering units. Where unfiltered fetches are used, of course, the G92-based solutions will be faster.

PureVideo HD

One of the expected changes in the G92 is the built-in second-generation video processor known from the G84 and G86, with expanded PureVideo HD support. This version of the video processor almost completely offloads the CPU when decoding all common types of video data, including the "heavy" H.264 and VC-1 formats. The G92 uses a new model of programmable PureVideo HD video processor that includes the so-called BSP engine. The new processor supports decoding of H.264, VC-1 and MPEG-2 at resolutions up to 1920x1080 and bitrates up to 30-40 Mbps, performing CABAC and CAVLC decoding in hardware, which allows all existing HD DVD and Blu-ray discs to be played even on mid-powered single-core PCs. VC-1 decoding is not as fully offloaded as H.264, but it is still supported by the new processor. You can read more about the second-generation video processor in our reviews of the G84/G86 and G92, linked at the beginning of the article.

PCI Express 2.0

Among the real innovations in the G92 is support for the PCI Express 2.0 bus. The second version of PCI Express doubles the per-lane bandwidth, from 2.5 Gbit/s to 5 Gbit/s, so an x16 connector can transfer data at up to 8 GB/s in each direction, as opposed to 4 GB/s for version 1.x. Importantly, PCI Express 2.0 is compatible with PCI Express 1.1: old video cards will work in new motherboards, and new video cards supporting the second version will remain functional in boards without such support (provided there is sufficient external power, and without the increased interface bandwidth, of course).
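
The per-direction figures are easy to verify if one remembers that both PCI Express 1.x and 2.0 use 8b/10b encoding, so only 8 of every 10 bits transferred are payload. A quick sketch of the arithmetic (ours, not taken from the specification text):

    # Usable PCI Express bandwidth per direction for an x16 slot
    def pcie_gb_per_s(gt_per_s, lanes=16):
        # 8b/10b leaves 80% payload; divide by 8 to convert bits to bytes
        return gt_per_s * 0.8 * lanes / 8

    print(pcie_gb_per_s(2.5))   # PCI Express 1.x: 4.0 GB/s
    print(pcie_gb_per_s(5.0))   # PCI Express 2.0: 8.0 GB/s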

The real impact of the higher PCI Express bandwidth on performance was assessed by Nvidia's main competitor in its own materials. According to them, a mid-range video card with 256 megabytes of local memory speeds up by about 10% when moving from PCI Express 1.0 to 2.0 in modern games such as Company of Heroes, Call of Juarez, Lost Planet and World in Conflict, with figures ranging from 5% to 25% across different games and test conditions. Naturally, this applies to high resolutions, where the frame buffer and related buffers occupy most of the local video memory and some resources are stored in system memory.

To provide backward compatibility with existing PCI Express 1.0 and 1.1 solutions, the 2.0 specification supports both 2.5 Gbit/s and 5 Gbit/s transfer rates. PCI Express 2.0 backward compatibility allows legacy 2.5 Gbit/s devices to be used in 5 Gbit/s slots, where they simply operate at the lower speed, while a device built to the 2.0 specification can support both speeds. In theory compatibility is good, but in practice problems may arise with certain combinations of motherboards and expansion cards.

Support for external interfaces

Everything here is the same as with the GeForce 8800 GT; there are no differences. The external interfaces that GeForce 8800 GTX/Ultra boards delegate to the additional NVIO chip (two 400 MHz RAMDACs, two Dual Link DVI (or LVDS), HDTV-Out) are in this case integrated into the GPU: support for all of them is built into the G92 itself.

GeForce 8800 GTS 512MB video cards usually carry two Dual Link DVI outputs with HDCP support. As for HDMI, support is present in the chip and can be implemented by manufacturers on specially designed cards, although a dedicated HDMI connector on a video card is entirely optional: it can be successfully replaced by a DVI-to-HDMI adapter, which is bundled with most modern video cards.

The 8800 GTX was a landmark event in the history of 3D graphics. It was the first card to support DirectX 10 and its associated unified shader model, which greatly improved image quality over previous generations, and it remained unrivaled in performance for a long time. Unfortunately, all this power came at a price. With competition from ATI expected and cheaper mid-range models based on the same technology on the way, the GTX was seen as a card aimed only at enthusiasts who wanted to be at the forefront of modern advances in graphics processing.

Model history

To correct this situation, nVidia released the GTS 640MB card of the same line a month later, followed a couple of months later by the GTS 320MB. Both offered performance similar to the GTX at a much more reasonable price. However, at around $300-$350, they were still too expensive for gamers on a budget: these were high-end, not mid-range, models. In hindsight, the GTS cards were worth every penny, as what followed for the rest of 2007 was one disappointment after another.

First up were the supposedly mid-range 8600 GTS and GT cards, heavily stripped-down versions of the 8800 series. They were smaller and quieter and had new HD video processing capabilities, but their performance fell below expectations, making them impractical purchases despite being relatively inexpensive. The alternative, the ATI Radeon HD 2900 XT, matched the GTS 640MB in performance, but consumed a huge amount of power under load and was too expensive to count as mid-range. Finally, ATI attempted a mid-range DX10 lineup in the form of the HD 2600 XT and Pro, whose multimedia capabilities were even better than the nVidia 8600's, but they were not powerful enough to deserve the attention of gamers who had already bought previous-generation video cards such as the X1950 Pro or 7900 GS.

And then, a year after the 8800 GTX went on sale, the 8800 GT arrived: the first real update of the DirectX 10 line. Although it took a long time, the nVidia GeForce 8800 GT, with characteristics close to the GTS model and a price in the 200-250 dollar range, finally reached the mid-range price bracket everyone had been waiting for. But what made the card so special?

More is not better

As technology advances and the transistor count of CPUs and GPUs grows, there is a natural need to shrink the transistors themselves. This leads to lower energy consumption, which in turn means less heat. More chips fit on one silicon wafer, which reduces their cost and theoretically puts a lower limit on the price of hardware built from them. However, changing production processes poses high risks for the business, so it is customary to launch a completely new architecture on an existing, proven process, as was the case with the 8800 GTX and HD 2900 XT. Once the architecture has matured, it is moved to a less power-hungry process, which in turn becomes the basis for the next design.

The 8800 series followed this path: the G80 cores of the GTX and GTS were produced on 90 nm technology, while the nVidia GeForce 8800 GT is based on the G92 chip, already made on a 65 nm process. While the change doesn't sound like much, it equates to roughly a one-third reduction in die size, and correspondingly more chips from each silicon wafer. As a result, electronic components become smaller, cheaper and more efficient, which is an extremely positive change. However, the G92 core is not just smaller; there is something else.

First of all, the VP2 video processing engine used in the 8600 series has now appeared in the GeForce 8800 GT 512MB, so you can enjoy high-definition video without slowing the system down. The final display engine, which on the 8800 GTX is handled by a separate chip, is also integrated into the G92. The result is 73 million more transistors on the chip than in the 8800 GTX (754 million versus 681 million), even though it has fewer stream processors and less texture and ROP power than the more powerful model.

A new version of nVidia's transparency anti-aliasing algorithm, added to the GeForce 8800 GT's arsenal, is designed to noticeably improve image quality while maintaining high performance. Beyond that, the new processor adds no new graphics capabilities.

The manufacturer apparently thought long and hard about which functionality of the previous 8800 series cards was underused and could be cut, and which should stay. The result is a GPU design that sits somewhere between the GTX and the GTS in performance, but with GTS functionality. As a consequence, the 8800 GTS card became completely redundant. The 8800 Ultra and GTX still deliver more graphics power, but with fewer features, much higher prices and higher power consumption. Against this background, the GeForce 8800 GT 512 MB really did take a strong position.

GPU architecture

The GeForce 8800 GT uses the same unified architecture that Nvidia introduced with the announcement of the G80 processor. The G92 consists of 754 million transistors and is manufactured on TSMC's 65 nm process. The die size is about 330 mm², and although this is noticeably smaller than the G80, it is still a long way from being a small piece of silicon. There are 112 scalar stream processors in total, running at 1500 MHz in the standard configuration. They are grouped into 7 clusters, each containing 16 stream processors that share 8 texture address units, 8 texture filter units and their own independent cache. This is the same cluster configuration Nvidia used in the G84 and G86 chips, but the G92 is a much more complex GPU than either of them.

Each shader processor can issue a MADD and a MUL instruction in one clock cycle; combined into a single array, these units can process all shader operations and calculations, in both integer and floating-point form. What is interesting, however, is that although the stream processors' capabilities match the G80's (apart from their number and frequency), Nvidia rates the chip at up to 336 GFLOPS, whereas counting both the MADD and the MUL gives 504 GFLOPS. As it turns out, the company took a conservative approach to rating compute power and did not count the MUL in its overall performance figure. At briefings and round tables, Nvidia representatives said that some architectural improvements should let the chip approach its theoretical maximum throughput; in particular, the task manager that distributes and balances data through the pipeline has been improved. Nvidia has announced double-precision support in future GPUs; this chip only emulates it, owing to the need to follow IEEE standards.
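
The gap between the two numbers is pure arithmetic over how many operations per clock each stream processor is credited with; a short check (our own illustration, not Nvidia's calculation):

    # GeForce 8800 GT: 112 stream processors at 1.5 GHz
    sps, shader_ghz = 112, 1.5
    print(sps * shader_ghz * 2)   # 336.0 GFLOPS counting the MADD alone (2 ops/clock)
    print(sps * shader_ghz * 3)   # 504.0 GFLOPS counting MADD + MUL (3 ops/clock)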

ROP architecture

The ROP structure of the G92 is similar to that of any other GeForce 8 family GPU: each section has its own L2 cache and is tied to a 64-bit memory channel, giving a total of 4 ROP sections and a 256-bit memory interface. Each section can process 4 pixels per clock cycle if each pixel is specified by four parameters (RGB color and Z); if only the Z component is present, each section can process 32 pixels per clock cycle.

ROPs support all common anti-aliasing formats used in previous GeForce 8-series GPUs. Since the chip has a 256-bit GDDR interface, Nvidia decided to make some improvements to ROP compression efficiency to reduce bandwidth and graphics memory usage when anti-aliasing is enabled at 1600x1200 and 1920x1200 resolutions.

As a derivative of the original G80 architecture, the texture address and filter units, as well as the ROP sections, run at a different clock than the stream processors; Nvidia calls this the base clock. In the case of the GeForce 8800 GT it is 600 MHz. This theoretically gives a fill rate of 9.6 gigapixels per second (Gp/s) and a bilinear texturing rate of 33.6 gigatexels per second (Gt/s). Users have noted that the clock frequency is quite low, and that a larger transistor count does not by itself guarantee added or preserved functionality. When the company moved from 110 nm to 90 nm technology, optimization cut the transistor count by 10%, so it would not be surprising if at least 16 more stream processors are present on the chip but disabled in this product.
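
Those theoretical figures follow from the unit counts given earlier (4 ROP sections of 4 pixels each, and 7 clusters with 8 filter units each); a minimal sketch of the arithmetic:

    base_mhz = 600
    pixels_per_clk = 4 * 4        # 4 ROP sections x 4 pixels per clock
    filters_per_clk = 7 * 8       # 56 bilinear filter units
    print(base_mhz * pixels_per_clk / 1000)    # 9.6 Gpix/s fill rate
    print(base_mhz * filters_per_clk / 1000)   # 33.6 Gtex/s bilinear texturing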

Design

The reference design runs the core, shader array and memory at 600 MHz, 1500 MHz and 1800 MHz, respectively. The 8800 GT features a single-slot cooling system, and a glossy black metal casing almost completely hides its front side. The 50 mm fan matches the radial-cooler design of the top models and performs its duties very quietly in all operating modes. Whether the computer is idle at the Windows desktop or running your favorite game, it is practically inaudible against the other noise sources in a PC case. It is worth noting, though, that the first time you power on a computer with the new video card you may get a fright: the fan starts up at full speed and howls, but the noise subsides before the desktop appears.

The metal front panel attracts fingerprints, but this is of little concern: once the card is installed, they are impossible to see. According to user reviews, the cover helps prevent accidental damage to components such as the capacitors on the front of the card. The green printed circuit board combined with the black bezel of the heatsink gives the 8800 GT a distinctive look, and the GeForce logo runs along the top edge of the front panel. Mark Rein, the company's vice president, told reporters that this entailed additional cost but was necessary to help users at LAN parties figure out which graphics card is the heart of the system.

Under the heatsink are eight 512-megabit memory chips, for a total of 512 MB. This is GDDR3 DRAM rated for an effective frequency of up to 2000 MHz. The GPU supports both GDDR3 and GDDR4, but the latter capability was never used in this series.

Heating and power consumption

The nVidia GeForce 8800 GT video card is very sexy: its design is simply pleasing to the eye and, given the internal changes in the G92, it exudes a feeling of sophisticated engineering.

More important than aesthetics, however, according to users, is the fact that the manufacturer managed to pack all this power into a single-slot device. This is not just a welcome change, it is a pleasant surprise: the GeForce 8800 GT's specifications would lead you to expect a cooler two slots high. Nvidia got away with such a slim design thanks to the changed manufacturing process, which reduced heat output to a level a low-profile fan can handle. In fact, temperatures have dropped so much that even the relatively small cooler doesn't have to spin very fast, leaving the card virtually silent even in demanding games. The board temperature does rise significantly, however, so a decent airflow is needed to prevent overheating. Thanks to the process shrink, the GeForce 8800 GT 512 MB consumes only 105 W even at full load, so a single six-pin power connector is all it needs. That is another nice change.

The card was the first to support PCIe 2.0, which would allow it to draw up to 150 W of power. However, the company decided that for backward compatibility it was much easier to limit the power drawn through the slot to 75 W. This means that regardless of whether the card sits in a PCIe 1.1 or PCIe 2.0 motherboard, only 75 W comes through the slot, with the rest of the power supplied by the auxiliary connector.
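
A back-of-the-envelope budget, using only the figures quoted in this article, shows why the 75 W slot limit is unproblematic for this card:

    slot_w, aux_w, board_max_w = 75, 75, 105   # slot limit, 6-pin connector, board maximum
    print(board_max_w <= slot_w + aux_w)       # True: 150 W available against 105 W drawn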

Processor VP2

Speaking of HDCP signal transmission, it is worth touching on the new-generation video processor that nVidia built into the G92. The VP2 is a single programmable SIMD processor whose flexibility allows it to be extended in the future. It enables very intensive processing of H.264-encoded video, shifting the load from the CPU to the GPU. In addition to the VP2 there is also an H.264 bitstream processor and an AES128 decoder. The former is designed specifically to accelerate the CAVLC and CABAC coding schemes, tasks that are very CPU-intensive in a pure software environment. The AES128 unit speeds up the encryption protocol required by video content protection schemes such as AACS and Media Foundation, both of which require video data (compressed and uncompressed) to be encrypted when transferred over buses like PCI Express.

Improving Image Quality

Nvidia has been trying hard to improve the transparency anti-aliasing technique that first appeared in the GeForce 7 series. The multisampling variant costs little performance, but in most cases it is not very effective. Supersampling, on the other hand, provides much better and more stable image quality, but at the cost of speed: it is an incredibly resource-intensive method of anti-aliasing.

The drivers that come with the video card contain a new multisampling algorithm. The differences are quite significant, but the final judgment is up to the user. The good news is that since this is a driver-level change, any hardware that supports transparency anti-aliasing can use the new algorithm, including cards as far back as the GeForce 7800 GTX. To activate the new mode, you only need to download the latest updates from the manufacturer's website.

According to user reviews, updating the driver for the GeForce 8800 GT is not difficult. Although the video card's web page links only to files for Windows Vista and XP, a search from the home page turns up what you need: for the nVidia GeForce 8800 GT, drivers for Windows 7-10 are installed via the 292 MB GeForce 342.01 driver package.

Connectivity

The output connectors of the nVidia GeForce 8800 GT are quite standard: two dual-link DVI-I ports with HDCP support, suitable for both analog and digital monitor and TV interfaces, and a 7-pin analog video port providing conventional composite and component output. The DVI connectors can be used with DVI-to-VGA and DVI-to-HDMI adapters, so any connection option is possible. Nevertheless, Nvidia still leaves audio support for HDMI as an option for third-party manufacturers: there is no audio processor inside the VP2, so audio has to be routed through the on-board S/PDIF connector. That is disappointing, since this thin and quiet card is otherwise ideal for gaming home theaters.

The GeForce 8800 GT is the first graphics card compatible with PCI Express 2.0, which means it can exchange data at up to 16 GB/s in total, twice as fast as the previous standard. While this may be useful for workstations and heavy computation, it will be of little use to the average gamer. In any case, the standard is fully compatible with all previous PCIe versions, so there is nothing to worry about.

nVidia's partner companies offer overclocked versions of the GeForce 8800 GT, as well as game packages.

BioShock from 2K Games

BioShock was one of the best games around at the time of the card's release. It is a "genetically modified" first-person shooter set in the underwater city of Rapture, built on the floor of the Atlantic Ocean by a man named Andrew Ryan as the realization of his 1930s art deco dream. 2K Boston and 2K Australia licensed Epic Games' Unreal Engine 3 and used it to great effect, also taking advantage of some DirectX 10 capabilities, all controlled through an option in the game's graphics panel.

The BioShock setting forced the developers to use a lot of water shaders. DirectX 10 technology helped improve the ripples when characters move through water, and pixel shaders were used en masse to create wet objects and surfaces. Additionally, the DX10 version of the game uses a depth buffer to create "soft" particle effects where they interact with their surroundings and look more realistic.

The nVidia GeForce 8800 GT shows its strengths in BioShock and is only slightly behind the GTX at 1680x1050. As the resolution rises, the gap between the cards grows, but not by much. The likely reason is that the game did not support transparency anti-aliasing, which makes the 8800 GTX's massive memory bandwidth advantage moot.

According to user reviews, the 8800 GT also works quite well with SLI enabled. Although its capabilities are not close to those of the GTX, it competes with a CrossFire configuration of Radeon HD 2900 XT cards with 512 MB of memory. Perhaps even more interesting is the fact that at 1920x1200 the 8800 GT is almost as fast as the 640MB GTS!

Crysis Single Player Demo from Electronic Arts

This game will literally make your video card cry! Its graphics were the big surprise: they surpassed everything seen in computer games before it. The built-in GPU benchmark runs much faster than actual gameplay, but around 25 fps in the performance test is enough to give the user an acceptable frame rate, and unlike in other games, a low frame rate here still looks reasonably smooth.

The nVidia GeForce 8800 GT achieves adequate frame rates in Crysis at 1680x1050 with high detail under DirectX 10; it is not as fast as the GTX, but noticeably more productive than the Radeon HD 2900 XT and the 8800 GTS 640MB. The GTS 320MB struggles with Crysis and has to drop most settings to medium to keep the frame rate above 25 fps even at 1280x1024.

Performance

As you would expect, the 8800 GTX remains unbeatable, but the GeForce 8800 GT is ahead of the GTS in most tests. At the highest resolutions and anti-aliasing settings, the GT's reduced memory bandwidth lets it down, and the GTS occasionally pulls ahead. Considering the price difference and its other advantages, however, the 8800 GT is the better choice in any case. Conversely, comparing the GeForce 8800 GTX and 8800 GT confirms every time why the first card is so expensive: while other models slow down significantly as the pixel count rises and transparency anti-aliasing and anisotropic filtering are enabled, the 8800 GTX continues to deliver excellent results. In particular, Team Fortress 2 at 1920x1200 with 8xAA and 16xAF runs twice as fast on the 8800 GTX as on the GT. For the most part, though, the GeForce 8800 GT shows good performance, if you disregard the incredibly low frame rates in Crysis.

Conclusion

While the GeForce 8800 GT does not match the specifications of the 8800 GTX series leader, it offers similar performance at a fraction of the price and includes many additional features. Add to that the small size and quiet operation, and the model looks simply phenomenal.

It is well known that flagship graphics adapters in the highest price bracket are, first of all, a public demonstration of the developer's technological achievements. Although these solutions are deservedly popular among enthusiast gamers, they never define the overall sales picture. Not everyone is able or willing to pay $600, a sum comparable to the cost of the most expensive modern gaming console, for a graphics card alone, so the main contribution to the income of AMD/ATI and Nvidia comes from less expensive but far more widespread cards.

On November 9 last year, Nvidia announced the first consumer GPU with a unified architecture and support for DirectX 10. The new product was described in detail in our Directly Unified: Nvidia GeForce 8800 Architecture Review. Initially, the new chip formed the basis of two graphics cards: the GeForce 8800 GTX and the GeForce 8800 GTS. As you know, the older model performed excellently in games and can rightly be considered the choice of an enthusiast untroubled by price, while the younger model took its rightful place in its price category: less than $500, but more than $350.

$449 is not a very high price for a new-generation product with full DirectX 10 support and a serious level of performance in modern games. However, Nvidia decided not to stop there, and on February 12, 2007 presented the public with a more affordable GeForce 8800 GTS 320MB at an official price of $299, seriously strengthening its position in this sector. These two graphics cards are the subject of today's review, in which we will also find out how critical the amount of video memory is for the GeForce 8 family.

GeForce 8800 GTS: technical specifications

To evaluate the qualities and capabilities of both GeForce 8800 GTS models, we should remind our readers of the characteristics of the GeForce 8800 family.


All three GeForce 8800 models use the same G80 graphics core, consisting of 681 million transistors, plus an additional NVIO chip containing the TMDS transmitters, RAMDACs and so on. Using such a complex chip across several graphics adapter models in different price categories is not the best option in terms of final product cost, but it cannot be called unsuccessful either: Nvidia gets to sell rejected GeForce 8800 GTX chips (those that failed frequency screening and/or have a certain number of defective blocks), and for video cards selling at over $250 the cost of the chip is hardly critical. Both Nvidia and its arch-rival ATI actively use this approach; just remember the history of the G71 graphics processor, found both in the inexpensive mass-market GeForce 7900 GS video adapter and in the powerful dual-chip monster GeForce 7950 GX2.

The GeForce 8800 GTS was created in the same way. As the table shows, this video adapter differs significantly from its older brother: not only are the clock frequencies lower and some stream processors disabled, but the video memory size is reduced, the access bus is narrower, and some of the TMU and rasterization units are inactive.

In total, the GeForce 8800 GTS has 6 groups of stream processors, each with 16 ALUs, giving a total of 96 ALUs. This card's main rival, the AMD Radeon X1950 XTX, features 48 pixel processors, each of which, in turn, consists of 2 vector and 2 scalar ALUs for a total of 192 ALUs.

It would seem that the GeForce 8800 GTS should be seriously inferior to the Radeon X1950 XTX in raw computing power, but there are several nuances that make such an assumption not entirely legitimate. The first is that the GeForce 8800 GTS stream processors, like the ALUs in Intel NetBurst CPUs, run at a significantly higher frequency than the rest of the core: 1200 MHz versus 500 MHz, which by itself means a very substantial performance boost. The second follows from the architecture of the R580 GPU. In theory, each of its 48 pixel shader execution units can execute 4 instructions per clock cycle, not counting branch instructions, but only 2 of them can be of type ADD/MUL/MADD, the remaining two always being ADD instructions with a modifier. Accordingly, the efficiency of the R580 pixel processors will not be maximal in all cases. The G80 stream processors, on the other hand, have a completely scalar architecture, and each of them can execute two scalar operations per clock cycle, for example MAD+MUL. Although we still lack exact data on the architecture of Nvidia's stream processors, in this article we will look at how the new unified architecture of the GeForce 8800 improves on the Radeon X1900's and how this affects speed in games.

As for the texturing and rasterization subsystems, judging by the characteristics, the GeForce 8800 GTS has more texture units (24) and rasterizers (20) than the Radeon X1950 XTX (16 TMUs, 16 ROPs), but its clock frequency (500 MHz) is significantly lower than that of the ATI product (650 MHz). Thus, neither side has a decisive advantage, and gaming performance will be determined mainly by how "successful" each micro-architecture is, not by a numerical advantage in execution units.
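
Multiplying units by clocks makes the "no decisive advantage" point concrete; a rough sketch assuming one texel per TMU and one pixel per ROP per clock:

    # Peak rates = units x clock (MHz) / 1000
    gts_tex, xtx_tex = 24 * 500 / 1000, 16 * 650 / 1000     # 12.0 vs 10.4 Gtex/s
    gts_fill, xtx_fill = 20 * 500 / 1000, 16 * 650 / 1000   # 10.0 vs 10.4 Gpix/s
    print(gts_tex, xtx_tex, gts_fill, xtx_fill)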

It is noteworthy that the GeForce 8800 GTS and the Radeon X1950 XTX have the same memory bandwidth, 64 GB/s: the GeForce 8800 GTS pairs a 320-bit memory bus with GDDR3 memory running at 1600 MHz, while the Radeon X1950 XTX can be found with 2 GHz GDDR4 memory on a 256-bit bus. Given ATI's claims of a more advanced ring-topology memory controller in the R580 compared with Nvidia's conventional controller, it will be interesting to see whether the Radeon gains an advantage at high resolutions with full-screen anti-aliasing against its next-generation competitor, as happened against the GeForce 7.
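
The parity is easy to verify: bandwidth is the effective data rate multiplied by the bus width in bytes, and the wider-but-slower and narrower-but-faster configurations land on the same number:

    gts_bw = 1600 * (320 // 8) / 1000   # GDDR3 at 1600 MHz effective, 320-bit: 64.0 GB/s
    xtx_bw = 2000 * (256 // 8) / 1000   # GDDR4 at 2000 MHz effective, 256-bit: 64.0 GB/s
    print(gts_bw, xtx_bw)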

The less expensive GeForce 8800 GTS with 320MB of memory, announced on February 12, 2007 and intended to replace the GeForce 7950 GT in the performance-mainstream segment, differs from the regular model only in the amount of video memory: to create it, Nvidia simply replaced the 512 Mbit memory chips with 256 Mbit ones. A simple and technologically straightforward solution, it let Nvidia assert its technological superiority in the $299 price category, which is quite popular among users. Later we will find out how much this affects the new product's performance and whether a potential buyer should pay an extra $150 for the model with 640 MB of video memory.

In today's review, the GeForce 8800 GTS 640MB is represented by the MSI NX8800GTS-T2D640E-HD-OC video adapter. Let us tell you more about this product.

MSI NX8800GTS-T2D640E-HD-OC: packaging and accessories

The video adapter arrived at our laboratory in its retail version: packed in a colorful box along with all the accompanying accessories. The box is relatively small, especially in comparison with the box of the MSI NX6800 GT, which in its time could rival Asustek Computer's packaging in size. Despite its modest dimensions, MSI's packaging traditionally includes a convenient carrying handle.


The box design, in calm white and blue tones, is easy on the eyes; the front side is decorated with an image of a pretty red-haired angel girl, so there is none of the aggressive imagery so popular among video card manufacturers. Three stickers inform the buyer that the card is pre-overclocked by the manufacturer, supports HDCP, and comes with the full version of Company of Heroes. On the back of the box you can find information about Nvidia SLI and MSI D.O.T. Express technologies. The latter is a dynamic overclocking technology that, according to MSI, can raise the video adapter's performance by 2-10%, depending on the overclocking profile used.

Having opened the box, in addition to the video adapter itself, we found the following set of accessories:


Quick Installation Guide
Quick User Guide
DVI-I -> D-Sub adapter
YPbPr/S-Video/RCA splitter
S-Video cable
2xMolex -> 6-pin PCI Express power adapter
CD with MSI drivers and utilities
Two-disc edition of the game Company of Heroes

Both guides are designed as posters; in our opinion they are too simple and contain only the most basic information. The pursuit of language count (the quick user guide covers 26 of them) means that little of use beyond basic installation instructions can be gleaned from it. We think the manuals could have been somewhat more detailed, which would have benefited inexperienced users.

The driver disc contains an outdated version of Nvidia ForceWare, 97.29, as well as a number of proprietary utilities, among which MSI DualCoreCenter and MSI Live Update 3 deserve special mention. The first is a unified control center that allows you to overclock both the video card and the CPU, but full functionality requires an MSI motherboard equipped with a CoreCell chip, so it is of little use to owners of boards from other manufacturers. The MSI Live Update 3 utility tracks driver and BIOS updates and installs them conveniently over the Internet. This is quite a handy option, especially for those who do not want to deal with the intricacies of manually updating the video adapter's BIOS.

The inclusion of the full version of the popular tactical RTS Company of Heroes deserves special praise. This is truly a game of the highest caliber, with excellent graphics and thoroughly polished gameplay; many players call it the best game in the genre, as numerous awards confirm, including the title of Best Strategy Game of E3 2006. As we have already noted, despite belonging to the real-time strategy genre, Company of Heroes boasts modern graphics on the level of a good first-person shooter, so the game is perfect for demonstrating the capabilities of the GeForce 8800 GTS. In addition to Company of Heroes, the discs contain a demo version of Warhammer 40,000: Dawn of War - Dark Crusade.

We can confidently call the MSI NX8800GTS-T2D640E-HD-OC package good, thanks to the full version of the very popular tactical RTS Company of Heroes and MSI's convenient software.

MSI NX8800GTS-T2D640E-HD-OC: PCB design

For the GeForce 8800 GTS model, Nvidia developed a separate, more compact printed circuit board than the one used for the GeForce 8800 GTX. Since all GeForce 8800 cards are supplied to Nvidia's partners ready-made, almost everything said below applies not only to the MSI NX8800GTS but to any other GeForce 8800 GTS, whether the 640 MB or the 320 MB version.


The GeForce 8800 GTS PCB is significantly shorter than the GeForce 8800 GTX's: only 22.8 centimeters versus almost 28 centimeters for the GeForce 8 flagship. In fact, the GeForce 8800 GTS has the same dimensions as the Radeon X1950 XTX, or even slightly smaller, since the cooler does not protrude beyond the PCB.

Our MSI NX8800GTS sample uses a board with a green solder mask, although the product is shown on the company's website with a PCB in the more familiar black color. Both "black" and "green" GeForce 8800 GTX and GTS cards are currently on sale. Despite numerous rumors circulating on the Internet, there is no difference between these cards other than the color of the PCB itself, as the official Nvidia website confirms. So what explains this "return to the roots"?

There are many conflicting rumors on this score. According to some, the composition of the black coating is more toxic than the traditional green one; others believe the black coating is harder to apply or more expensive. In practice this is most likely not the case: as a rule, solder masks of different colors cost the same, which rules out extra problems with masks of particular colors. The most likely explanation is the simplest and most logical one: cards of different colors are produced by different contract manufacturers, Foxconn and Flextronics. Moreover, Foxconn probably uses coatings of both colors, since we have seen both "black" and "green" cards from this manufacturer.


The GeForce 8800 GTS power subsystem is almost as complex as the GeForce 8800 GTX's and even contains more electrolytic capacitors, but it has a denser layout and only one external power connector, which allowed the printed circuit board to be made much shorter. GPU power is managed by the same digital PWM controller as on the GeForce 8800 GTX, a Primarion PX3540. Memory power is handled by a second controller, an Intersil ISL6549, which, incidentally, is absent on the GeForce 8800 GTX, where the memory power supply scheme is different.

The left part of the PCB, where the main components of the GeForce 8800 GTS (GPU, NVIO and memory) are located, is almost identical to the corresponding section of the GeForce 8800 GTX PCB, which is not surprising: developing the entire board from scratch would have required significant financial and time costs. Besides, designing the GTS board from scratch would most likely not have simplified it much, given the need to use the same G80-plus-NVIO tandem as on the flagship model. The only visible difference from the GeForce 8800 GTX is the absence of the second MIO (SLI) "comb"; in its place is a spot for a latching technological connector, possibly serving the same function, left unsoldered. Even the 384-bit memory bus layout is preserved, with the bus cut down to the required width in the simplest way: instead of 12 GDDR3 chips, only 10 are installed. Since each chip has a 32-bit interface, the 10 chips together give the required 320 bits. Theoretically, nothing prevents the creation of a GeForce 8800 GTS with a 384-bit memory bus, but such a card is extremely unlikely to appear in practice; a full-fledged GeForce 8800 GTX with lowered frequencies would be a more probable release.


The MSI NX8800GTS-T2D640E-HD-OC carries 10 GDDR3 Samsung K4J52324QE-BC12 chips of 512 Mbit each, running at a supply voltage of 1.8 V and rated for 800 (1600) MHz. The official Nvidia specification for the GeForce 8800 GTS calls for exactly this memory frequency. But the letters "OC" in the name of the MSI NX8800GTS version under review are there for a reason: the card is pre-overclocked, so its memory runs at a slightly higher 850 (1700) MHz, raising bandwidth from 64 GB/s to 68 GB/s.

Since the only difference between the GeForce 8800 GTS 320MB and the regular model is video memory reduced by half, that card simply gets 256 Mbit chips, for example Samsung K4J55323QC/QI series or Hynix HY5RS573225AFP. Otherwise, the two GeForce 8800 GTS models are identical to each other down to the smallest detail.

The NX8800GTS GPU marking differs slightly from that of the GeForce 8800 GTX processor and reads "G80-100-K0-A2", whereas the chip on the reference flagship card is marked "G80-300-A2". We know that GeForce 8800 GTS production can use G80 dies with defects in functional units and/or ones that failed frequency selection; perhaps these features are reflected in the marking.

The 8800 GTS processor has 96 of 128 stream processors active, 24 of 32 TMUs, and 20 of 24 ROPs. The standard GeForce 8800 GTS runs its base GPU clock at 500 MHz (513 MHz actual) and its shader processors at 1200 MHz (1188 MHz actual), but on the MSI NX8800GTS-T2D640E-HD-OC these parameters are 576 and 1350 MHz, matching the frequencies of the GeForce 8800 GTX. How much this affects the performance of the MSI product we will find out later, in the section devoted to the gaming test results.

The configuration of the NX8800GTS output connectors is standard: two DVI-I connectors capable of dual-link operation, and a universal seven-pin mini-DIN allowing the connection of both HDTV devices via the analog YPbPr interface and SDTV devices using S-Video or Composite. On the MSI product, both DVI connectors are carefully covered with rubber protective caps: a rather pointless but pleasant detail.

MSI NX8800GTS-T2D640E-HD-OC: Cooling System Design

The cooling system installed on the MSI NX8800GTS, as well as on the vast majority of GeForce 8800 GTS from other graphics card vendors, is a shortened version of the GeForce 8800 GTX cooling system described in the corresponding review.


The radiator and the heat pipe that carries heat from the copper base contacting the GPU heat spreader have been shortened, and the flat U-shaped heat pipe pressed into the base, responsible for distributing the heat flow evenly, is positioned differently. The aluminum frame on which all the cooler parts are mounted has many protrusions where it contacts the memory chips, the power transistors of the voltage regulator and the NVIO die. Reliable thermal contact is ensured by traditional inorganic-fiber pads impregnated with white thermal paste; for the GPU, a different but equally familiar thick dark-gray thermal paste is used.

Since there are relatively few copper elements in the cooling system, its mass is low, and mounting it does not require special plates to prevent fatal bending of the PCB: eight ordinary spring-loaded bolts securing the cooler directly to the board are quite enough. Damage to the graphics processor is virtually impossible, as it has a heat-spreader cap and is surrounded by a wide metal frame that protects the chip from skewing of the cooler and the board from excessive flex.

A radial fan with an impeller diameter of approximately 75 millimeters, with the same electrical ratings as in the GeForce 8800 GTX cooling system (0.48 A, 12 V) and connected to the board via a four-pin connector, is responsible for blowing air through the radiator. The system is enclosed in a translucent plastic shroud so that hot air is exhausted through the slots in the mounting bracket.

The design of the GeForce 8800 GTX and 8800 GTS coolers is well thought out, reliable, time-tested, virtually silent and highly effective, so there is no point in replacing it with anything else. MSI only swapped the Nvidia sticker on the shroud for its own, repeating the box design, and added another sticker with its logo to the fan.

MSI NX8800GTS-T2D640E-HD-OC: noise and power consumption

To estimate the noise level generated by the MSI NX8800GTS cooling system, we used a Velleman DVM1326 digital sound level meter with a resolution of 0.1 dB, measuring on the A-weighted curve. At the time of measurement, the background noise level in the laboratory was 36 dBA, and the noise level at a distance of one meter from a working test rig fitted with a passively cooled graphics card was 40 dBA.

In terms of noise, the NX8800GTS cooling system (like that of any other GeForce 8800 GTS) behaves exactly like the one installed on the GeForce 8800 GTX: the noise level is very low in all modes. In this respect, Nvidia's new design surpasses even the excellent GeForce 7900 GTX cooler, previously considered the best in its class with good reason. Here, the only way to achieve complete silence without losing cooling efficiency is to install a water cooling system, especially if serious overclocking is planned.

As our readers know, reference GeForce 8800 GTX samples from the first batches refused to run on our rig for measuring the power consumption of video cards. However, most newer cards of the GeForce 8800 family, among them the MSI NX8800GTS-T2D640E-HD-OC, worked without problems on this system, which has the following configuration:

CPU Intel Pentium 4 560 (3.60 GHz, 1 MB L2);
Intel Desktop Board D925XCV (i925X);
Memory PC-4300 DDR2 SDRAM (2x512MB);
Hard drive Samsung SpinPoint SP1213C (120 GB, Serial ATA-150, 8MB buffer);
Microsoft Windows XP Pro SP2, DirectX 9.0c.

As we have reported, the motherboard at the heart of the measuring platform was specially modified: measuring shunts fitted with connectors for the measuring equipment were inserted into the power circuits of the PCI Express x16 slot, and the 2xMolex -> 6-pin PCI Express power adapter was equipped with the same kind of shunt. The measuring tool is a Velleman DVM850BL multimeter, which has a measurement error of no more than 0.5%.

To load the video adapter in 3D mode, we use the first SM3.0/HDR graphics test from the Futuremark 3DMark06 suite, run in an endless loop at 1600x1200 with forced 16x anisotropic filtering. Peak 2D mode is emulated with the 2D Transparent Windows benchmark from the Futuremark PCMark05 suite.

Thus, after carrying out a standard measurement procedure, we were able to obtain reliable data on the level of power consumption not only of the MSI NX8800GTS-T2D640E-HD-OC, but also of the entire Nvidia GeForce 8800 family.

The GeForce 8800 GTX is indeed ahead of the previous "leader", the Radeon X1950 XTX, in power consumption, but only by 7 watts. Considering the enormous complexity of the G80, 131.5 watts in 3D mode can be considered a good result. Both additional power connectors of the GeForce 8800 GTX draw approximately equal power, not exceeding 45 W each even in the heaviest mode. Although the GeForce 8800 GTX PCB provides for an eight-pin power connector in place of one of the six-pin ones, it is unlikely to be needed even if GPU and memory clock speeds rise significantly. In idle mode, the efficiency of Nvidia's flagship leaves much to be desired, but that is the price of 681 million transistors and a shader clock that is huge by GPU standards. The high idle consumption is also partly due to the fact that the GeForce 8800 family does not lower clock speeds in this mode.

Both versions of the GeForce 8800 GTS are noticeably more modest, although they cannot boast the efficiency of Nvidia cards based on the previous-generation G71 core. The single power connector of these cards bears a significantly higher load, which in some cases can reach 70 watts or more. The power consumption of the GeForce 8800 GTS versions with 640 and 320 MB of video memory differs only slightly, which is not surprising: this parameter is the only difference between the two cards. The MSI product, running at higher frequencies, consumes more than the standard GeForce 8800 GTS: about 116 watts under 3D load, which is still less than the Radeon X1950 XTX. In 2D mode the AMD card is, of course, far more economical, but video adapters of this class are bought specifically for 3D use, so this parameter is not as critical as power consumption in games and 3D applications.

MSI NX8800GTS-T2D640E-HD-OC: overclocking features

Overclocking members of the Nvidia GeForce 8800 family involves a number of peculiarities that we consider it necessary to describe for our readers. As you probably remember, the first representatives of the seventh GeForce generation, using the 0.11-micron G70 core, could raise the frequencies of their rasterization units and pixel processors only in 27 MHz steps, and if the overclock was smaller than this value there was practically no performance gain. Later, in cards based on the G71, Nvidia returned to the standard overclocking scheme with 1 MHz steps, but in the eighth GeForce generation discrete clock changes have appeared again.

The scheme for distributing and changing clock frequencies in the GeForce 8800 is quite non-trivial, because the shader processor units in the G80 run at a significantly higher frequency than the other GPU units, in a ratio of roughly 2.3 to 1. Although the main frequency of the graphics core can change in steps smaller than 27 MHz, the frequency of the shader processors always changes in 54 MHz steps (2x27 MHz), which creates additional difficulties when overclocking, because all utilities manipulate the main frequency and not the frequency of the shader "domain". There is, however, a simple formula that determines the frequency of the GeForce 8800 stream processors after overclocking with sufficient accuracy:

OC shader clk = Default shader clk / Default core clk * OC core clk

where OC shader clk is the resulting shader frequency (approximate), Default shader clk is the initial frequency of the shader processors, Default core clk is the initial core frequency, and OC core clk is the frequency of the overclocked core.

Let's look at the behavior of the MSI NX8800GTS-T2D640E-HD-OC when overclocked using the RivaTuner 2 FR utility, which can monitor the real frequencies of the different areas, or "domains", of the G80 GPU. Since the MSI product has the same GPU frequencies (576/1350 MHz) as the GeForce 8800 GTX, the following information is also relevant for Nvidia's flagship graphics card. We raised the main GPU frequency in 5 MHz steps: a fairly small step that is not a multiple of 27 MHz.


Empirical testing confirmed that the main frequency of the graphics core does indeed change in variable steps of 9, 18 or 27 MHz, and we were unable to discern a pattern in these changes. The frequency of the shader processors changed in 54 MHz steps in all cases. Because of this, some frequencies of the main G80 “domain” turn out to be practically useless for overclocking, and setting them will only lead to extra GPU heating. For example, there is no point in raising the main core frequency to 621 MHz: the shader unit frequency will still be 1458 MHz. Thus, overclocking the GeForce 8800 should be done carefully, using the above formula and checking the monitoring data of RivaTuner or another utility with similar functionality.
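
To make the arithmetic concrete, here is a small Python sketch (our own illustration, not part of RivaTuner or any official tool) that applies the formula above and then snaps the result to the observed 54 MHz grid; the default frequencies are those of the cards discussed here:

```python
# A sketch of the shader-clock estimate described above, assuming the
# GeForce 8800 GTS / NX8800GTS defaults of 576 MHz core and 1350 MHz shader.

DEFAULT_CORE_CLK = 576     # MHz, main "domain"
DEFAULT_SHADER_CLK = 1350  # MHz, shader "domain"
SHADER_STEP = 54           # MHz, observed shader-clock granularity (2 x 27 MHz)

def estimated_shader_clk(oc_core_clk: float) -> int:
    """Estimate the real shader frequency for a given overclocked core clock."""
    # OC shader clk = Default shader clk / Default core clk * OC core clk
    target = DEFAULT_SHADER_CLK / DEFAULT_CORE_CLK * oc_core_clk
    # The shader domain changes only in 54 MHz steps, so snap to the nearest one.
    steps = round((target - DEFAULT_SHADER_CLK) / SHADER_STEP)
    return DEFAULT_SHADER_CLK + steps * SHADER_STEP

print(estimated_shader_clk(621))  # 1458 MHz, the "useless" case mentioned above
print(estimated_shader_clk(675))  # 1566 MHz, matching our overclocking result below
```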

It would be illogical to expect serious overclocking results from a version of the NX8800GTS already overclocked by the manufacturer; however, the card unexpectedly showed quite good potential, at least on the GPU side. We managed to raise its frequencies from the factory 576/1350 MHz to 675/1566 MHz, while the NX8800GTS steadily passed several 3DMark06 cycles in a row without any additional cooling. The processor temperature, according to Riva Tuner, did not exceed 70 degrees.

The memory overclocked much worse, since the NX8800GTS OC Edition is equipped with chips rated for 800 (1600) MHz that already operate above their nominal frequency, at 850 (1700) MHz. As a result, we had to stop at 900 (1800) MHz, since further attempts to raise the memory frequency invariably led to freezes or driver crashes.

Thus, the card showed good overclocking potential, but only for the GPU: the relatively slow memory chips did not allow their frequency to be raised significantly. For them, reaching the GeForce 8800 GTX level should be considered a good achievement, and a 320-bit bus at this frequency already provides a significant advantage in throughput over the Radeon X1950 XTX: 72 GB/sec versus 64 GB/sec. Of course, overclocking results may vary from one MSI NX8800GTS OC Edition sample to another, and also with the use of additional measures such as modifying the card's power circuitry or installing water cooling.
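
The bandwidth figures above are simple arithmetic: bus width in bytes multiplied by the effective (DDR) memory frequency. A quick sketch for checking such numbers:

```python
# Memory bandwidth = bus width (bytes) x effective DDR clock.
# "Effective" is double the physical clock: 900 MHz chips -> 1800 MHz effective.

def bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    return bus_width_bits / 8 * effective_clock_mhz / 1000

print(bandwidth_gb_s(320, 1800))  # overclocked GeForce 8800 GTS: 72.0 GB/s
print(bandwidth_gb_s(256, 2000))  # Radeon X1950 XTX: 64.0 GB/s
```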

Test platform configuration and testing methods

A comparative study of GeForce 8800 GTS performance was carried out on platforms with the following configuration:

AMD Athlon 64 FX-60 processor (2 x 2.60 GHz, 2 x 1 MB L2)
Abit AN8 32X motherboard (nForce4 SLI X16) for Nvidia GeForce cards
Asus A8R32-MVP Deluxe motherboard (ATI CrossFire Xpress 3200) for ATI Radeon cards
OCZ PC-3200 Platinum EL DDR SDRAM memory (2x1GB, CL2-3-2-5)
Maxtor MaXLine III 7B250S0 hard drive (Serial ATA-150, 16MB buffer)
Creative SoundBlaster Audigy 2 sound card
Enermax Liberty 620W power supply (ELT620AWT, rated power 620W)
Dell 3007WFP monitor (30", maximum resolution 2560x1600)
Microsoft Windows XP Pro SP2, DirectX 9.0c
AMD Catalyst 7.2
Nvidia ForceWare 97.92

Since we consider the use of trilinear and anisotropic filtering optimizations unjustified, the drivers were configured in a standard way, implying the highest possible quality of texture filtering:

AMD Catalyst:

Catalyst A.I.: Standard
Mipmap Detail Level: High Quality
Wait for vertical refresh: Always off
Adaptive antialiasing: Off
Temporal antialiasing: Off
High Quality AF: On

Nvidia ForceWare:

Texture Filtering: High Quality
Vertical sync: Off
Trilinear optimization: Off
Anisotropic optimization: Off
Anisotropic sample optimization: Off
Gamma correct antialiasing: On
Transparency antialiasing: Off
Other settings: default

Each game was set to the highest possible level of graphics quality, and the game configuration files were not modified. To capture performance data, either the game's built-in capabilities or, in their absence, the Fraps utility were used. Where possible, minimum performance data was recorded.

Testing was carried out in three resolutions standard for our methodology: 1280x1024, 1600x1200 and 1920x1200. One of the goals of this review is to evaluate the effect of the GeForce 8800 GTS's video memory capacity on performance. In addition, the technical characteristics and price of both versions of this video adapter allow us to count on a fairly high level of performance in modern games with FSAA 4x, so we tried to use the “eye candy” mode wherever possible.

FSAA and anisotropic filtering were activated using game tools; in the absence of such, they were forced using the appropriate settings of the ATI Catalyst and Nvidia ForceWare drivers. Testing without full-screen antialiasing was used only for games that do not support FSAA for technical reasons, or when using FP HDR while testing representatives of the GeForce 7 family that do not support simultaneous operation of these capabilities.

Since our task was, among other things, to compare the performance of graphics cards that differ only in video memory capacity, the MSI NX8800GTS-T2D640E-HD-OC was tested twice: at its factory frequencies and at frequencies reduced to the GeForce 8800 GTS reference values: 513/1188/800 (1600) MHz. In addition to the MSI product and the reference Nvidia GeForce 8800 GTS 320MB, the following video adapters took part in the testing:

Nvidia GeForce 8800 GTX (G80, 576/1350/1900MHz, 128sp, 32tmu, 24rop, 384-bit, 768MB)
Nvidia GeForce 7950 GX2 (2xG71, 500/1200MHz, 48pp, 16vp, 48tmu, 32rop, 256-bit, 512MB)
AMD Radeon X1950 XTX (R580+, 650/2000MHz, 48pp, 8vp, 16tmu, 16rop, 256-bit, 512MB)

The following set of games and applications was used as test software:

3D first-person shooters:

Battlefield 2142
Call of Juarez
Far Cry
F.E.A.R. Extraction Point
Tom Clancy's Ghost Recon Advanced Warfighter
Half-Life 2: Episode One
Prey
Serious Sam 2
S.T.A.L.K.E.R.: Shadow of Chernobyl


3D shooters with third person view:

Hitman: Blood Money
Tomb Raider: Legend


RPG:

Gothic 3
Neverwinter Nights 2
The Elder Scrolls IV: Oblivion


Simulators:

X3: Reunion

Strategy games:

Command & Conquer: Tiberium Wars
Company of Heroes
Supreme Commander


Synthetic gaming tests:

Futuremark 3DMark05
Futuremark 3DMark06

Game tests: Battlefield 2142


There is no significant difference between the two GeForce 8800 GTS versions with different amounts of video memory up to a resolution of 1920x1200, although at 1600x1200 the younger model trails the older one by about 4-5 frames per second, with performance quite comfortable for both. The 1920x1440 resolution, however, is a turning point: the GeForce 8800 GTS 320MB abruptly drops out of the race, lagging by more than 1.5 times in average fps and by a factor of two in minimum fps, and even losing to cards of the previous generation. The cause is either a shortage of video memory or a problem in how the GeForce 8800 family manages it.

The MSI NX8800GTS OC Edition is noticeably ahead of the reference model starting at 1600x1200, but it cannot, of course, catch the GeForce 8800 GTX, although at 1920x1440 the gap between these cards becomes impressively narrow. Evidently, the difference in memory bus width between the GeForce 8800 GTS and GTX matters little here.

Game tests: Call of Juarez


Both GeForce 8800 GTS models show the same level of performance in all resolutions, including 1920x1200. This is quite logical, given testing with HDR enabled but FSAA disabled. Operating at nominal frequencies, the cards are inferior to the GeForce 7950 GX2.

The overclocked MSI version achieves parity with the GeForce 7950 GX2 at high resolutions, the use of which in this game is impractical even if you have a GeForce 8800 GTX in your system. For example, at 1600x1200 the average performance of Nvidia's flagship is only 40 fps, with dips to 21 fps in graphically intensive scenes. For a first-person shooter, such figures can hardly be called truly comfortable.

Game tests: Far Cry


The game is far from young and poorly suited for testing modern high-end video adapters. Despite the use of anti-aliasing, noticeable differences in their behavior appear only at a resolution of 1920x1200. The GeForce 8800 GTS 320MB runs into a shortage of video memory here and is therefore about 12% behind the model equipped with 640 MB. However, given Far Cry's modest requirements by today's standards, the player is in no danger of losing comfort.

The MSI NX8800GTS OC Edition is almost on par with the GeForce 8800 GTX: the latter's power is clearly unclaimed in Far Cry.


Due to the nature of the scene recorded at the Research level, the readings are more varied; already at a resolution of 1600x1200 you can see the differences in the performance of various representatives of the GeForce 8800 family. Moreover, the lag of the version with 320MB of memory is already visible here, despite the fact that the action takes place in the confined space of an underground cave. The performance difference between the MSI product and the GeForce 8800 GTX at 1920x1200 resolution is significantly greater than in the previous case, since the performance of shader processors at this level plays a more important role.




In FP HDR mode, the GeForce 8800 GTS 320MB no longer experiences problems with video memory capacity and is in no way inferior to its older brother, providing a decent level of performance in all resolutions. The version offered by MSI gives another 15% increase in speed, but even the version running at standard clock speeds is fast enough to use a resolution of 1920x1200, and the GeForce 8800 GTX will undoubtedly provide comfortable conditions for the player at a resolution of 2560x1600.

Game tests: F.E.A.R. Extraction Point


The visual richness of F.E.A.R. demands corresponding resources from the video adapter, and the 5% lag of the GeForce 8800 GTS 320MB is visible already at 1280x1024; at the next resolution, 1600x1200, it sharply grows to 40%.

The benefits of overclocking the GeForce 8800 GTS are not obvious: both the overclocked and normal versions allow you to play equally successfully at a resolution of 1600x1200. At the next resolution, the speed increase from overclocking is simply not enough to reach a level comfortable for first-person shooters. Only the GeForce 8800 GTX with 128 active shader processors and a 384-bit memory subsystem can do this.

Game tests: Tom Clancy's Ghost Recon Advanced Warfighter

Due to the use of deferred rendering, using FSAA in GRAW is technically impossible, therefore, the data is shown only for the anisotropic filtering mode.


The advantage of the MSI NX8800GTS OC Edition over the regular reference card increases as the resolution increases, and at 1920x1200 resolution it reaches 19%. In this case, it is this 19% that allows us to achieve an average performance of 55 fps, which is quite comfortable for the player.

As for comparing two GeForce 8800 GTS models with different amounts of video memory, there is no difference in their performance.

Game Tests: Half-Life 2: Episode One


At a resolution of 1280x1024 there is a limitation on the part of the central processor of our test system - all cards show the same result. At 1600x1200 the differences are already apparent, but they are not fundamental, at least for three variants of the GeForce 8800 GTS: all three provide a very comfortable level of performance. The same can be said about the resolution of 1920x1200. Despite the high-quality graphics, the game is undemanding in terms of video memory and the GeForce 8800 GTS 320MB loses only about 5% to the older and much more expensive model with 640 MB of memory on board. The overclocked version of the GeForce 8800 GTS, offered by MSI, confidently takes second place after the GeForce 8800 GTX.

Although the GeForce 7950 GX2 shows better results than the GeForce 8800 GTS at 1600x1200, one should not forget the problems that can arise with a card that is essentially an SLI tandem, as well as the significantly lower texture filtering quality of the GeForce 7 family. The new Nvidia solution, of course, still has driver problems, but it has promising capabilities and, unlike the GeForce 7950 GX2, every chance of being cured of its “childhood diseases” in the shortest possible time.

Game tests: Prey


The GeForce 8800 GTS 640MB does not show the slightest advantage over the GeForce 8800 GTS 320MB, perhaps because the game uses a modified Doom III engine and shows no special appetite for video memory. As with GRAW, the increased performance of the NX8800GTS OC Edition lets owners of this video adapter expect fairly comfortable gaming at 1920x1200; for comparison, the regular GeForce 8800 GTS shows the same numbers at 1600x1200. The flagship of the line, the GeForce 8800 GTX, is beyond competition.

Game tests: Serious Sam 2


The brainchild of the Croatian developers at Croteam has always strictly demanded 512 MB of video memory from the video adapter, punishing any shortfall with a monstrous drop in performance. The capacity of the inexpensive GeForce 8800 GTS version was not enough to satisfy the game's appetite: it managed only 30 fps at 1280x1024, while the version with 640 MB of memory on board was more than twice as fast.

For some unknown reason, the minimum performance of all GeForce 8800 cards in Serious Sam 2 is extremely low. This may be due either to architectural features of the family, which, as is known, uses a unified architecture without a division into pixel and vertex shaders, or to shortcomings in the ForceWare drivers. For this reason, GeForce 8800 owners cannot yet achieve complete comfort in this game.

Game tests: S.T.A.L.K.E.R.: Shadow of Chernobyl

Eagerly awaited by many players, the GSC Game World project finally saw the light of day after many years of development, 6 or 7 years after its announcement. The game turned out to be ambiguous, yet too multifaceted to be described in a few phrases. We will only note that, compared with one of the early builds, the project's engine has been significantly improved. The game received support for a number of modern technologies, including Shader Model 3.0, HDR and parallax mapping, but did not lose the ability to run in a simplified mode with a static lighting model, providing excellent performance on less powerful systems.

Because we focus on the highest level of image quality, we tested the game in full dynamic lighting mode with maximum detail. This mode, which among other things involves the use of HDR, does not support FSAA, at least in the current version of S.T.A.L.K.E.R. Since the game loses much of its attractiveness with the static lighting model and DirectX 8 effects, we limited ourselves to anisotropic filtering.


The game's appetite is anything but modest: at maximum detail even the GeForce 8800 GTX cannot deliver 60 fps at 1280x1024. However, it should be noted that at low resolutions the main limiting factor is CPU performance, since the spread between cards is small and their average results are quite close.

Nevertheless, some lag of the GeForce 8800 GTS 320MB behind its older brother is already visible here, and it only worsens as the resolution grows; at 1920x1200 the youngest member of the GeForce 8800 family simply runs out of video memory. This is not surprising, given the scale of the game's scenes and the abundance of special effects in them.

Overall, we can say that the GeForce 8800 GTX does not provide a serious advantage over the GeForce 8800 GTS in S.T.A.L.K.E.R., and the Radeon X1950 XTX looks as good as the GeForce 8800 GTS 320MB. AMD's solution is even somewhat ahead of Nvidia's, since it remains operable at 1920x1200; in practice, though, this mode is of little use given the average performance of 30-35 fps. The same applies to the GeForce 7950 GX2 which, incidentally, is somewhat ahead of both its direct competitor and the younger model of the new generation.

Game tests: Hitman: Blood Money


We noted earlier that 512 MB of video memory gives a video adapter some advantage in Hitman: Blood Money at high resolutions. Apparently, 320 MB is also enough: the GeForce 8800 GTS 320MB performs almost as well as the regular GeForce 8800 GTS regardless of resolution, with a difference of no more than 5%.

Both cards, as well as the overclocked GeForce 8800 GTS offered by MSI, allow you to successfully play in all resolutions, and the GeForce 8800 GTX even allows the use of higher-quality FSAA modes than the usual MSAA 4x, since it has the necessary performance margin for this.

Game Tests: Tomb Raider: Legend


Despite settings that deliver maximum graphics quality, the GeForce 8800 GTS 320MB handles the game just as successfully as the regular GeForce 8800 GTS. Both cards make 1920x1200 available to the player in “eye candy” mode. The MSI NX8800GTS OC Edition is slightly ahead of both reference cards, but only in average fps; the minimum remains the same, no higher than that of the GeForce 8800 GTX, which suggests this figure is determined by some feature of the game engine.

Game tests: Gothic 3

The current version of Gothic 3 does not support FSAA, so testing was carried out using only anisotropic filtering.


Despite the lack of full-screen anti-aliasing, the GeForce 8800 GTS 320MB is seriously inferior not only to the regular GeForce 8800 GTS but also to the Radeon X1950 XTX, staying only slightly ahead of the GeForce 7950 GX2. With performance of 26-27 fps at 1280x1024, this card is hardly the best choice for Gothic 3.

Note that the GeForce 8800 GTX is ahead of the GeForce 8800 GTS, at best, by 20%. Apparently, the game is unable to use all the resources available to Nvidia's flagship model. This is also evidenced by the slight difference between the regular and overclocked version of the GeForce 8800 GTS.

Game tests: Neverwinter Nights 2

As of version 1.04, the game allows the use of FSAA, but HDR support is still a work in progress, so we tested NWN 2 in "eye candy" mode.


As already mentioned, the playability threshold for Neverwinter Nights 2 is 15 frames per second, and the GeForce 8800 GTS 320MB balances on this edge already at 1600x1200, while for the version with 640 MB of memory 15 fps is the minimum below which its performance never drops.

Game Tests: The Elder Scrolls IV: Oblivion

Without HDR, the game loses much of its appeal, and although players have differing opinions on this matter, we tested TES IV in the mode with FP HDR enabled.


The performance of the GeForce 8800 GTS 320MB directly depends on the resolution: at 1280x1024 the new product competes with the most powerful cards of the previous generation, but at 1600x1200 and especially 1920x1200 it falls behind, losing up to 10% to the Radeon X1950 XTX and up to 25% to the GeForce 7950 GX2. Still, this is a very good result for a solution with an official price of just $299.

The regular GeForce 8800 GTS and its overclocked version offered by MSI feel more confident and provide comfortable performance at the level of first-person shooters in all resolutions.


When examining two versions of the GeForce 7950 GT that differ in video memory capacity, we recorded no serious performance differences in TES IV; in the analogous situation with the two versions of the GeForce 8800 GTS the picture is completely different.
While at 1280x1024 they behave identically, at 1600x1200 the version with 320 MB of memory delivers less than half the performance of the version equipped with 640 MB, and at 1920x1200 its performance drops to the level of the Radeon X1650 XT. It is quite obvious that the issue is not the amount of video memory as such, but how the driver allocates it. The problem can probably be fixed by a ForceWare update, and we will verify this with the release of new Nvidia driver versions.

As for the GeForce 8800 GTS and MSI NX8800GTS OC Edition, even in the open spaces of the Oblivion world they provide a high level of comfort in all resolutions, although, of course, not in the region of 60 fps, as in enclosed spaces. The most powerful solutions of the previous generation are simply not able to compete with them.

Game tests: X3: Reunion


The average performance of all members of the GeForce 8800 family is quite high, but the minimum is still at a low level, which means that drivers need to be improved. The results of the GeForce 8800 GTS 320MB are the same as those of the GeForce 8800 GTS 640MB.

Game Tests: Command & Conquer 3: Tiberium Wars

The Command & Conquer real-time strategy series is probably familiar to anyone with even a passing interest in computer games. The continuation of the series, recently released by Electronic Arts, takes the player into the familiar world of confrontation between GDI and the Brotherhood of Nod, joined this time by a third faction of alien invaders. The game engine is up to modern standards and uses advanced special effects; it also has one peculiarity: an fps limiter fixed at 30 frames per second, probably intended to cap the speed of the AI and thus avoid an unfair advantage over the player. Since the limiter cannot be disabled by standard means, we tested the game with it enabled, which means we paid attention primarily to the minimum fps.


30 fps is achieved by almost all test participants at all resolutions, with the exception of the GeForce 7950 GX2, which has problems with its SLI mode. Most likely, the driver simply lacks the appropriate support, since the official Windows XP Nvidia ForceWare driver for the GeForce 7 family was last updated more than six months ago.

As for the two GeForce 8800 GTS models, they demonstrate the same minimum fps and therefore provide the same level of comfort for the player. Although the model with 320 MB of video memory is inferior to the older one at 1920x1200, 2 frames per second is hardly a critical difference which, given identical minimum performance, does not affect gameplay in any way. Completely stutter-free play is provided only by the GeForce 8800 GTX, whose minimum fps does not fall below 25 frames per second.

Game tests: Company of Heroes

Due to issues with FSAA activation in this game, we decided not to use the "eye candy" mode and tested it in pure performance mode with anisotropic filtering enabled.


Here we have another game in which the GeForce 8800 GTS 320MB loses to the previous generation with its non-unified architecture. In effect, Nvidia's $299 solution is only suitable for resolutions up to 1280x1024 even with anti-aliasing disabled, whereas the $449 model, which differs only in video memory capacity, allows you to play successfully even at 1920x1200. The latter, however, is also available to owners of the AMD Radeon X1950 XTX.

Game Tests: Supreme Commander


Supreme Commander, unlike Company of Heroes, does not place strict demands on video memory capacity. In this game the GeForce 8800 GTS 320MB and the GeForce 8800 GTS show equally high results. Some additional gain can be had through overclocking, as the MSI product demonstrates, but even that will not reach the level of the GeForce 8800 GTX. Nevertheless, the available performance is sufficient for all resolutions, including 1920x1200, especially as its fluctuations are small and the minimum fps is only slightly below the average.

Synthetic tests: Futuremark 3DMark05


Since by default 3DMark05 uses a resolution of 1024x768 and no full-screen anti-aliasing, the GeForce 8800 GTS 320MB naturally demonstrates the same result as the regular version with 640 MB of video memory. The overclocked GeForce 8800 GTS supplied by Micro-Star International boasts a nicely round result of 13,800 points.






Unlike the overall score, which is obtained in the default mode, we run the individual tests in “eye candy” mode. In this case, however, that did not affect the GeForce 8800 GTS 320MB in any way: no noticeable lag behind the GeForce 8800 GTS was recorded even in the third, most resource-intensive test. The MSI NX8800GTS OC Edition took a stable second place after the GeForce 8800 GTX in all cases, confirming the results of the overall standings.

Synthetic tests: Futuremark 3DMark06


Both GeForce 8800 GTS variants behave the same as in the previous case. However, 3DMark06 uses more complex graphics which, combined with FSAA 4x in some tests, may paint a different picture. Let's take a look.






The results of the individual test groups are also as expected. The SM3.0/HDR group uses a larger number of more complex shaders, so the advantage of the GeForce 8800 GTX is more pronounced here than in the SM2.0 group. The AMD Radeon X1950 XTX also looks better where Shader Model 3.0 and HDR are actively used, while the GeForce 7950 GX2, on the contrary, shines in the SM2.0 tests.




After enabling FSAA, the GeForce 8800 GTS 320MB really does begin to lose to the GeForce 8800 GTS 640MB at 1600x1200, and at 1920x1200 the new Nvidia solution cannot complete the tests at all for lack of video memory. The loss is close to twofold in both the first and the second SM2.0 test, even though they differ greatly in scene construction.
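
A rough estimate of render-target size shows why 4x FSAA is so punishing for the 320 MB card (a simplification that ignores framebuffer compression, textures and driver overhead; the helper below is our own illustration):

```python
# Rough render-target footprint with multisampling: each MSAA sample stores
# its own color and depth/stencil value.

def render_target_mb(width: int, height: int, samples: int,
                     color_bytes: int = 4, depth_bytes: int = 4) -> float:
    return width * height * samples * (color_bytes + depth_bytes) / 2**20

print(render_target_mb(1600, 1200, 4))  # ~58.6 MB
print(render_target_mb(1920, 1200, 4))  # ~70.3 MB, before textures and other buffers
```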






In the first SM3.0/HDR test, the effect of video memory capacity on performance is clearly visible already at 1280x1024. The younger GeForce 8800 GTS model is about 33% behind the older one; at 1600x1200 the gap grows to almost 50%. The second test, with a much less complex and large-scale scene, is less demanding of video memory, and here the lag is 5% and about 20%, respectively.

Conclusion

Time to take stock. We tested two Nvidia GeForce 8800 GTS models, one of which is a direct competitor of the AMD Radeon X1950 XTX, while the other is aimed at the $299 mainstream performance sector. What can we say in light of the gaming test results?

The older model, which has an official price of $449, performed excellently in terms of speed. In most tests the GeForce 8800 GTS outperformed the AMD Radeon X1950 XTX, and only in some cases did it show performance on par with the AMD solution or fall behind the dual-processor GeForce 7950 GX2 tandem. However, even setting its exceptionally high performance aside, we would not equate the GeForce 8800 GTS 640MB with products of the previous generation: they do not support DirectX 10, the GeForce 7950 GX2 has significantly worse anisotropic filtering quality, and it carries potential problems caused by the incompatibility of certain games with Nvidia SLI technology.

The GeForce 8800 GTS 640MB can confidently be called the best solution in the $449-$499 price range. However, it is worth noting that the new generation of Nvidia products has not yet been cured of its childhood diseases: flickering shadows are still visible in Call of Juarez, and Splinter Cell: Double Agent, although it works, requires a special launch procedure on drivers version 97.94. At least until cards based on the new-generation AMD graphics processor hit the market, the GeForce 8800 GTS has every chance of holding the title of “best accelerator worth $449”. Nevertheless, before purchasing a GeForce 8800 GTS we would recommend checking the compatibility of the new Nvidia family with your favorite games.

The new GeForce 8800 GTS 320MB for $299 is also a very good purchase for the money: support for DirectX 10, high-quality anisotropic filtering and not a bad level of performance in typical resolutions are just some of the advantages of the new product. Thus, if you plan to play at 1280x1024 or 1600x1200 resolutions, the GeForce 8800 GTS 320MB is an excellent choice.

Unfortunately, this technically very promising card, which differs from the more expensive version only in video memory capacity, is sometimes seriously inferior to the GeForce 8800 GTS 640MB, not only in games with high video memory requirements, such as Serious Sam 2, but also in games where no performance difference between 512 MB and 256 MB cards had previously been recorded: TES IV: Oblivion, Neverwinter Nights 2, F.E.A.R. Extraction Point and some others. Considering that 320 MB of video memory is clearly more than 256 MB, the problem evidently lies in inefficient memory allocation, but we unfortunately do not know whether it stems from driver flaws or something else. Even with the above shortcomings, however, the GeForce 8800 GTS 320MB looks far more attractive than the GeForce 7950 GT and Radeon X1950 XT, although the latter will inevitably fall in price with the arrival of this video adapter.

As for the MSI NX8800GTS-T2D640E-HD-OC, we have a well-equipped product that differs from the Nvidia reference card not only in packaging, accessories and a sticker on the cooler. The video adapter is factory overclocked and in most games provides a noticeable performance gain over the standard GeForce 8800 GTS 640MB. It cannot, of course, reach the level of the GeForce 8800 GTX, but extra fps are never superfluous. These cards are apparently hand-picked for their ability to run at higher frequencies; at least our sample showed quite good overclocking results, and it is possible that most NX8800GTS OC Edition samples can be overclocked well beyond the factory settings.

The inclusion of a two-disc edition of Company of Heroes, considered by many game reviewers to be the best strategy game of the year, deserves special praise. If you are serious about purchasing a GeForce 8800 GTS, then this MSI product has every chance of becoming your choice.

MSI NX8800GTS-T2D640E-HD-OC: advantages and disadvantages

Advantages:

Increased performance level compared to the reference GeForce 8800 GTS
High level of performance at high resolutions using FSAA
Low noise level
Good overclocking potential
Good accessory bundle

Flaws:

Insufficiently debugged drivers

GeForce 8800 GTS 320MB: advantages and disadvantages

Advantages:

High level of performance in its class
Support for new modes and anti-aliasing methods
Excellent anisotropic filtering quality
Unified architecture with 96 shader processors
Future proof: support for DirectX 10 and Shader Model 4.0
Efficient cooling system
Low noise level

Flaws:

Insufficiently debugged drivers (problem with video memory allocation, poor performance in some games and/or modes)
High power consumption

In the year and more that has passed since the release of video cards based on the NVIDIA GeForce 8800 line of chips, the situation on the graphics accelerator market has become extremely unfavorable for the end buyer. An overclocker prepared to pay a tidy sum for a top-end video card simply had no alternative. The competitor from ATI (AMD) appeared later and ultimately could not compete with the GeForce 8800 GTX, let alone the subsequent GeForce 8800 Ultra. NVIDIA's marketers understood perfectly well that in the absence of competition there was no need to reduce the prices of top-end video cards at all. As a result, throughout this entire period prices for the GeForce 8800 GTX and Ultra remained at the same very high level, and only a few could afford such video cards.

However, the upper price segment has never been the defining or priority one for manufacturers of graphics chips and video cards. Leadership in this class is certainly prestigious for any company, but from an economic point of view the middle price range is the most profitable. Yet, as recent tests of the AMD Radeon HD 3850 and 3870, which claim supremacy in the mid-range, have shown, the performance of such video cards is unsatisfactory for modern games and in principle unacceptable for their high-quality modes. The NVIDIA GeForce 8800 GT is faster than this pair, but also falls short of comfort in DirectX 10 games. What comes next, if one is willing to pay extra? Until yesterday there was virtually nothing, as a literal price gap yawned between the GT and the GTX.

But technical progress does not stand still: the new NVIDIA G92 chip, produced using 65 nm technology, has allowed the company not only to attract overclockers with the quite successful GeForce 8800 GT, but also, yesterday, December 11 at 17:00 Moscow time, to announce a new product, the GeForce 8800 GTS 512 MB. Despite the card's rather plain name, the new graphics accelerator differs substantially from the regular GeForce 8800 GTS. In today's material we will introduce you to one of the first GeForce 8800 GTS 512 MB cards to appear on the Russian market, check its thermal behavior and overclocking potential and, of course, study the performance of the new product.


1. Specifications of the video cards participating in the testing

The technical characteristics of the new product are presented in the table below, compared with those of other NVIDIA GeForce 8800 family cards:

| Specification | GeForce 8800 GT | GeForce 8800 GTS | GeForce 8800 GTS 512 MB | GeForce 8800 GTX / Ultra |
|---|---|---|---|---|
| GPU | G92 (TSMC) | G80 (TSMC) | G92 (TSMC) | G80 (TSMC) |
| Process technology, nm | 65 (low-k) | 90 (low-k) | 65 (low-k) | 90 (low-k) |
| Core area, mm² | 330 | 484 | 330 | 484 |
| Transistors, million | 754 | 681 | 754 | 681 |
| GPU frequency, MHz | 600 (1512 shader) | 513 (1188 shader) | 650 (1625 shader) | 575 / 612 (1350 / 1500 shader) |
| Effective memory frequency, MHz | 1800 | 1584 | 1940 | 1800 / 2160 |
| Memory capacity, MB | 256 / 512 | 320 / 640 | 512 | 768 |
| Memory type | GDDR3 | GDDR3 | GDDR3 | GDDR3 |
| Memory bus width, bits | 256 (4 x 64) | 320 | 256 (4 x 64) | 384 |
| Interface | PCI-Express x16 (v2.0) | PCI-Express x16 (v1.x) | PCI-Express x16 (v2.0) | PCI-Express x16 (v1.x) |
| Unified shader processors, pcs. | 112 | 96 | 128 | 128 |
| Texture units, pcs. | 56 (28) | 24 | 64 (32) | 32 |
| Rasterization units (ROPs), pcs. | 16 | 20 | 16 | 24 |
| Pixel Shader / Vertex Shader version | 4.0 / 4.0 | 4.0 / 4.0 | 4.0 / 4.0 | 4.0 / 4.0 |
| Video memory bandwidth, GB/s | ~57.6 | ~61.9 | ~62.1 | ~86.4 / ~103.7 |
| Theoretical peak shading rate, Gpixel/s | ~9.6 | ~10.3 | ~10.4 | ~13.8 / ~14.7 |
| Theoretical peak texture sampling rate, Gtex/s | ~33.6 | ~24.0 | ~41.6 | ~36.8 / ~39.2 |
| Peak power consumption in 3D mode, W | ~106 | n/a | n/a | ~180 |
| Power supply requirements, W | ~400 | ~400 | ~400 | ~450 / ~550 |
| Reference design dimensions, mm (L x H x T) | 220 x 100 x 15 | 228 x 100 x 39 | 220 x 100 x 32 | 270 x 100 x 38 |
| Outputs | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out |
| Additionally | SLI support | SLI support | SLI support | SLI support |
| Recommended price, USD | 199 / 249 | 349-399 | 299-349 | 499-599 / 699 |
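
The theoretical peaks in the table are simply products of unit counts and clock frequencies; the sketch below (our assumption about the methodology, so small rounding differences are possible) reproduces several of the table's rows:

```python
# Theoretical peaks: shading rate = ROPs x core clock, texture rate = texture
# units x core clock, bandwidth = bus width (bytes) x effective memory clock.

def peaks(core_mhz, rops, tmus, bus_bits, mem_eff_mhz):
    fill_gpix = rops * core_mhz / 1000         # Gpixel/s
    tex_gtex = tmus * core_mhz / 1000          # Gtex/s
    bw_gb = bus_bits / 8 * mem_eff_mhz / 1000  # GB/s
    return fill_gpix, tex_gtex, bw_gb

# GeForce 8800 GTS 512 MB: 650 MHz core, 16 ROPs, 64 TMUs, 256-bit, 1940 MHz
print(peaks(650, 16, 64, 256, 1940))  # (10.4, 41.6, 62.08)
# GeForce 8800 GT: 600 MHz core, 16 ROPs, 56 TMUs, 256-bit, 1800 MHz
print(peaks(600, 16, 56, 256, 1800))  # (9.6, 33.6, 57.6)
```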

2. Review of BFG GeForce 8800 GTS 512 MB OC (BFGR88512GTSE)

The newest video card from a company well known to overclockers comes in a very compact box, decorated in dark colors.

Manufacturers have long been practicing the release of cheaper solutions based on graphics processors in the upper price segment. Thanks to this approach, the variety of ready-made solutions significantly increases, their cost decreases, and most users often prefer products with the most favorable price/performance ratio.
NVIDIA did the same with its newest G80 chip, the world's first graphics processor with a unified architecture and support for Microsoft's new API, DirectX 10.
Simultaneously with the flagship GeForce 8800 GTX, a cheaper version called the GeForce 8800 GTS was released. It is distinguished from its older sister by a reduced number of pixel processors (96 versus 128) and less video memory (640 MB instead of the GTX's 768 MB). The reduction in the number of memory chips also narrowed the memory interface to 320 bits (versus 384 bits for the GTX). More detailed characteristics of the graphics adapter in question can be found in the table:

Our test laboratory received the ASUS EN8800GTS video card, which we will review today. This manufacturer is one of NVIDIA's largest and most successful partners and traditionally does not skimp on packaging design or contents. As the saying goes, “there can never be too much of a good video card”. The new product comes in a box of impressive size:


On its front side there is a character from the game Ghost Recon: Advanced Warfighter. And it is not limited to an image: the game itself, as you may have guessed, is included in the package. On the back of the box are brief product specifications:


ASUS considered this amount of information insufficient and made a kind of book out of the box:


To be fair, we note that this approach has been practiced for quite some time, and by no means only by ASUS. But, as they say, everything is good in moderation. Maximum information content turned into a practical inconvenience: a light breath of wind, and the top cover opens. When transporting the hero of today's review we had to bend the retaining tongue so that it would serve its purpose, and bending it can, unfortunately, easily damage the packaging. Finally, the box is unreasonably large, which causes some inconvenience.

Video adapter: equipment and close inspection

Well, let's move on to the bundle and the video card itself. The adapter is packed in an antistatic bag and a foam container, which protects the board from both electrical and mechanical damage. The box contains discs, DVI-to-D-Sub adapters, VIVO and additional power cables, as well as a case for discs.


Of the discs included in the set, the GTI Racing game and the 3DMark06 Advanced Edition benchmark are noteworthy: this is the first time we have seen 3DMark06 bundled with a mass-produced retail video card! Without a doubt, this fact will appeal to users actively involved in benchmarking.


Now for the video card itself. It is based on the reference PCB design with the reference cooling system, and differs from other similar products only by the manufacturer's logo sticker, which keeps up the Ghost Recon theme.


The reverse side of the printed circuit board is likewise unremarkable: numerous SMD components and voltage regulator elements are soldered onto it, and that is all:


Unlike the GeForce 8800 GTX, the GTS requires only one additional power connector:


In addition, it is shorter than its older sister, which will certainly please owners of small cases. In terms of cooling there are no differences, and the ASUS EN8800GTS, like the GF 8800 GTX, uses a cooler with a large turbine-type fan. The heatsink consists of a copper base and an aluminum casing; heat is transferred from the base to the fins partly through heat pipes, which increases the overall efficiency of the design. Hot air is exhausted outside the system unit, although part of it, alas, remains inside the PC due to holes in the cooling system casing.


However, the problem of strong heating is easily solved: for example, air from a low-speed 120 mm fan directed at the card improves its temperature quite noticeably.
In addition to the graphics processor, the cooler also cools the memory chips, the elements of the power subsystem and the video signal DAC (the NVIO chip).


The latter was moved out of the main processor because of the latter's high frequencies, which caused interference and, as a consequence, malfunctions.
Unfortunately, this circumstance will cause difficulties when replacing the cooler, so NVIDIA's engineers simply had no right to make it poorly. Let's look at the video card in its “naked” form.


The PCB carries the G80 chip, revision A2, and 640 MB of video memory assembled from ten Samsung-made chips. The memory access time is 1.2 ns, which is slightly faster than on the GeForce 8800 GTX.


Note that the board has pads for two more memory chips. Had they been soldered onto the PCB, the total memory capacity would have been 768 MB and the bus width 384 bits. Alas, the video card's developer considered such a step unnecessary; this layout is used only in the professional Quadro series.
Finally, we note that the card has only one SLI connector, unlike the GF 8800 GTX, which has two.

Testing, analysis of results

The ASUS EN8800GTS video card was tested on a test bench with the following configuration:
  • processor - AMD Athlon 64 3200+@2400MHz (Venice);
  • motherboard - ASUS A8N-SLI Deluxe, NVIDIA nForce 4 SLI chipset;
  • RAM - 2x512MB DDR400@240MHz, timings 3.0-4-4-9-1T.
Testing was carried out under Windows XP SP2 with chipset driver version 6.86 installed.
The RivaTuner utility confirmed that the video card's characteristics correspond to the declared ones:


The video processor frequencies are 510/1190 MHz and the memory frequency is 1600 MHz. The maximum heating reached after repeated runs of the Canyon Flight test from the 3DMark06 suite was 76 °C at a standard cooler fan speed of 1360 rpm:


For comparison: under the same conditions, a GeForce 6800 Ultra AGP we had on hand heated up to 85 °C at maximum fan speed and, after prolonged operation, froze outright.

The performance of the new video adapter was tested using popular synthetic benchmarks and some gaming applications.

Testing with Futuremark development applications revealed the following:


Of course, on a system with a more powerful central processor, such as one based on the Intel Core 2 Duo architecture, the result would be better. In our case the aging Athlon 64 (even overclocked) cannot fully unlock the potential of today's top video cards.

Let's move on to testing in real gaming applications.


In Need for Speed Carbon the difference between the competitors is clearly visible, and the GeForce 7900 GTX lags behind the 8800-generation cards more than noticeably.


Since comfortable play in Half-Life 2 requires not only a powerful video card but also a fast processor, a clear performance difference appears only at maximum resolutions with anisotropic filtering and full-screen anti-aliasing activated.


In F.E.A.R. the picture is approximately the same as in HL2.


In the heavy modes of Doom 3 the card in question performed very well, but the same weak central processor prevents us from fully appreciating the gap between the GeForce 8800 GTS and its older sister.


Since Prey is built on the Quake 4 engine, which is itself a development of the Doom 3 engine, the video cards' performance results in these games are similar.
The progressive new unified shader architecture and the somewhat trimmed capabilities relative to its older sister place the GeForce 8800 GTS between NVIDIA's fastest graphics adapter of today and the flagship of the 7000 series. But the Californians were hardly going to act otherwise: a new product of this class should be more powerful than its predecessors. It is pleasing that in terms of speed the GeForce 8800 GTS is much closer to the GeForce 8800 GTX than to the 7900 GTX. Support for the latest graphics technologies also inspires optimism and should leave owners of such adapters a good performance reserve for the near (and, we hope, more distant) future.

Verdict

After examining the card we were left with extremely good impressions, further improved by the product's price factor. At the time of its market debut and for some time afterwards the ASUS EN8800GTS cost about 16,000 rubles according to price.ru, a clearly inflated price. Now the card has long been selling for roughly 11,500 rubles, which does not exceed the cost of similar products from competitors. Given the bundle, however, ASUS's brainchild is undoubtedly in the more advantageous position.

Pros:

  • DirectX 10 support;
  • advanced chip design (unified architecture);
  • excellent level of performance;
  • rich bundle;
  • famous brand;
  • price on par with products from less renowned competitors.
Cons:
  • the large box is not always convenient.
We thank the Russian representative office of ASUS for providing the video card for testing.
