
Overclocking the Nvidia GeForce GTX 680: performance and gaming tests


One of the most powerful video cards of 2012, the Nvidia GeForce GTX 680 is built around the GK104 GPU, which lets it comfortably run older games and play video at up to 4K. In its day it was a worthy competitor to the Radeon HD 7970 and the other high-performance single-GPU video cards. By 2018 it could no longer serve as the basis of a powerful or even mid-range gaming computer, but it is still quite suitable for budget gaming PCs.

From the factory the card carries the GK104 GPU with 1536 CUDA cores (three times as many as the GTX 580) and 128 texture units.

The remaining characteristics of the GTX 680 are as follows:

  • base GPU / effective memory frequency: 1006/6008 MHz;
  • memory size: 2048 MB;
  • process technology: 28 nm;
  • transistor count: 3.54 billion, more than the GTX 580 but noticeably fewer than the Radeon HD 7970;
  • memory bus width: 256 bits;
  • interface: PCI-E 3.0.

The card can drive up to four screens on one computer without any adapters, three of them in 3D Vision Surround mode. Two DVI ports (DVI-I and DVI-D), HDMI and DisplayPort are available for connection; the maximum display resolution is 3840×2160 pixels.

At launch the GTX 680 sold for $499, or 17,999 rubles in Russia, noticeably less than competitors such as the Radeon HD 7970. Asking what a GTX 680 costs now yields a figure of about $100-150 on the used market.

GTX 680 Overview

Externally, the card does not differ much from its predecessor, the GTX 580, particularly in its massive turbine cooler and the overall layout of the cooling system.

To save space, the vapor chamber was dropped from the board. The heatsink fins were given a beveled profile that eases the passage of air, and the turbine is made with a special sound-absorbing material; together these changes make the cooler noticeably quieter than the 580's or the Radeon HD 7970's.

Other features of the model include:

  • full-size DisplayPort and HDMI ports, which remove the need for adapters;
  • fewer memory chips on the board than on competing cards;
  • no heat spreader on the GPU, which reduces the risk of chipping the die when mounting the cooler.
The maximum power consumption of the GTX 680 is 195 W, lower than comparable Nvidia cards, so two six-pin connectors are enough for its supplementary power.
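As a quick sanity check, 195 W fits comfortably within what the slot plus two six-pin connectors can deliver; a minimal sketch of that budget arithmetic (the 75 W per source comes from the PCIe specification):

```python
# Power budget check: a PCIe x16 slot and each 6-pin connector are rated
# for 75 W under the PCIe specification.
PCIE_SLOT_W = 75
SIX_PIN_W = 75
CARD_TDP_W = 195   # maximum power consumption of the GTX 680

budget = PCIE_SLOT_W + 2 * SIX_PIN_W
print(f"available: {budget} W, TDP: {CARD_TDP_W} W, headroom: {budget - CARD_TDP_W} W")
# available: 225 W, TDP: 195 W, headroom: 30 W
```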

How to overclock an Nvidia Geforce GTX 680 graphics card

Overclocking the GTX 680 can raise its performance in games and even make the card usable for cryptocurrency mining. The process relies on the built-in GPU Boost technology plus one of the utilities that work with Nvidia GPUs, such as MSI Afterburner or RivaTuner.

By raising the video adapter's supply voltage by 10%, you can reach a core frequency of 1100 MHz (instead of 1006) and an effective memory frequency of 6912 MHz (instead of 6008). The price of this overclock is a heavier load on the cooling system, whose fans end up running at maximum speed.
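The relative gains implied by those figures are easy to check; a small calculation, not tied to any particular overclocking utility:

```python
# Relative frequency gains from the overclock described above.
base_core, oc_core = 1006, 1100    # MHz
base_mem, oc_mem = 6008, 6912      # effective MHz

print(f"core: +{(oc_core / base_core - 1) * 100:.1f}%")    # +9.3%
print(f"memory: +{(oc_mem / base_mem - 1) * 100:.1f}%")    # +15.0%
```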

In games, the overclocked adapter gained roughly 7-13% of performance, depending on the settings.

As for mining, the GTX 680 achieved about 15 MH/s on Ethereum at stock clocks and about 16.2 MH/s after the frequency increase.

That rate is not especially high and roughly matches the GTX 1050, although the newer Nvidia model draws less power, which makes it more profitable to mine with.

With relatively cheap electricity, mining on a GTX 680 can bring a home user a small profit. Even so, buying this card specifically for mining is not advisable: the market offers options with a better ratio of cost to hash rate.
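Whether that profit materializes depends entirely on electricity and coin prices; a back-of-the-envelope sketch, in which the revenue rate, power draw and tariff are assumed placeholders rather than measurements from the article:

```python
# Rough daily mining margin for one GTX 680. The hashrate is from the text;
# the revenue rate, power draw and electricity tariff are assumed
# placeholders - substitute current real-world values.
HASHRATE_MHS = 16.2        # MH/s after overclocking (about 15.0 at stock)
USD_PER_MHS_DAY = 0.02     # assumed Ethereum revenue per MH/s per day
CARD_POWER_KW = 0.195      # assumed full-load draw (the card's TDP)
USD_PER_KWH = 0.05         # assumed "relatively cheap" electricity

revenue = HASHRATE_MHS * USD_PER_MHS_DAY
cost = CARD_POWER_KW * 24 * USD_PER_KWH
print(f"revenue ${revenue:.2f}/day, power ${cost:.2f}/day, "
      f"margin ${revenue - cost:.2f}/day")
```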

Testing in games

Checking the GTX 680's FPS in games gives an idea of its capabilities and helps decide whether the card is worth buying. The most telling approach is to test the most demanding titles both from the card's release period and from today, pairing it with an appropriate CPU (a Core i5 or i7) and at least 6-8 GB of memory.

In games from the card's release period, the following results were obtained:

  • in Lost Planet 2, a game that makes heavy use of 3D graphics and constantly overlays displacement maps on geometric objects, the GTX 680 shows 52 frames per second;
  • in the even more demanding Metro 2033 at Full HD resolution, it manages about 25 FPS, playable but right on the edge of comfort;
  • in DiRT 3 the GTX 680 scores around 74 FPS, although this game no longer stresses all of the card's graphics capabilities;
  • in Mafia II the card reaches 50 FPS even without overclocking; overclocked, the figure rises by about 12%;
  • at maximum settings in the shooter Aliens vs. Predator, the adapter shows 32 FPS;
  • in the technological shooter Hard Reset the figures are the most impressive: at high settings the frame rate reaches 83 FPS;
  • in Shogun 2, one of the recent Total War strategy titles, the card again posts a respectable 106 FPS;
  • in Batman: Arkham City at 1680x1050 the adapter shows 27 FPS.

With today's games the GTX 680's performance drops considerably, but at medium and minimum settings almost everything still runs.

For example, Assassin's Creed: Origins, launched with a Core i7 processor and 6 GB of RAM, delivers a smooth experience at 1280x720. Kingdom Come: Deliverance, a 2018 title, will also run on low settings, although it needs 8 GB of RAM to work properly.

Driver update

Without a suitable driver, the GTX 680 runs into problems with demanding games and other software: error messages appear, and the computer freezes and runs slower than it should. The fix is to download the appropriate software from the manufacturer's official resource, Nvidia's site.

Other sites can also be used for driver updates if the user is sure of their reliability. Downloading and installing video card software from third-party resources is not recommended, since it often ends with the computer infected by a virus. Driver-updater utilities are also best avoided: they can install outdated drivers or malware on the PC.


NVIDIA has entered the next-generation graphics card race. At the end of March this year the GeForce GTX 680, the main competitor of the AMD Radeon HD 7970, went on sale. We will tell you what hides under the Kepler code name, how many cores the new die carries and at what frequency it operates. And, of course, we will answer the eternal question: who is faster, the "greens" or the "reds"?

Reorganization

The Fermi architecture, which NVIDIA has bet on for the last couple of years, has exhausted itself, and Kepler has replaced it. At first glance everything looks the same: stream processors are combined into SM modules, which in turn form GPC clusters, from which the GPU is assembled. But where each SM module in GF110 (GeForce GTX 580) carried 32 CUDA cores, Kepler packs six times more: 192. The fattened blocks, now called SMX, also gained auxiliary elements: there are 16 texture units instead of 4, and 32 special-function units (SFUs) against 4 in the previous flagship.

A pair of SMXs and one rasterizer form a GPC (Graphics Processing Cluster), and four such clusters form the basis of the GK104 chip, the heart of the GTX 680. The new GPU thus carries 1536 cores and 128 texture units, where GF110 made do with 512 processors and 64 TMUs. The number of PolyMorph Engines responsible for tessellation has decreased, though: 8 versus 16 in the previous generation. Too few? Not at all: the developers not only reworked how the PolyMorph Engine operates but also added megahertz, and according to the engineers, overall tessellation performance grew by about 28%.
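Those headline figures follow directly from the block arithmetic; a quick check:

```python
# GK104 block arithmetic: 4 GPCs, each holding 2 SMX units.
GPCS = 4
SMX_PER_GPC = 2
CORES_PER_SMX = 192
TMUS_PER_SMX = 16

print("CUDA cores:", GPCS * SMX_PER_GPC * CORES_PER_SMX)    # 1536
print("texture units:", GPCS * SMX_PER_GPC * TMUS_PER_SMX)  # 128
```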

What Kepler did abandon is the separate shader clock. In Fermi the stream processors ran at twice the speed of the rest of the chip; now the clocks are equal. On the one hand this is good: the main elements of GK104 received a hefty dose of adrenaline. On the other, the throughput of each individual CUDA core has noticeably dropped.

Another loss is the memory bus. GK104 carries four 64-bit GDDR5 controllers, so the bus is 256 bits wide versus 384 bits for GF110 and for Tahiti in the Radeon HD 7970. There are fewer ROPs now too: 32 instead of the GTX 580's 48.

New opportunities

Kepler introduces several new technologies. The first is GPU Boost. It monitors temperature and power consumption and, as long as the readings stay below critical values, automatically raises the die's clock speed and voltage. All of this works at the hardware level and does not switch off even during manual overclocking.
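GPU Boost itself is a closed hardware loop, but the behavior described here can be illustrated with a toy model; the step size and limits below are illustrative assumptions, not NVIDIA's real parameters:

```python
# Toy model of a GPU Boost style control loop: raise the clock while power
# and temperature stay under their limits, back off otherwise. The step
# size and limits here are illustrative, not NVIDIA's real parameters.
BASE_MHZ, STEP_MHZ = 1006, 13
POWER_LIMIT_W, TEMP_LIMIT_C = 195, 80

def next_clock(clock_mhz: int, power_w: float, temp_c: float) -> int:
    """Pick the clock for the next interval from current telemetry."""
    if power_w < POWER_LIMIT_W and temp_c < TEMP_LIMIT_C:
        return clock_mhz + STEP_MHZ                 # headroom left: boost
    return max(BASE_MHZ, clock_mhz - STEP_MHZ)      # over a limit: back off

clock = BASE_MHZ
for power, temp in [(150, 62), (168, 68), (185, 74), (199, 81)]:
    clock = next_clock(clock, power, temp)
    print(clock, "MHz")
```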

The second is the ability to connect four monitors at once. For 3D Vision Surround (gaming across three displays) an SLI bundle is no longer needed; a single GTX 680 is enough.

The third is a discrete video-encoding block, NVENC. NVIDIA cards were good at processing video streams before, but they used the processing power of their CUDA cores to do it. Now the cores play only a supporting role: the new module performs the main calculations, and does so four times faster than Fermi. NVENC's weak point is that it is only familiar with the H.264 codec.

The fourth is a pair of fresh anti-aliasing algorithms, TXAA and FXAA. The latter is a "cheap" replacement for traditional MSAA: the performance impact is minimal, but you pay for it with a slight blurring of the picture. TXAA is a more serious approach; according to the developers, it not only surpasses MSAA in quality but also requires fewer resources.

Finally, the fifth is the adaptive vertical sync mode. The idea is simple: VSync is engaged only while the frame rate exceeds the screen's refresh rate. As a result, dips below that coveted threshold no longer lead to a sharp drop in speed.

Cold-blooded

Despite the threefold increase in stream processors, the transistor count grew only modestly, from 3 to 3.54 billion. Thanks to the new 28-nm process technology, both the die area and the power consumption came out small. Even at maximum load the TDP does not exceed 195 W, and that is with the base frequency at a fantastic 1006 MHz, rising to 1058 MHz under GPU Boost! For comparison, a Radeon HD 7970 running at 925 MHz and carrying 4.31 billion transistors consumes 250 W.

GK104's cool temper allowed the board design to be simplified. The GTX 680 does without the 8-pin PCIe connector so beloved of top solutions; a pair of 6-pin sockets proved sufficient. Only four phases power the die, and judging by the board wiring, a fifth was dropped at the last moment. At 25 cm, the card is slightly shorter than flagship GeForce boards of previous generations.

The newcomer carries eight GDDR5 chips with a total capacity of 2 GB. The memory runs at a record 6008 MHz effective, which partially compensates for the narrow bus. The back panel holds one HDMI 1.4a, one DisplayPort 1.2 and a pair of DVI connectors. The interface is PCIe 3.0, backward compatible with PCIe 2.0.

A dual-slot turbine handles the cooling: a large aluminum heatsink with three heat pipes and a cylindrical impeller, unusual for the "greens", built with sound-absorbing materials.

First in the world

We received the board directly from NVIDIA's Russian office. Its appearance is standard: the entire surface is covered by a black plastic shroud decorated with company logos. In size the GTX 680 really is a little smaller than its predecessor and looks very neat. Among the non-standard touches we note the unusual arrangement of the power contacts: the two 6-pin sockets sit not in a row but one above the other.

For the tests we assembled a bench based on a Gigabyte GA-X58A-UD3R motherboard with a Core i7-920 processor and three 2 GB sticks of Kingston HyperX DDR3-1666 memory. Windows 7 Ultimate 64-bit and all programs resided on a Kingston SSDNow drive.

The list of applications included 3DMark11, Unigine Heaven Benchmark 2.5, Just Cause 2, DiRT 2, Aliens vs. Predator, Batman: Arkham City and Total War: Shogun 2. The competitors: GeForce GTX 590, GTX 580, AMD Radeon HD 6970 and HD 7970.

Who is first?

In 3DMark11 the newcomer was 17% faster than the HD 7970 and came close to the results of the dual-GPU GTX 590! Kepler also took to the resource-hungry Unigine Heaven Benchmark 2.5, leading AMD by 26%. In games the numbers are even more interesting.

In Aliens vs. Predator the greens' flagship hit 57.7 fps, beating the GTX 580 and HD 6970 and trailing the HD 7970 and GTX 590 by 1.5 and 15.8 frames respectively. The newcomer also lost in Just Cause 2, which is optimized for AMD hardware: the gap to the HD 7970 was 21.7 fps. It redeemed itself in DiRT 2, where the "red" leader fell 37.5 fps behind.

NVIDIA's next win came in Batman: Arkham City. With PhysX at maximum, MSAA 8x and a resolution of 1920x1080, the hero of our review produced a quite playable 26 fps, while the HD 7970 barely reached 20 frames. Disabling PhysX did not change the picture: with MSAA 4x the gap was 4-6 fps, and after switching to 8x it grew to 15-18 fps. The GTX 680's triumphant march was stopped by Total War: Shogun 2: the senior GeForce managed 23.2 fps, the HD 7970 24.9.

The last word

Let's face it: we expected more from the GeForce GTX 680. Look at the overall fps ratio: only 8% faster than the HD 7970. The same difference separated the GTX 580 and HD 6970 two years ago. Yes, in many applications the "green" flagship is head and shoulders above its competitor, but those are all NVIDIA-optimized games. In impartial tests, such as Aliens vs. Predator, Batman: Arkham City and Total War: Shogun 2, the rivals are on a par.

Today's victory for the GTX 680 may well be down to its late release. Had the GeForce appeared before the HD 7970, AMD's engineers would have sweated, but they would have pulled Graphics Core Next up to Kepler's frequencies. What that would look like, see our tables: we overclocked the HD 7970 to the average GK104 speed of 1035 MHz and obtained results similar to the GTX 680's.

It is impossible to call the GTX 680 the undisputed leader. In performance the card is not far ahead of the HD 7970; Kepler's real advantage is perhaps its feature set. Automatic overclocking within the stock TDP, original anti-aliasing modes, gaming on three monitors, a separate video-encoding block, PhysX, 3D Vision, a sound-insulated cooling system, and all of it at an extremely modest 195 W. Otherwise AMD and NVIDIA are roughly equal, and everything now depends on competent support, driver tuning and, of course, each company's pricing policy. The HD 7970 can already be bought for 16,500 rubles, while the official price tag of the GTX 680 is 17,990 rubles.

Thinking out loud

If we step back from Kepler's results and look more closely at the specifications, there is a lot of interest here. The GTX 680 looks odd for a high-end graphics card. The first thing that catches the eye is the noticeably simplified memory subsystem: only 256 bits against 384 for the GTX 580. And there is more. The newcomer's power consumption is surprisingly low at 195 W; while that fits the current trend of raising performance per watt, we all know NVIDIA has never shied away from big, hot chips that consume 250 W. The suspiciously modest 2 GB of GDDR5 also raises questions for a board meant to drive an insane 5760x1080 across three monitors. Speaking of which, GK104 supports only four displays at once, although it could well have handled six, like AMD's senior parts. Finally, the increase in transistor count is small and the die area modest, and NVIDIA has never aspired to minimalism.

Drawing parallels with past generations, one gets the feeling that we are looking at a cut-down version of the die, and that something more powerful may well follow. Of course, these are just thoughts out loud, and perhaps NVIDIA really has gone against its own principles, but the GTX 680 looks very much like some kind of GTX 670.

In the main part of the article, we mentioned several new technologies. Let's talk about them in more detail. Let's start with Adaptive VSync.

Every monitor has its own refresh rate; for modern LCD panels it is 60 Hz, or 120 Hz with 3D stereo support. These figures set the maximum number of frames the screen can show in one second. In some games, however, the fps exceeds that limit. On the one hand, that is good: no slowdowns. On the other, running past the refresh rate results in image tearing.

VSync adjusts the frame rate to the monitor's capabilities. If the panel is limited to 60 Hz, the video card will not output more than 60 fps, so there is no image distortion. But the approach has one significant drawback: when the GPU cannot deliver the required number of frames, VSync drops the cap to 30, then 20, then 10 fps, paying no attention to the fact that the card could easily hold, say, 40 fps. NVIDIA has solved this problem.

Adaptive VSync monitors system performance, and if the speed drops below 60 fps it switches VSync off so the board is not held back. As soon as the game picks up speed again, the cap returns. Bottom line: no tearing and no artificial stutters.
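The decision rule described here is simple enough to write down; a sketch of the behavior (not NVIDIA's driver code), assuming a 60 Hz panel:

```python
# Adaptive VSync in miniature: cap the frame rate only while the GPU can
# actually sustain the monitor's refresh rate; otherwise leave it uncapped
# instead of forcing a drop to 30/20/10 fps as classic VSync does.
REFRESH_HZ = 60

def displayed_fps(rendered_fps: float) -> float:
    if rendered_fps >= REFRESH_HZ:
        return REFRESH_HZ        # VSync on: no tearing above 60 fps
    return rendered_fps          # VSync off: no artificial slowdown

for fps in (75, 60, 40, 25):
    print(fps, "->", displayed_fps(fps))
```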

Anti-aliasing

Next on the agenda are the new anti-aliasing methods designed to replace the usual multisample anti-aliasing. To understand what is wrong with standard MSAA, you need to understand how it works.

The technology was invented to get rid of jagged edges on objects. The unpleasant stair-stepping appears for a very simple reason. An LCD panel can be imagined as a sheet of graph paper whose small cells can each be painted only one color. Try drawing a house under those rules: horizontal and vertical lines come out straight, while slanted ones turn into staircases. Everything is the same in a computer. The video card turns shapes into pixels (the ROP blocks do this) and fills the screen with them, and since the display has a finite number of dots, characteristic "teeth" appear along sloped edges. Getting rid of them is relatively simple: the pixels adjacent to an object's edges are repainted in transitional colors, and anti-aliasing is what determines those colors.

At first, SuperSampling served this purpose: the scene was rendered at a resolution 2/4/8 times higher than necessary, then the picture was scaled down to the panel's capabilities and sent to the monitor; the jaggies disappeared. The one problem was that this approach consumed an indecent amount of resources, so Multisampling soon replaced it. Using clever algorithms, it processed only the problem areas and left the rest of the image untouched, which raised performance significantly.
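Supersampling really is that direct; a minimal NumPy sketch that "renders" at twice the target resolution and box-filters each 2x2 block down to one output pixel:

```python
import numpy as np

# Supersampling in miniature: average each 2x2 block of a frame rendered at
# twice the target resolution down to a single output pixel (a box filter).
def downsample_2x(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    return frame.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

hi_res = np.random.rand(8, 8, 3)   # stand-in for a frame rendered at 2x
lo_res = downsample_2x(hi_res)
print(lo_res.shape)                # (4, 4, 3): jagged edges are averaged out
```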

Unfortunately, as polygon counts grew and new, complex effects appeared, the technique lost its edge: the loss in speed no longer matched the gain in quality. Engines with deferred rendering, which first compute the geometry and only then apply the lighting, created further problems; a vivid example is Metro 2033, where enabling MSAA caused a monstrous drop in fps. In short, modern anti-aliasing has enough difficulties that manufacturers are looking for an alternative.

The first attempt was Morphological AA, announced with the AMD Radeon HD 6970. The video card renders the frame at standard resolution and then applies a light "photoshop" pass that blurs the image. The jaggies disappear, but texture clarity is lost. NVIDIA now uses the same technique under the name FXAA.

TXAA, however, is genuinely new. It combines traditional anti-aliasing methods with post-processing based on HDR data and information from previous frames. The final quality is claimed to exceed MSAA 8x while consuming fewer resources than MSAA 4x.

Note that these innovations are not marketing games. We ran a comparison of MSAA against FXAA: the performance difference was 10%, and you can judge the result from the screenshots below. FXAA does not look quite as good as traditional MSAA, but in motion the difference is almost imperceptible.

Unfortunately, we were unable to test the brand-new TXAA: it has to be built into the game. Support in future projects has already been announced by Epic, Crytek, Gearbox and CCP.

Overclocking

GPU Boost, implemented in the GTX 680, is something video cards have needed for a long time. CPUs have had automatic overclocking since the first Core i7; it reached GPUs only with the GTX 580, and even then in truncated form. That technology was meant as overheating protection: it lowered clocks when the TDP was exceeded. This let NVIDIA shield its chips from deliberate heaters like FurMark and raise the final operating frequency considerably. The downside of the old approach was the implementation: everything was controlled by drivers reacting to a predefined list of applications.

AMD fixed this flaw first. PowerTune, introduced with the Radeon HD 6970, also fought overheating, but there the temperature was tracked by sensors built into the die, which proved far quicker than software solutions and did not depend on specific programs.

With the GTX 680 the engineers went even further: the hardware now manages overclocking as well. When a game cannot fully load the chip and there is TDP headroom, the board's performance rises; as soon as the maximum power draw is reached, the card slows down. Interestingly, the declared 1058 MHz is not the limit for GPU Boost: in our tests Kepler often climbed to 1100 MHz.

The only negative is that this "doping" cannot be switched off: GPU Boost keeps working even when the base frequency is raised. Having set the base clock to 1100 MHz, we watched GK104 reach 1300 MHz! True, only professional testers will complain about that; a free boost never hurts.

Physics

Another piece of news is the PhysX update: the GTX 680 brings two fresh effects. The first is a revised hair simulation. Where the GTX 580 could show individual strands, at the Kepler presentation we were shown a tech demo with a lifelike yeti whose every hair reacted to external influences such as wind or stroking.

The second is object destruction. NVIDIA taught the engine to break objects in real time: not merely to play back animations of pre-built fragments, but to naturally shatter entire columns, texture the resulting pieces and scatter them according to all the laws of physics.

Table 1. Specifications

[The specification table compared the NVIDIA GeForce GTX 680, AMD Radeon HD 7970, HD 7950, HD 6970, NVIDIA GeForce GTX 580 and GTX 590 by transistor count, process technology, number of stream processors, core and shader frequencies, memory type and size, memory frequency, bus width, texture and rasterization units, power consumption, board length and interface; the numeric values were lost.]

Prices for January 2012:

  • NVIDIA GeForce GTX 680 - 17,990 rubles
  • AMD Radeon HD 7970 - 16,500 rubles
  • AMD Radeon HD 7950 - 15,000 rubles
  • AMD Radeon HD 6970 - 10,000 rubles
  • NVIDIA GeForce GTX 580 - 12,500 rubles
  • NVIDIA GeForce GTX 590 - 23,500 rubles

Table 2. Synthetic tests

[Results of 3DMark11 and Unigine Heaven Benchmark 2.5 for the NVIDIA GeForce GTX 680, XFX R7970 Double Dissipation (925/5500 MHz), ASUS HD 7970 DirectCU II Top (1035/5500 MHz), VTX Radeon HD 6970, ZOTAC GeForce GTX 580 AMP! Edition (772/4008 MHz) and Point of View GeForce GTX 590; the scores were lost.]

Table 3. Gaming benchmarks (fps)

[Results for the same six cards in Aliens vs. Predator (DX11), DiRT 2 (DX11), Just Cause 2, Batman: Arkham City (DX11, with and without PhysX) and Total War: Shogun 2 (DX11), tested at 1680x1050 and 1920x1080 with AF 16x and AA 2x/4x/8x at High, Very High and Ultra settings; the frame rates were lost.]

Table 4. Price/performance ratio

[Performance-per-price comparison of the NVIDIA GeForce GTX 680, AMD Radeon HD 7970 (925/5500 and 1035/5500 MHz), AMD Radeon HD 6970, NVIDIA GeForce GTX 580 and GTX 590; the values were lost.]

The first news of an original video card from Gigabyte Technology Co., Ltd. with the WindForce 5X cooler appeared in early spring 2012. Back then the novelty genuinely surprised everyone with its very original approach to cooling, using five small fans at once in the cooler design. WindForce 5X seemed so extraordinary that many doubted Gigabyte cards with such a cooler would ever go on open sale. Fortunately those doubts proved unfounded, and in mid-summer the company launched two video cards with WindForce 5X at once: the Gigabyte Radeon HD 7970 SOC WindForce 5X (GV-R797SO-3GD) and the Gigabyte GeForce GTX 680 SOC WindForce 5X (GV-N680SO-2GD). True, the first has already been withdrawn from sale following the appearance of the faster "GHz Editions", but the second is freely available, albeit not cheap. It is the one we will study and test today.

1. Overview of the video card Gigabyte GeForce GTX 680 SOC WindForce 5X 2 GB (GV-N680SO-2GD)

Specifications

The technical characteristics of the Gigabyte video card are shown in the table in comparison with the reference NVIDIA GeForce GTX 680 video card:



Packaging and accessories

The video card comes in a large cardboard box with key product information on the front:


On the back is a detailed description of the cooling system, technologies used, a proprietary utility for overclocking and monitoring, as well as other, less significant information.

The bundle includes two adapter cables from PATA power connectors to eight-pin connectors, a CD with drivers and utilities, an installation guide and an advertising booklet:


Frankly, the bundle is very modest for a product in the upper price segment.

The Gigabyte GeForce GTX 680 SOC WindForce 5X is available in Taiwan for $549. Product warranty - 3 years.

Features

The video card has dimensions of 295x120x64 mm and looks monumental, resembling a kind of armadillo:


Instead of the usual fans or a turbine, the front of the card is covered by a casing of metal and plastic with decorative slots, the manufacturer's name and the Super OverClock label. The reverse side of the card is likewise covered by a perforated metal plate.

Above is a row of five small fans, partially covered by a metal plate, and below is a massive heatsink with heat pipes:


The Gigabyte GeForce GTX 680 SOC WindForce 5X features two DVI outputs (one with Dual Link), one HDMI 1.4a output, and one DisplayPort 1.2 output:


On the other side of the card, three fan connectors are immediately visible, along with an interface header of unclear purpose.

Two MIO connectors for building various SLI configurations sit in their standard place at the top of the printed circuit board, near the card's outputs:


The standard six- and eight-pin power connectors, however, were replaced by Gigabyte's engineers with two eight-pin ones, to provide the necessary current headroom at increased frequencies. The recommended power supply for a system with one such card is 650 W, which is 100 W above the specification for the reference NVIDIA GeForce GTX 680.

The Gigabyte GeForce GTX 680 SOC WindForce 5X printed circuit board follows an original design with a reinforced power subsystem:


Eight phases supply the GPU, and three more power the memory and its circuitry:


The card belongs to the Super Overclock series, built around high-quality, long-life components: Japanese capacitors and low-ESR NEC/TOKIN Proadlizer film capacitors mounted on the back of the PCB:


An eight-phase digital PWM controller, a CHiL CHL8318, manages GPU power:


At the top of the board is contact group J13, a set of points for measuring the V-PLL, V-MEM and V-GPU operating voltages. It is not as convenient as MSI's implementation, but few video cards offer this feature at all.

In addition to all of the above, the card carries two BIOS versions, switched by a square backlit button near the board's outputs. The regular version (blue backlight) simply hardwires the increased frequencies, while the so-called extreme BIOS (red backlight) removes all current limits and raises the GPU power limit to 300%. The latter is intended for extreme overclocking at ultra-low temperatures.

Gigabyte's Super Overclock GPUs pass a special "GPU Gauntlet" selection process with several cycles of stress testing, so only the highest-quality chips with good overclocking potential end up on these cards. Our GK104 die was produced in Taiwan in the 25th week of 2012 (mid-June) and belongs to revision A2:


Its base frequency is raised from 1006 to 1137 MHz, an immediate 13% increase, and under heavy load it can climb to 1202 MHz. In all other respects this is an ordinary Kepler, with 1.175 V in 3D mode and 0.988 V in 2D mode, where the frequency drops to 324 MHz.

The card carries 2 GB of GDDR5 memory manufactured by Samsung and marked K4G20325FD-FC03:


Its effective frequency is raised from the nominal 6008 MHz to 6200 MHz, a modest +3.2% but a pleasant bonus nonetheless. In 2D mode the video memory frequency drops to 648 MHz.

As a result, the specifications of the Gigabyte GeForce GTX 680 SOC WindForce 5X are as follows:


We add that the BIOS of the Gigabyte GeForce GTX 680 SOC WindForce 5X can be downloaded from our file archive (BIOS Gigabyte GeForce GTX 680 SOC WindForce 5X). Now let's move on to the WindForce 5X cooling system.

The WindForce 5X cooling system

Whatever one makes of the Gigabyte GeForce GTX 680 SOC WindForce 5X printed circuit board, we have, by and large, seen all of it before in products from ASUS, MSI and Gigabyte itself. What is truly unique about this product is the WindForce 5X cooling system. The company makes no secret of its pride in the design, and it really is original: it combines a vapor chamber, nine copper heat pipes 6 mm in diameter, a giant aluminum heatsink and five 40-mm fans:



Despite its considerable mass (the card with the cooler weighs 1622 grams!), it is attached to the printed circuit board with only six screws: four at the corners of the GPU and two on the heatsink for the power components:


Removing the protective cover is a little harder, but with it off it becomes clear that the cover is more decorative than protective:


As you can see, the sides of the heatsink are closed off by fins bent over and interlocked with one another, while the long steel bar serves only as a mount for the casing.

With the heatsink 45 mm thick, a vapor chamber alone clearly could not spread the heat evenly across the fins, so Gigabyte's engineers added nine 6-mm copper heat pipes. Four run one way and five the other, and together they pierce the upper part of the fins evenly:


The cooler's base contacts the GPU through a thick gray thermal paste, while the heatsink contacts the memory chips and power components through thermal pads. All joints in the heatsink are soldered.

The five 40-mm fans mounted on the upper edge of the heatsink are made by Power Logic (model PLA04015S12HH), run on rolling bearings and are controlled by pulse-width modulation (PWM):


The fans are installed "upside down": in operation they pull hot air out of the heatsink, drawing it in from below. So if there is no exhaust fan on the side wall of your case, the air heated by the video card will stay inside and will have to be removed by the other case fans.

Let's check how efficient the WindForce 5X cooler is. For this we ran five loops of the very resource-intensive game Aliens vs. Predator (2010) at maximum graphics quality, 2560x1440, 16x anisotropic filtering and 4x MSAA:



Temperatures and other parameters were monitored with MSI Afterburner 2.2.5 and the GPU-Z utility 0.6.6. All tests were carried out in a closed system case (its configuration is listed in the next section of the article) at a room temperature of 24 degrees Celsius. The cooler's efficiency was tested before the card was disassembled, with the factory thermal interface in place.
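MSI Afterburner and GPU-Z are Windows GUI tools; for scripted logging, the same core temperature can be polled through the driver's nvidia-smi utility (a sketch assuming nvidia-smi is on the PATH and supports these query flags, as modern NVIDIA drivers do):

```python
import subprocess
import time

# Poll the GPU core temperature once per second through nvidia-smi.
def gpu_temp_c() -> int:
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=temperature.gpu",
        "--format=csv,noheader",
    ])
    return int(out.decode().strip())

for _ in range(5):
    print(gpu_temp_c(), "C")
    time.sleep(1)
```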

Let's look at the efficiency of the Gigabyte GeForce GTX 680 SOC WindForce 5X cooler in automatic mode and at maximum fan speed:


Automatic mode / Maximum speed


With automatic fan control, the fans spun up to 7260 rpm (according to monitoring) and the GPU warmed to 68 degrees Celsius, an excellent result for an overclocked GK104. With the fans forced to maximum (9750 rpm according to monitoring), the GPU temperature was only 54 degrees. For an overclocked GeForce GTX 680 that is almost comical. For the user's ears, though, it is no joke.

Unfortunately, there will be no traditional noise measurements in today's article, for two reasons. First, the five fans connect to the printed circuit board through three connectors of two different types, so we could not hook them up to our monitoring and control rig. Second, one fan was defective, the very one photographed above. At start-up its bearing howled so badly that it was extremely hard to bear; it calmed down somewhat after warming up, but in the end we found no better solution than to stop it forcibly. The temperature tests, by the way, were run with all five fans active. Subjectively, without the rattling fan, the WindForce 5X cooler is quiet up to 4000-4500 rpm and comfortable up to 6000 rpm; above that mark it clearly stands out against a quiet system unit. As you remember, in automatic mode the fans spin up to 7260 rpm, so we cannot call the Gigabyte GeForce GTX 680 SOC WindForce 5X a quiet card. Only in 2D mode, at around 2000 rpm, is it genuinely inaudible, though that matters little for a gaming product.

Overclocking potential

We began studying the overclocking potential of the Gigabyte GeForce GTX 680 SOC WindForce 5X at the nominal voltage of 1.175 V and the maximum power limit, with the cooling system in automatic mode. We managed to raise the base GPU frequency by 65 MHz and the memory by 1120 effective megahertz:


As a result, the card's frequencies reached 1202/1267/7320 MHz (base/boost/memory):


This GPU overclocking result is quite normal for a GeForce GTX 680, and we could do no better either by enabling the card's extreme BIOS or by raising the GPU voltage. The memory frequency we reached, however, is genuinely pleasing.

After overclocking, the GPU's maximum temperature rose by 3 degrees Celsius to 71 degrees, and the fan speed increased by 150 rpm to a final 7410 rpm:



To round off the review, a few words about Gigabyte's proprietary OC Guru II utility, which can both monitor all the parameters of the Gigabyte GeForce GTX 680 SOC WindForce 5X and control them:


Besides changing frequencies and voltages and monitoring them, the utility can fine-tune PWM frequencies and stress-load the GPU, memory and PCI-Express bus:


A separate window lets you adjust the fan speed manually as a function of temperature, or choose one of three presets:



Power consumption

System power consumption with the various video cards was measured with a Zalman ZM-MFC3 multifunction panel, which shows the draw of the whole system "from the wall" (excluding the monitor). Measurements were taken in 2D mode, during normal work in Microsoft Word and web surfing, and in 3D mode, where the load was created by three passes of the test from Metro 2033: The Last Refuge at 2560x1440 with maximum graphics quality settings.

Let's look at the results:



Interestingly, the system with the Gigabyte card proved slightly more economical than the system with the nearly identical ASUS card. Even overclocked to 1202/7320 MHz, the Gigabyte GeForce GTX 680 SOC WindForce 5X consumes slightly less than the ASUS GeForce GTX 680 DirectCU II TOP at its stock 1137/6008 MHz. We also note that the system with the Radeon HD 7970 GHz Edition (a Sapphire card) remains more power-hungry than either GTX 680 system, while the system with the dual-GPU GeForce GTX 690 quite naturally became today's leader in consumption. In idle mode the power draw of all the systems differs only minimally.

2. Test configuration, tools and testing methodology

Video cards were tested on a system with the following configuration:

Motherboard: Intel Siler DX79SI (Intel X79 Express, LGA 2011, BIOS 0537 dated 07/23/2012);
CPU: Intel Core i7-3960X Extreme Edition 3.3 GHz (Sandy Bridge-E, C1, 1.2 V, 6x256 KB L2, 15 MB L3);
CPU cooling system: Phanteks PH-TC14PE (two 140-mm Corsair AF140 Quiet Edition fans at 900 rpm);
Thermal interface: ARCTIC MX-4;
RAM: DDR3 4x4 GB Mushkin Redline (2133 MHz, 9-10-10-28, 1.65 V);
Video cards:

NVIDIA GeForce GTX 690 2x2 GB 256-bit GDDR5, 915/6008 MHz;
Gigabyte GeForce GTX 680 SOC WindForce 5X 2 GB 256-bit GDDR5, 1137/6200 and 1202/7320 MHz;
ASUS GeForce GTX 680 DirectCU II TOP 2 GB 256-bit GDDR5, 1137/6008 MHz;
Sapphire Radeon HD 7970 OC Dual-X 3 GB 384-bit GDDR5, 1050/6000 MHz;

System drive: 256 GB Crucial m4 SSD (SATA-III, CT256M4SSD2, BIOS v0009);
Drive for programs and games: Western Digital VelociRaptor (SATA-II, 300 GB, 10,000 rpm, 16 MB, NCQ) in a Scythe Quiet Drive 3.5" enclosure;
Backup drive: Samsung Ecogreen F4 HD204UI (SATA-II, 2 TB, 5400 rpm, 32 MB, NCQ);
Case: Antec Twelve Hundred (front wall: three Noiseblocker NB-Multiframe S-Series MF12-S2 at 1020 rpm; rear: two Noiseblocker NB-BlackSilentPRO PL-1 at 1020 rpm; top: stock 200-mm fan at 400 rpm);
Control and monitoring panel: Zalman ZM-MFC3;
Power supply: Seasonic SS-1000XP Active PFC F3 (1000 W), 120-mm fan;
Monitor: 27" Samsung S27A850D (DVI-I, 2560x1440, 60 Hz).

We will compare the Gigabyte card's performance against the very fast ASUS GeForce GTX 680 DirectCU II TOP at its rated frequencies and against a Sapphire Radeon HD 7970 OC Dual-X overclocked to GHz Edition level:




In addition, a reference NVIDIA GeForce GTX 690 at its nominal frequencies was added to the tests as a speed benchmark:




To reduce the dependence of graphics card performance on platform speed, the 32-nm six-core processor was overclocked to 4.625 GHz using a multiplier of 37, a reference frequency of 125 MHz and the "Load-Line Calibration" function, with its voltage raised to 1.49 V in the motherboard BIOS:



Hyper-Threading was enabled. The 16 GB of RAM ran at 2 GHz with 9-11-10-28 timings at 1.65 V.

Testing began on November 3, 2012 under Microsoft Windows 7 Ultimate x64 SP1 with all critical updates as of that date and the following drivers installed:

Intel chipset drivers: 9.3.0.1025 WHQL dated 10/25/2012;
DirectX End-User Runtimes libraries, released November 30, 2010;
AMD GPU drivers: Catalyst 12.11 Beta dated 10/23/2012, plus Catalyst Application Profiles 12.10 (CAP1);
NVIDIA GPU drivers: GeForce 310.33 beta dated 10/23/2012.

Video card performance was tested at two resolutions, 1920x1080 and 2560x1440 pixels, in two graphics quality modes: "Quality + AF16x" (default driver texture quality with 16x anisotropic filtering enabled) and "Quality + AF16x + MSAA 4x (8x)" (16x anisotropic filtering plus full-screen anti-aliasing at 4x or 8x, in cases where the average frame rate remained high enough for comfortable play). Anisotropic filtering and anti-aliasing were enabled directly in the game settings; when a game lacked those settings, the parameters were changed in the Catalyst and GeForce driver control panels. Vertical sync was disabled. No other changes were made to the driver settings.
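The resulting test matrix (two resolutions by two quality modes) can be enumerated mechanically; a small sketch:

```python
from itertools import product

# Enumerate the benchmark configurations used by this methodology.
resolutions = ["1920x1080", "2560x1440"]
modes = ["Quality + AF16x", "Quality + AF16x + MSAA 4x (8x)"]

for resolution, mode in product(resolutions, modes):
    print(f"{resolution} | {mode}")
```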

Since we have tested the GeForce GTX 680 and its competitor many times, today's list of test applications was trimmed to one semi-synthetic suite and eight games, chosen as the most resource-intensive and the most recent:

3D Mark 2011(DirectX 11) - version 1.0.3.0, "Performance" and "Extreme" settings profiles;
Metro 2033: The Last Refuge(DirectX 10/11) - version 1.2, official test used, "Very High" quality settings, tessellation, DOF enabled, AAA anti-aliasing used, "Frontline" scene double pass;
Total War: SHOGUN 2 - Fall of the Samurai(DirectX 11) - version 1.1.0, built-in test (Battle of Sekigahara) at maximum graphics quality settings and using one of the MSAA 4x modes;
Crysis 2(DirectX 11) - version 1.9, used Adrenaline Crysis 2 Benchmark Tool v1.0.1.14 BETA, "Ultra High" graphics quality profile, high-resolution textures activated, double-cycle demo recording on the "Times Square" stage;
Battlefield 3(DirectX 11) - version 1.4, all graphics quality settings on "Ultra", double sequential passage of the scripted scene from the beginning of the "Hunting" mission lasting 110 seconds;
Sniper Elite V2 Benchmark(DirectX 11) - version 1.05, used Adrenaline Sniper Elite V2 Benchmark Tool v1.0.0.2 BETA maximum graphics quality settings ("Ultra"), Advanced Shadows: HIGH, Ambient Occlusion: ON, Stereo 3D: OFF, double sequential test run;
Sleeping Dogs(DirectX 11) - version 1.5, used Adrenaline Sleeping Dogs Benchmark Tool v1.0.0.3 BETA, maximum graphics quality settings for all items, Hi-Res Textures pack installed, FPS Limiter and V-Sync disabled, double sequential test run with total anti-aliasing at the "Normal" level and at the "Extreme" level;
F1 2012(DirectX 11) - update 9, used Adrenaline Racing Benchmark Tool v1.0.0.13, graphics quality settings at "Ultra", Brazilian track "Interlagos" 2 laps, 24 cars, light rain, camera mode - "Bonnet";
Borderlands 2(DirectX 9) - version 1.1.3, built-in test at maximum graphics quality settings and maximum PhysX level, FXAA anti-aliasing is enabled.

Where a game could record the minimum frame rate, that figure is also shown on the diagrams. Each test was run twice, and the better of the two results was taken as final, but only if the difference between them did not exceed 1%; if it did, the test was repeated at least once more to obtain a reliable result.
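That acceptance rule is easy to state precisely; a sketch in which run_test is a hypothetical stand-in for a single benchmark pass:

```python
import random

# Best-of-two rule: accept the better of two runs only if they agree within
# 1%; otherwise keep adding runs. run_test is a hypothetical stand-in for a
# single benchmark pass returning an fps figure.
def best_of_runs(run_test, tolerance=0.01, max_extra=5) -> float:
    a, b = run_test(), run_test()
    for _ in range(max_extra):
        if abs(a - b) / max(a, b) <= tolerance:
            break
        a, b = max(a, b), run_test()   # repeat until two runs agree
    return max(a, b)

print(best_of_runs(lambda: 60 + random.uniform(-0.3, 0.3)))
```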





[Performance diagrams for each benchmark, ending with Borderlands 2.]


We did not comment on each diagram since, by and large, they show nothing new. Overall, the Gigabyte GeForce GTX 680 SOC WindForce 5X is slightly faster than the ASUS GeForce GTX 680 DirectCU II TOP thanks to its faster memory, though the difference is negligible. Overclocking the Gigabyte adds 5-7% of performance, which is not much either, but such a free gain should not be ignored. Against the Sapphire Radeon HD 7970 OC Dual-X at 1050/6000 MHz on the new drivers, the Gigabyte card wins in 3DMark 2011, Crysis 2, Borderlands 2, and in the anti-aliased modes of Total War: SHOGUN 2 and F1 2012; the two cards are on a par only in Battlefield 3. The Radeon, in turn, leads in Metro 2033: The Last Refuge, Sniper Elite V2, Sleeping Dogs, and in the non-anti-aliased modes of Total War: SHOGUN 2 and F1 2012. As for the dual-GPU NVIDIA GeForce GTX 690 on the new drivers and in the new tests, we highlight the lack of SLI support in F1 2012; the remaining tests brought no surprises.

Conclusion


The Gigabyte GeForce GTX 680 SOC WindForce 5X certainly turned out to be a very interesting graphics card. The well-designed PCB with an eight-phase GPU power system and the high-quality "Super Overclock" components should ensure stable overclocking and long service life. The GPU binned through the special "GPU Gauntlet" program, the additional BIOS tuned for extreme overclocking and the voltage measurement points will surely attract extreme overclockers, while the high factory frequencies, the highly efficient WindForce 5X cooler and a competitive price should appeal to most ordinary users.

At the same time, we cannot call the cooling system of this stylish and outwardly monumental product quiet. Moreover, to unlock its potential it needs a spacious case with well-organized ventilation. Products of this class should also come with a richer and more attractive bundle, and, as a finishing touch, Gigabyte's flagship could have been equipped with four gigabytes of memory instead of the current two. Nevertheless, our overall impression of the Gigabyte GeForce GTX 680 SOC WindForce 5X is positive. The choice, as always, is yours.

On March 22, 2012, NVIDIA presented a brand-new product: the Kepler-architecture NVIDIA GeForce GTX 680 graphics card. Another technological breakthrough from its engineers brought more energy efficiency and more performance and, of course, took more money from the public. I can evaluate the capabilities of the new-generation card using the ASUS GeForce GTX 680 as an example.

The ASUS video card comes in a cardboard box whose background is styled as aged metal torn apart by the claws of an unknown predator. An icon reminds us that the proprietary GPU Tweak utility will come to the rescue if you want to overclock and fine-tune the video card in real time. In addition, ASUS mentions the 2 gigabytes of video memory and support for DirectX 11 and the PCI-Express 3.0 bus.

The reverse side, as usual, is reserved for describing proprietary features and technologies, including GPU Boost and adaptive vertical sync. GPU Boost is a rough analogue of Intel's Turbo Boost, with the difference that processors raise the multiplier, while the video card adds frequency in megahertz steps. Using software, you can raise the power target and try to push the video card above its rated Boost Clock. Looking ahead a little, I will say that the GeForce GTX 680 has no problems with overclocking.
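For those who prefer scripting to clicking through a GUI, the power target can also be inspected and raised programmatically. Below is a minimal sketch using the pynvml bindings to NVIDIA's NVML library; it assumes pynvml is installed and the driver exposes power-management controls (setting the limit requires administrator rights, and on a board as old as the GTX 680 the driver may reject the call).

```python
# Minimal sketch: read and raise the GPU power limit via NVML.
# Assumes the pynvml package; NVML reports values in milliwatts.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimit,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    current = nvmlDeviceGetPowerManagementLimit(gpu)
    low, high = nvmlDeviceGetPowerManagementLimitConstraints(gpu)
    print(f"power limit: {current / 1000:.0f} W "
          f"(allowed range {low / 1000:.0f}-{high / 1000:.0f} W)")
    # Raise the power target to the board's maximum, the same
    # "Power Limit to maximum" step the overclocking tools perform.
    nvmlDeviceSetPowerManagementLimit(gpu, high)
finally:
    nvmlShutdown()
```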

The ASUS GeForce GTX 680 video card is a copy of the reference sample. The engineers' original solutions go into other models, such as the DirectCU II, which I hope will make it into my tests.

The video core is powered by a four-phase power system. Video memory receives power in a two-phase scheme.

Additional power is supplied to the video card through two 6-pin PCI Express connectors. Their placement is very unusual: one sits above the other, with the connector farther from the board recessed about a centimeter lower. Power consumption is 195 watts, and the manufacturer recommends at least a 550-watt power supply.

The video card supports the simultaneous connection of four monitors, for which it has four video outputs - two DVI, HDMI 1.4a and DisplayPort.

The core is cooled by a compact heatsink. Its fairly elaborate design, built around three flat heat pipes, copes with the heat perfectly.

A separate metal plate covers the entire board and cools the video memory chips and power elements on the board. The entire system is blown with a turbine-type fan.

To test the video card, the following system was assembled:

CPU

  • Intel Core i7 3770K (3.5 GHz), Socket 1155

Motherboard

  • MSI Z77A-GD65, Intel Z77 chipset, Socket 1155

Video card

  • Leadtek GeForce GTX 580

RAM

  • G.Skill RipjawsX DDR3-1866 CL9 2×4096 MB

Power supply

  • Enermax Revolution 85+ 1020W

Storage

  • Kingston HyperX SSD 240 Gb

Test bench

  • Dimastech Benchtable

Monitor

  • Acer V243H

Keyboard

  • Logitech Illuminated Keyboard

Mouse

  • Logitech MX518

A set of test applications with settings:

Synthetic tests:

  • 3D Mark 11, Entry, Performance, Extreme presets
  • 3D Mark Vantage, Performance preset
  • Unigine Heaven HWBOT Edition, DX 11, Extreme preset

Game tests:

  • Aliens vs Predator DX11 Benchmark, 1920×1080
  • Dirt 3, 1920×1080, built-in benchmark
  • Metro 2033, 1920×1080, built-in benchmark
  • Lost Planet 2 DX 11, 1920×1080, benchmark A
  • Street Fighter benchmark, 1920×1080

For testing, the ASUS GeForce GTX 680 was overclocked to 1220 MHz on the graphics core and 1600 MHz (6400 MHz effective) on the video memory. For comparison, we chose the flagship of the previous generation, the Leadtek GeForce GTX 580, the current leader from the competing camp, the AMD Radeon HD 7970, and the dual-chip card of the previous generation, the AMD Radeon HD 6990.

Previously, 3D Mark Vantage was considered the domain of NVIDIA cards; those days are gone. The reds are ahead by a serious margin.

3D Mark 11 was released fairly recently, and its results are more representative. At stock frequencies the ASUS GeForce GTX 680 loses slightly to its competitor, the AMD Radeon HD 7970, but overclocking lets it pull up very close.

The Aliens vs Predator gaming test also favored the reds. The advantage of the GeForce GTX 680 over its predecessor is about 20%.

Dirt 3 reveals the potential of NVIDIA video cards - the leadership here is deserved and natural.

Lost Planet 2's love for NVIDIA graphics accelerators is known and confirmed.

Metro 2033 runs better on the AMD Radeon, but the advantage is minimal, so it is more accurate to speak of parity.

The effect of overclocking the ASUS GeForce GTX 680 is practically absent here, which is somewhat puzzling; perhaps it is a quirk of the drivers or the architecture.

The synthetic test favored the Radeon, but keep in mind that AMD's drivers can scale back tessellation, which this test uses heavily.

Final thoughts

The ASUS GeForce GTX 680 demonstrates high performance and a significant advantage over the previous-generation flagships based on the Fermi core. In battles with the red competitors it does not always come out ahead, but this is offset by lower power consumption and better anti-aliasing technologies. Expect even higher frequencies from non-reference versions, and then the product will be even more successful. My personal feeling is that the potential of the GeForce GTX 680 is very high, and ASUS will certainly be able to put it to good use. In the meantime, for high quality and excellent performance, the ASUS GeForce GTX 680 receives the "Our Choice" award from OCClub.

Let's move on to the overclocking tests. We tried to squeeze the maximum frame rate out of all three video cards, going beyond the factory levels chosen by the manufacturers. As usual, we raised the Power Limit to its maximum and the voltage to the maximum allowable 1.175 V.

EVGA GeForce GTX 680 SC Signature 2:

We started from the stock GPU frequency of 1098 MHz and reached 1144 MHz, an overclock of about four percent. We were able to raise the memory frequency from 1552 MHz to 1706 MHz. Considering EVGA's reference PCB design and the chosen cooling system, the result is quite acceptable.

Gigabyte GeForce GTX 680 Super Overclock:

Given its elaborate, powerful cooling and reinforced power subsystem, we expected good results from the Gigabyte GeForce GTX 680 Super Overclock. We raised the GPU frequency from 1137 to 1262 MHz, an overclock of about 11 percent, and overclocked the memory from 1552 MHz to 1728 MHz. We could not, of course, match the results of a dedicated overclocker card, but overall we were satisfied.

Point of View TGT GeForce GTX 680 Beast:

POV/TGT shipped the Beast with already quite high clock speeds. Even so, we tried to push them further. We were able to overclock the GPU from 1163 MHz to 1212 MHz, an overclock of just over four percent, and to increase the memory frequency from 1502 MHz to 1754 MHz.
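As a quick sanity check on the percentages quoted above, the sketch below recomputes each gain from the clock figures; it also derives the effective GDDR5 data rate, which is four times the memory clock.

```python
# Recompute the overclock gains quoted above. GDDR5 transfers four
# bits per pin per clock, so the effective data rate is the memory
# clock multiplied by four.
cards = {
    "EVGA GTX 680 SC Signature 2":      {"gpu": (1098, 1144), "mem": (1552, 1706)},
    "Gigabyte GTX 680 Super Overclock": {"gpu": (1137, 1262), "mem": (1552, 1728)},
    "POV/TGT GTX 680 Beast":            {"gpu": (1163, 1212), "mem": (1502, 1754)},
}

for name, clocks in cards.items():
    stock, oc = clocks["gpu"]
    _, mem_oc = clocks["mem"]
    gain = (oc / stock - 1) * 100
    print(f"{name}: GPU +{gain:.1f}%, "
          f"memory {mem_oc} MHz ({mem_oc * 4} MHz effective)")
```

For the three cards this yields roughly +4.2%, +11.0% and +4.2% on the GPU, matching the figures given in the text.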

Overclocking resulted in the following performance results:

In terms of energy consumption, temperature and noise level, we got the following results:

Power consumption of the entire system under load, W.

Temperature under load, degrees Celsius.

Noise level under load, dB(A).

