It Has Arrived
nVidia’s hotly anticipated GTX 680M has finally arrived! If you are a regular Tech|Inferno visitor, then you know that we took a poll in our forum where members got to choose which games they wanted to see benchmarked against the GTX 580M in SLi. Before we get into the results, let’s have a quick overview of what the new GTX 680M brings to the table.
The GTX 680M is built on TSMC’s 28nm process around the GK104 core, which uses the Kepler architecture. Kepler is the successor to the Fermi architecture used in nVidia’s previous-generation GTX 580M (later rebadged as the GTX 675M), and compared to its predecessor it adds support for PCI Express 3.0 and more power-efficient shaders, along with a host of other new features.
The biggest change nVidia made with Kepler is doing away with the shader clock: Kepler shaders run at the core frequency (e.g. 720 MHz for the 680M), whereas Fermi’s operated at twice the core frequency. Consequently, a Kepler shader would have to do twice as much work per clock to match a Fermi shader, so nVidia instead packs many more shaders into each SM. With the SMX changes in Kepler, nVidia claims an SMX is about twice as powerful as a Fermi SM.
nVidia also introduced some new features with Kepler including GPU Boost, Adaptive Vsync and TXAA. The Kepler architecture is all about efficiency while achieving stellar performance and one way it does that is through a feature called GPU Boost.
GPU Boost is similar to Intel’s Turbo Boost – it works in real time, monitoring variables such as power consumption, temperature, GPU utilization, and memory utilization, and if it sees that resources aren’t being fully utilized by a particular game, it will increase the core frequency and corresponding voltage to give a performance boost. So for example, with the 680M being a 100W TDP (thermal design power) video card, if a game like Battlefield 3 does not saturate the 680M’s TDP, the additional headroom will allow the card to boost to a higher clock for added performance.
Thus there are now two clocks, the base clock and the boost clock, and the amount of boost you get depends on the game you’re playing along with resource utilization. Theoretically, if your notebook has excellent cooling built in by the manufacturer, the 680M should be able to reach boost clocks frequently in most games. Additionally, when the base clock is overclocked, the boost clock increases as well, providing extra headroom for potential performance.
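The boosting behavior described above can be sketched as a toy model. Everything here is illustrative: the boost step size, the per-step power and temperature costs, and the temperature limit are made-up numbers, since nVidia’s actual algorithm is proprietary and monitors far more variables.

```python
# Hypothetical, greatly simplified model of GPU Boost behaviour.
# Step size, per-step costs, and the temperature limit are illustrative
# assumptions, not nVidia's real values.

BASE_CLOCK_MHZ = 720      # GTX 680M base clock
TDP_WATTS = 100           # GTX 680M rated TDP
BOOST_STEP_MHZ = 13       # illustrative boost bin size

def boosted_clock(power_draw_w, temp_c, temp_limit_c=87):
    """Raise the core clock in steps while power and thermal headroom remain."""
    clock = BASE_CLOCK_MHZ
    while power_draw_w < TDP_WATTS and temp_c < temp_limit_c:
        clock += BOOST_STEP_MHZ
        power_draw_w += 2.5   # illustrative power cost per boost bin
        temp_c += 0.5         # illustrative temperature cost per boost bin
    return clock

# A game drawing only 80 W leaves headroom, so the card climbs several bins:
print(boosted_clock(power_draw_w=80, temp_c=70))   # -> 824
```

A game that already saturates the 100 W budget would stay at the base clock, which matches the behavior the article describes: boost is opportunistic, not guaranteed.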
Anytime vsync is enabled when gaming on a typical 60 Hz display, there is evident stuttering whenever the framerate dips below 60 FPS. Because of the way vsync works, the framerate doesn’t fall off smoothly (e.g. 60 fps to 57 to 45, etc.); instead it drops to even divisors of the refresh rate, from 60 fps straight to 30 fps and so on. As a result, there is noticeable stuttering.
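The reason for that hard 60-to-30 drop is that with vsync on, a finished frame must wait for the next screen refresh. A frame that takes even slightly longer than one refresh interval (~16.7 ms at 60 Hz) is held for two intervals, halving the displayed rate. A quick worked calculation, assuming an idealized display and ignoring buffering details:

```python
import math

REFRESH_HZ = 60
REFRESH_INTERVAL_MS = 1000 / REFRESH_HZ   # ~16.7 ms per refresh

def vsynced_fps(render_time_ms):
    """With vsync on, the displayed rate snaps to an even divisor of the
    refresh rate, because each frame waits for a whole number of refreshes."""
    refreshes_waited = math.ceil(render_time_ms / REFRESH_INTERVAL_MS)
    return REFRESH_HZ / refreshes_waited

# An 18 ms frame (~55 fps unsynced) is held for two refreshes, so the
# displayed rate jumps straight to 30 fps rather than easing down to 55:
print(vsynced_fps(18))   # -> 30.0
print(vsynced_fps(16))   # -> 60.0
```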
nVidia’s solution is a simple one: the driver monitors how many frames are being rendered while Adaptive Vsync is enabled, and anytime it detects a drop below 60 fps, it turns vsync off so the transition is a smooth one rather than one big jump from 60 fps to 30 fps. Theoretically this should alleviate much of the stuttering gamers experience while playing with vsync enabled.
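The decision the driver makes each frame can be reduced to a one-line rule. This is a hedged sketch of the behavior the article describes, not nVidia’s actual driver logic, which is not public:

```python
# Hypothetical sketch of the Adaptive Vsync decision, assuming the driver
# simply compares the measured framerate against the display refresh rate.

REFRESH_HZ = 60

def adaptive_vsync(measured_fps):
    """Return True (vsync on) only when the GPU keeps pace with the display.

    Below the refresh rate, vsync is switched off so frames are shown as
    soon as they finish, avoiding the hard 60 -> 30 drop; once the GPU
    keeps up again, vsync re-engages to prevent tearing.
    """
    return measured_fps >= REFRESH_HZ

for fps in (62, 60, 57, 45):
    state = "on" if adaptive_vsync(fps) else "off"
    print(f"{fps} fps -> vsync {state}")
```

The tradeoff is that while vsync is off, screen tearing can return; Adaptive Vsync trades a little tearing below 60 fps for the stutter-free transition described above.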
TXAA is a new feature integrated into game engines that nVidia claims will give CG-movie-quality anti-aliasing by combining MSAA with resolve filters to produce a smoother image. TXAA also features temporal anti-aliasing, which jitters sample positions from frame to frame. Temporal anti-aliasing isn’t something new and has been available on AMD GPUs for a while now. Anyone who has used it will know that it can sometimes produce a blurry picture.
TXAA comes in two flavors, TXAA 2x and TXAA 4x. TXAA 2x is supposed to provide image quality similar to 8xMSAA with the performance penalty of 2xMSAA, while TXAA 4x is supposed to produce image quality superior to 8xMSAA with the performance penalty of 4xMSAA. Currently there aren’t any games on the market with which to test this feature, but nVidia’s website mentions the following games, engines, and studios that will have it: MechWarrior Online, Secret World, Eve Online, Borderlands 2, Unreal Engine 4, BitSquid, Slant Six Games, and Crytek.