
GTX 680 Lightning Drivers?


6 replies to this topic

#1 Rex Budman

    Member

  • Survivor
  • 841 posts

Posted 15 March 2014 - 08:24 PM

Hey all :D

I have a GTX 680 Lightning (I think it's a Ti but I'm not sure).

It's the 2GB version: PCIe 3.0, GDDR5.

Anyway, I usually just get my drivers from Nvidia GeForce Experience. However, in the My Rig screen it just says I have a 680 - a regular old 680. Isn't it supposed to read Lightning?

So my question is: where should I get the drivers for this card?

Also, it's strange, because MSI Afterburner only reads:

Core Clock: 705MHz
Shader is locked
Mem Clock: 3004MHz

Aren't these supposed to be a lot higher?

Wow, I've fallen so far behind in keeping up with technology B)

Oh, and also, I can't find any definitive info on this on Google (damn my search capabilities): should I flash the BIOS to the LN2 one? Is that something that suits this card? At the moment I'm just using it straight out of the box.

If someone could give me some advice it would be extremely appreciated, and I welcome any comments in the thread.

Thank you all for your time.

Edited by Rex Budman, 15 March 2014 - 08:27 PM.


#2 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 15 March 2014 - 09:17 PM



guru3d.com

#3 Durant Carlyle

    Member

  • Survivor
  • 3,877 posts
  • Location: Close enough to poke you with a stick.

Posted 15 March 2014 - 10:56 PM

1. Lightning is part of the MSI branding. It is not an official Nvidia designation, so it will not show up in GeForce Experience that way. If it shows up as a plain 680 in GFE, then it's not a 680 Ti.
2. Get your drivers from GeForce.com like every other Nvidia graphics card. I find upgrading through GeForce Experience is more hassle than it's worth.
3. Yes, the core clock is supposed to be higher than that. Play a match in MW:O and, when it finishes, Alt-Tab out and check the Afterburner graph (or poll the clocks with the sketch after this list). If the graph shows 705MHz even while you were in game, then there's a problem. It should actually be somewhere around 1100MHz in-game.
4. Straight out of the box is fine. The LN2 BIOS only improves anything if you are a professional overclocker with extreme cooling like LN2.
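
If you'd rather log the clocks than eyeball the Afterburner graph, below is a minimal sketch that polls the current core and memory clocks once per second through nvidia-smi. It assumes a driver whose nvidia-smi supports --query-gpu with the clocks.gr and clocks.mem fields and a single GPU; if your GeForce/Windows driver combination doesn't report these, stick with the Afterburner graph.

import subprocess
import time

def poll_clocks(seconds=60):
    """Print the current GPU core/memory clocks once per second (hypothetical helper)."""
    for _ in range(seconds):
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=clocks.gr,clocks.mem",
             "--format=csv,noheader,nounits"]
        ).decode()
        core_mhz, mem_mhz = (int(v) for v in out.splitlines()[0].split(","))
        # Idle Kepler cards drop to low 2D clocks; a GTX 680 under load
        # should boost to roughly 1100MHz or more on the core.
        state = "boosting" if core_mhz > 1000 else "idle/2D clocks"
        print("core %4d MHz, mem %4d MHz  (%s)" % (core_mhz, mem_mhz, state))
        time.sleep(1)

if __name__ == "__main__":
    poll_clocks()

Run it alongside a match: the core value should jump well above 705MHz under load, and the memory value is the same 3004MHz-style number Afterburner reports.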

Edit: Is there such a thing as a 680 Ti? I couldn't find anything except some Russian-looking websites that could easily have been typos.

Edited by Durant Carlyle, 15 March 2014 - 11:06 PM.


#4 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 15 March 2014 - 11:23 PM

There's a 780Ti, but not a 680Ti …

#5 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 16 March 2014 - 05:53 AM

Nope, you have the big mack-daddy, fully uncorked GK104, no Ti necessary. Badassness out of the box.

#6 Lord Letto

    Member

  • Giant Helper
  • 900 posts
  • Location: St. Clements, Ontario

Posted 17 March 2014 - 07:21 AM

If this is the one: http://www.newegg.co...N82E16814127693
The core clock should be 1110MHz with a turbo boost up to 1176MHz, and the memory clock should be 6008MHz.
The shader clock should be linked to the core clock; at least it is with my EVGA 560 Ti 2GB GDDR5.
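
A quick aside on that memory figure, since the 3004MHz Afterburner reading in the first post looks like half of it: GDDR5 is quad-pumped, and different tools report different multiples of the same physical clock, so, assuming the 6008MHz effective rate above is the spec, the two numbers are consistent:

\[
3004\,\text{MHz} \times 2 = 6008\,\text{MHz (effective GDDR5 data rate)}, \qquad
6008\,\text{MHz} \div 4 = 1502\,\text{MHz (actual memory clock)}
\]

So the Afterburner memory reading isn't a sign of a problem by itself; it's just reporting the double-data-rate figure rather than the quad-pumped effective one.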

#7 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 17 March 2014 - 09:50 AM

Lord Letto, on 17 March 2014 - 07:21 AM, said:

If this is the one: http://www.newegg.co...N82E16814127693
The core clock should be 1110MHz with a turbo boost up to 1176MHz, and the memory clock should be 6008MHz.
The shader clock should be linked to the core clock; at least it is with my EVGA 560 Ti 2GB GDDR5.

580 = GF(ermi)110 / 680 = GK(epler)104.

Where the goal of the previous architecture, Fermi, was to increase raw performance (particularly for compute and tessellation), Nvidia's goal with the Kepler architecture was to increase performance per watt, while still striving for overall performance increases.[2] The primary way they achieved this was through the use of a unified clock. By abandoning the shader clock found in their previous GPU designs, efficiency is increased, even though it requires more cores to achieve similar levels of performance. This is not only because the cores are more power efficient (two Kepler cores use about 90% of the power of one Fermi core, according to Nvidia's numbers), but also because the reduction in clock speed delivers a 50% reduction in power consumption in that area.[3]

GPU Boost

GPU Boost is a new feature which is roughly analogous to the turbo boosting of a CPU. The GPU is always guaranteed to run at a minimum clock speed, referred to as the "base clock". This clock speed is set to the level which will ensure that the GPU stays within TDP specifications, even at maximum loads.[2] When loads are lower, however, there is room for the clock speed to be increased without exceeding the TDP. In these scenarios, GPU Boost will gradually increase the clock speed in steps, until the GPU reaches a predefined power target (which is 170W by default).[3] By taking this approach, the GPU ramps its clock up or down dynamically, so that it provides the maximum speed possible while remaining within TDP specifications.

In short: lower power and TDP, better efficiency and parallel processing, and a 2000MHz increase in GDDR5 speeds.

Kepler loses the shader clock that held the 400-series and 500-series back. Those cards still do okay, but they are generally hotter and more power-hungry.
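
To make that GPU Boost description concrete, here is a toy model of the idea: step the clock up while the (pretend) board power stays under the 170W power target, and step it back down when it doesn't. The base clock and power target come from the figures quoted above; the step size, the clock ceiling, and the power function are made-up numbers for illustration only, not Nvidia's actual algorithm.

# Toy sketch of the GPU Boost idea described above. The power model,
# step size, and ceiling are invented for illustration; only the 1110MHz
# base clock and 170W power target come from the thread.

BASE_CLOCK_MHZ = 1110   # guaranteed minimum under load (per the listing above)
MAX_BOOST_MHZ = 1300    # arbitrary ceiling for the toy model
POWER_TARGET_W = 170    # default power target mentioned above
STEP_MHZ = 13           # made-up boost step size

def fake_power_draw(clock_mhz, load):
    """Stand-in for a real power sensor: draw rises with clock and load."""
    return load * (50 + 0.1 * clock_mhz)

def boost_step(clock_mhz, load):
    """One control step: raise the clock if there is power headroom, else back off."""
    if fake_power_draw(clock_mhz + STEP_MHZ, load) <= POWER_TARGET_W:
        return min(clock_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    return max(clock_mhz - STEP_MHZ, BASE_CLOCK_MHZ)

clock = BASE_CLOCK_MHZ
for load in [0.6, 0.6, 0.6, 0.9, 1.0, 1.0, 1.0, 0.7]:  # pretend per-frame GPU load
    clock = boost_step(clock, load)
    print("load %.1f -> %d MHz (%.0f W)" % (load, clock, fake_power_draw(clock, load)))

The printed clock climbs while there is headroom and steps back down when the fake power draw would cross the target, which is the ramp-up/ramp-down behaviour the quoted text describes.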




