GTX 680 vs GTX 980 performance difference?

In summary, the conversation discussed upgrading from two Nvidia GeForce GTX 680's in SLI to two GTX 980's in SLI. The 980 was estimated to be more than twice as fast as the Kepler-based 680's. There was a question about whether an Intel Core i7 3770K with 16GB of RAM would bottleneck the new GPUs, and it was debated whether the latest iteration of the Core i7 processor and PCIe 3.0 x16 were necessary to properly run the new cards. The conversation then shifted to which games the poster was trying to max out settings for, and the difference in performance between the GTX 970 and GTX 980.
  • #1
ElliotSmith
I have two Nvidia GeForce GTX 680's in SLI and was considering saving up some money to buy two GTX 980's and put them in SLI.

How much more performance should I expect to see?

Based on what I've read, the 980 is more than twice as fast as the Kepler-based 680. And would an Intel Core i7 3770K (Ivy Bridge) @4.0 GHz with 16GB of RAM bottleneck those two GPU's?

Do I really need to upgrade my CPU and motherboard to the latest iteration of the Core i7 processor to handle the massive throughput of these monster GPU's? Is it absolutely imperative to have PCIe 3.0 x16 to properly run these cards?
 
  • #3
ElliotSmith said:
I have two Nvidia GeForce GTX 680's in SLI and was considering saving up some money to buy two GTX 980's and put them in SLI.

How much more performance should I expect to see?

Based on what I've read, the 980 is more than twice as fast as the Kepler-based 680.
Nvidia publishes a "compute capability" measure for their GPUs (https://developer.nvidia.com/cuda-gpus). The GTX 680's you have are shown as having a 3.0 compute capability, using the Kepler architecture. The 980 is shown as having a 5.2 compute capability and uses, I'm pretty sure, the Maxwell architecture, the latest one they've produced.

Here are some of the per-multiprocessor specs for the Kepler architecture (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#architecture-3-0):
192 CUDA cores for arithmetic operations
32 special function units for single-precision floating-point transcendental functions
4 warp schedulers

For the Maxwell architecture:
Same as above, except that there are 128 CUDA cores per multiprocessor. The compute capability number is really a feature-set version rather than a speed rating; the 980's performance advantage comes from having more multiprocessors and somewhat higher clocks than the 680, along with better per-core efficiency.
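If you want to see what a given card reports, the compute capability and per-multiprocessor figures can be read at runtime through the CUDA runtime API. A minimal sketch (assuming the CUDA toolkit is installed; compile with nvcc query_device.cu -o query_device):

// query_device.cu -- print each GPU's compute capability and multiprocessor count
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major and prop.minor form the compute capability, e.g. 3.0 on a GTX 680
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors, %d MHz core clock\n",
               dev, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount, prop.clockRate / 1000);  // clockRate is reported in kHz
    }
    return 0;
}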
ElliotSmith said:
And would an Intel Core i7 3770K (Ivy Bridge) @4.0 GHz with 16GB of RAM bottleneck those two GPU's?
Don't know. However, if the games are well-written, most of the work should be done on the GPUs, with the CPU acting as host to start the ball rolling.
ElliotSmith said:
Do I really need to upgrade my CPU and motherboard to the latest iteration of the Core i7 processor to handle the massive throughput of these monster GPU's? Is it absolutely imperative to have PCIe 3.0 x16 to properly run these cards?
Your Ivy Bridge 3770K is still a fairly modern CPU, so I wouldn't think so; I don't believe there is all that much difference between it and the latest Core i7 generation for this purpose.

BTW, I just bought a GeForce GTX 750, which has a compute capability of 5.0 and set me back only about $100. I'm not at all interested in gaming, but I am very much interested in getting involved with CUDA programming to write code that uses the parallel capabilities of the GPU.
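For anyone else starting out with CUDA, this is roughly the kind of thing I mean -- a minimal sketch of a single-precision vector add (the file and variable names are just illustrative, and it assumes a CUDA 6 or later toolkit for unified memory):

// vector_add.cu -- minimal CUDA example: c[i] = a[i] + b[i]
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // unified memory keeps the sketch short; cudaMalloc/cudaMemcpy works just as well
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expect 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compile with nvcc vector_add.cu -o vector_add; even a $100 card like the GTX 750 chews through a million elements almost instantly.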
 
  • #4
Greg Bernhardt said:
Just out of curiosity, what games are you trying to max out settings for?

All of the graphics-intensive games like Battlefield 4 and Battlefield Hardline, Crysis 3, and future titles like Grand Theft Auto 5, Alien Isolation, Doom 4, etc...

Someone told me that GTX 970's in SLI give you the biggest bang for your buck and save you $400 compared to getting two GTX 980's.

There is only a 10% performance difference between the 970 and 980.
 
  • #5
If you're doing CUDA, remember Nvidia cripples double-precision math on all but a few of its gaming cards.
 
  • #6
vociferous said:
If you're doing CUDA, remember Nvidia cripples double-precision math on all but a few of its gaming cards.
This is where it's helpful to know the compute capability of the card in question. A double-precision floating point unit is available only on devices with compute capability of 1.3 or above. This page, https://developer.nvidia.com/cuda-gpus, lists the compute capabilities of the various NVIDIA GPUs.
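If you want code to fail gracefully on cards without (or with very slow) double precision, one option is to check the compute capability at runtime before launching FP64 work. A rough sketch, assuming the CUDA runtime API:

// has_fp64.cu -- check whether device 0 has double-precision hardware
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // double-precision units are present from compute capability 1.3 onward
    bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("%s: compute %d.%d, double precision %s\n",
           prop.name, prop.major, prop.minor,
           hasDouble ? "supported (but throttled on most GeForce cards)" : "not supported");
    return 0;
}

Keep in mind the check only tells you the hardware exists; on most GeForce cards the FP64 rate is a small fraction of the FP32 rate, which is the "crippling" vociferous mentions.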
 
  • #7
Mark44 said:
This is where it's helpful to know the compute capability of the card in question. A double-precision floating point unit is available only on devices with compute capability of 1.3 or above. This page, https://developer.nvidia.com/cuda-gpus, lists the compute capabilities of the various NVIDIA GPUs.

This page lists the specs:

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

You'll notice that Nvidia has so far crippled double-precision floating point math on all but three of its GPUs (GTX Titan, Titan Black, Titan Z) in order to avoid competing with its Tesla and Quadro series. Still, if you've priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is still quite a deal compared to the "professional" solution Nvidia is pushing for CUDA.
 
  • #8
vociferous said:
This page lists the specs:

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

You'll notice that Nvidia has so far crippled double-precision floating point math on all but three of its GPUs (GTX Titan, Titan Black, Titan Z) in order to avoid competing with its Tesla and Quadro series. Still, if you've priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is still quite a deal compared to the "professional" solution Nvidia is pushing for CUDA.
What I did was look for the GPUs with the highest compute capability, which turn out to use the Maxwell architecture, the most recent. I picked the GeForce GTX 750, which I got for about $100 US.

Again, my interest is getting up to speed in CUDA programming. I have no interest in gaming, other than to play Solitaire :D.
 
  • #9
vociferous said:
This page lists the specs:

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

You'll notice that Nvidia has so far crippled double-precision floating point math on all but three of its GPUs (GTX Titan, Titan Black, Titan Z) in order to avoid competing with its Tesla and Quadro series. Still, if you've priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is still quite a deal compared to the "professional" solution Nvidia is pushing for CUDA.

No, I won't be doing any CUDA computing, just hardcore gaming.

I haven't decided which brand of GTX 970 I should buy. Someone told me that the EVGA ACX cooler has one of its heat pipes misaligned so that it doesn't make contact with the GPU. The ASUS STRIX and MSI Twin Frozr coolers are two other very good options, though.

Off-topic, but before I quit playing a few months ago (because of health concerns), I was ranked as one of the top 5 players in the world for the PC version of Battlefield 4. That's how much of a gaming enthusiast I am.
 
  • #10
IMO, go for the MSI Twin Frozr. I've always been an ASUS fanboy, but the Twin Frozr cards are hard to beat on price/performance.
 
  • #11
MSI is known for over-engineering their cards, especially their flagship brands.
 
  • #12
I've been playing Wolfenstein: The New Order on medium/high with an old GeForce 560M and it's smooth. It's hard for me to imagine that two 980s are going to make enough difference to justify the price.
 

Related to GTX 680 vs GTX 980 performance difference?

What is the difference in performance between the GTX 680 and GTX 980?

The GTX 980 generally performs better than the GTX 680, with an average improvement of about 30% in benchmarks and real-world applications.

What factors contribute to the performance difference between the GTX 680 and GTX 980?

There are several factors that contribute to the performance difference, including the newer architecture of the GTX 980, higher clock speeds, more CUDA cores, and increased memory bandwidth.
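As a rough back-of-envelope illustration of how core count and clock speed combine, theoretical single-precision throughput is CUDA cores × 2 FLOPs per clock × clock rate. The sketch below uses the commonly published reference-card specs, so treat the numbers as approximate; the raw FLOPS ratio is only part of the story, since architectural efficiency and memory bandwidth also matter.

// flops_estimate.cpp -- approximate peak FP32 throughput from reference specs
#include <cstdio>

int main() {
    // Reference-card specs (approximate; individual boards vary)
    const int    cores680 = 1536;       // GTX 680 CUDA cores
    const double clk680   = 1.006e9;    // GTX 680 base clock, Hz
    const int    cores980 = 2048;       // GTX 980 CUDA cores
    const double clk980   = 1.126e9;    // GTX 980 base clock, Hz

    // 2 FLOPs per core per clock (one fused multiply-add)
    const double gflops680 = cores680 * 2.0 * clk680 / 1e9;
    const double gflops980 = cores980 * 2.0 * clk980 / 1e9;

    printf("GTX 680: ~%.0f GFLOPS FP32\n", gflops680);     // ~3090
    printf("GTX 980: ~%.0f GFLOPS FP32\n", gflops980);     // ~4610
    printf("Raw ratio: ~%.2fx\n", gflops980 / gflops680);  // ~1.49x
    return 0;
}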

Is the performance difference between the GTX 680 and GTX 980 significant?

Yes, the performance difference between the GTX 680 and GTX 980 is significant and can make a noticeable impact on gaming and other graphics-intensive tasks.

Can the performance difference between the GTX 680 and GTX 980 be attributed solely to hardware upgrades?

No, while the hardware upgrades do play a major role in the performance difference, software optimizations and driver updates also contribute to the improved performance of the GTX 980.

Are there any specific use cases where the GTX 680 may outperform the GTX 980?

In some cases, the GTX 680 may perform better than the GTX 980 in older or less demanding games or applications. However, in most modern and graphics-intensive tasks, the GTX 980 will outperform the GTX 680.
