
New GPU Time... 390 or 970? 980?


41 replies to this topic

#21 Napoleon_Blownapart

    Member

  • Shredder
  • 1,171 posts

Posted 24 February 2016 - 10:39 AM

It's hard to give an informed answer with the DX12 transition still ahead, but so far everything I have read says AMD performs better with DX12.

#22 BigBadVlad

    Member

  • Elite Founder
  • 242 posts

Posted 25 February 2016 - 08:58 AM

Thanks for that Tech Report article... good read about DSR. And nope, I didn't realize that DSR tricks your application (game) into enabling higher resolution options than your monitor can actually do. I am still on 1080p, heh, whoops! I'll probably play with 2x DSR plus some form of AA versus 4x DSR. Have to see if maybe she can run 4x DSR and some level of AA.
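For my own reference, here's the rough arithmetic on what those DSR factors mean at 1080p. This is just illustrative math, assuming a 1920x1080 native resolution: the DSR factor scales total pixel count, so each axis scales by its square root.

```python
import math

# Assumed native resolution: 1920x1080, as mentioned above.
native_w, native_h = 1920, 1080

# Nvidia's DSR factors multiply the total pixel count,
# so each axis scales by the square root of the factor.
for factor in (1.50, 2.00, 3.00, 4.00):
    scale = math.sqrt(factor)
    w, h = round(native_w * scale), round(native_h * scale)
    print(f"{factor:.2f}x DSR -> render at roughly {w} x {h}, downsampled to {native_w} x {native_h}")

# 4.00x at 1080p works out to 3840x2160; 2.00x to roughly 2715x1527.
```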
I'll look for the MFAA option but it sounds like that may not be available...
Oh yeah, thanks a lot Goose, Catamount and others!

Edited by BigBadVlad, 25 February 2016 - 09:00 AM.


#23 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 25 February 2016 - 09:18 AM

I've got a 980Ti: MFAA is there for the taking …

Look: TXAA2x + MFAA loads me up about as much as three "bumps" of DSR + PostAA + SMAA via ReShade. I'm at 1800 x 1440 @ 70Hz. The trade-off seems to be between lacking footprints in the MSAA-related modes and putting up with a fair bit of swimming …

Oh: Don't forget the Sharpness slider in DSR.

#24 dwwolf

    Member

  • Overlord
  • 476 posts

Posted 27 February 2016 - 08:57 AM

None at this point in time.
Gfx cards will be making the jump from 28nm to 14-ish nm nodes this year. That means more transistors per mm^2, which means better performance.
They will likely all become fully DX12 compliant at the same time; the upper tier will switch to HBM2 and the mid tier likely to GDDR5X memory.
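As a rough back-of-the-envelope (purely my own naive math, not a vendor figure): if density really scaled with the square of the node name, 28nm to 14nm would be a 4x jump, though real 14/16nm FinFET nodes deliver less than that ideal.

```python
# Naive ideal scaling: area per transistor shrinks with the square of the
# linear feature size. Purely illustrative; marketing node names ("14nm",
# "16nm") don't map exactly to physical dimensions, so real gains are lower.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(28, 14))  # 4.0x in the ideal case
print(ideal_density_gain(28, 16))  # ~3.1x in the ideal case
```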

It makes zero sense to invest now, so don't buy a god-tier gfx card.
It's the first time in 4 years that gfx card manufacturers are changing the process node, mainly due to TSMC's botched 20nm node; the significant performance increases we did get in that time came only from very rigorous design optimisations.
I reluctantly bought an AMD 380 in October 2015 since my HD 6870 fried. Very reluctantly.

Edited by dwwolf, 27 February 2016 - 09:00 AM.


#25 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 27 February 2016 - 01:26 PM

I'm not really expecting any quantum leaps from the new series, not because we're now past the point at which we should expect that, but because we were past that point in 2011. New fab processes haven't been giving magical increases in performance for years, either, in any area of computing (flash storage maybe?). Power efficiency gains are likely to be decent, but beyond that, I'm expecting little more than the progressive plodding along with incremental improvements that we've had since this decade began. GM200 cards are also already DX12.1 compliant.

Moreover, we don't know when these cards are coming out. The fact that we don't even have a release date suggests it's not going to be tomorrow, the day after that, the week after that, or the next few months after that. H2 sometime? Christmas? 2017 if a delay hits or yields on the new process are low and we get a paper launch? Who knows?

Given what we do know, and more importantly what we don't, I'm really not inclined to criticize buying right now. If these cards were coming out next week or next month I might feel differently, but they aren't.

#26 Rykiel

    Member

  • 82 posts
  • Location: New York City

Posted 27 February 2016 - 08:47 PM

Catamount, on 27 February 2016 - 01:26 PM, said:

Moreover, we don't know when these cards are coming out. The fact that we don't even have a release date suggests it's not going to be tomorrow, the day after that, the week after that, or the next few months after that. H2 sometime? Christmas? 2017 if a delay hits or yields on the new process are low and we get a paper launch? Who knows?

Given what we do know, and more importantly what we don't, I'm really not inclined to criticize buying right now. If these cards were coming out next week or next month I might feel differently, but they aren't.


From all that I have read, both AMD and NVIDIA are planning releases in late Q2 2016 or Q3 2016 -- I'm personally going with a late Q2 release (June). Most likely (at least for NVIDIA) it will be their successor to the Titan X -- Titan V as the possible name?

Vulkan's steady adoption (including by NVIDIA) makes for some interesting possibilities in lots of future games. Possibly more (and better) use of multiple cores/threads?

Edited by Rykiel, 27 February 2016 - 09:08 PM.


#27 dwwolf

    Member

  • Overlord
  • 476 posts

Posted 28 February 2016 - 01:25 AM

I disagree on the performance increase.
This smaller process node gives them a huge boost in transistor count and heat/power budget.

That enables them to check off DX12 support levels (both AMD and Nvidia do not fully support all the options). It also gives them room to work on their respective cards' weak points (AMD being weaker in tessellation, for instance, and Nvidia in async shader ops) without needing to compromise the rest of the chip design.

Memory tech upgrades (HBM2 and GDDR5X) will also enable higher throughput for the mid- and high-end cards.

Will it be a 100% overall increase? I don't think so. But it should be more significant than we have seen for a long time.

Edited by dwwolf, 28 February 2016 - 01:31 AM.


#28 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 29 February 2016 - 06:16 PM

Smaller processes were, at one time, in an age of big transistors and few problems shrinking them, a source of extreme performance gains. What we've seen on the CPU side is that as processes have gotten down to a certain point, the long-predicted problems have hit. No, we didn't see it at 40nm on the GPU side, because that's still pretty large, but even at 28nm inklings of the same problems began to arise there. Nvidia vacillated between permanently voltage-locking cards and not, because the cards started to become voltage-intolerant, and even threatened to rescind warranty coverage for board partners who didn't cooperate in those endeavors and found workarounds for the voltage-locking (voltage control that EVGA had to re-enable via an external device on the GTX 680).

Even as Nvidia has loosened up on voltage control a bit, their boards haven't gotten any better at taking it. Increasing voltage on Maxwell is a dangerous game.

Intel has found power and voltage limitations becoming steep as well: yes, their chips get more power efficient, but they also tolerate less power. Intel hasn't actually gotten much net performance out of their processors with die shrinks. In fact, you could argue that they've gotten no net performance from them, because new series without a die shrink have netted as much performance as those with one, typically just with fewer power-draw benefits. Sandy Bridge won over the similarly 32nm Westmere due to an architectural change, Haswell gained as much performance as Ivy Bridge, etc.

If one looks back on GPUs to see where most of the performance increase came from in past years, before the slowdown occurred, it wasn't from smaller processes. It was from TDP increases. Yes, smaller processes, architectural changes, etc, obviously played a role, but once the TDP increases stopped, once we hit the effective limits of dual-slot air coolers, the magical year over year doubling ceased overnight.


28nm was a comfort zone. Nvidia themselves said years ago that smaller processes were netting diminishing returns, or even resulting in more expensive transistors (you can't pack more transistors for a given cost if each transistor costs more, even if they physically fit on the chip), and once they tried to move to 20nm, huge yield problems occurred. It seems CPUs and GPUs started suffering some of the same problems at just about the same changeover in size. I'm sure 16nm will be hugely beneficial to certain things. It means laptop chips might finally be equal to their desktop counterparts again for the first time in 15 years. We've already been closing that gap since it opened wide with the aforementioned upwards TDP race. It means GPUs won't require such beefy power supplies or cooling, may fit into smaller form factors more easily, etc. FinFETs will help even more.

Magical performance increases are another matter, and not something I think should be predicted at this stage, let alone with any certainty. Smaller processes just haven't been kind to tech companies of late.

Edited by Catamount, 29 February 2016 - 06:17 PM.


#29 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 29 February 2016 - 06:21 PM

Also, what particular DX12 features do current-gen cards not support? And which of the features that are there are we actually likely to see used?

Games have only just now begun to tiptoe into DX12, which is to say they've only barely begun to meander into feature sets that cards from 2009 didn't support (until last year I had yet to see a DX11 game that, say, a 5970 couldn't run well, and even then not for DX11 compliance reasons).

#30 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 01 March 2016 - 06:07 AM

Catamount, on 29 February 2016 - 06:16 PM, said:

snip

Yes, but you also seem to be forgetting that Intel has dedicated all of that extra space they gained from die shrinks to other things that aren't necessarily compute-related, like stronger integrated graphics and moving components traditionally found on the motherboard onto the CPU package. Nvidia and AMD won't be doing a whole lot of that; the extra die space is going directly to horsepower.

They are estimating that they can more than double the number of transistors with this shrink, from 8 billion to 17 billion. They are also moving from GDDR5 to GDDR5X and HBM 2.0. It isn't illogical to believe that small architecture improvements will also be present. We should see a decent bump in performance, though it might be reserved mostly for the high-end cards.

#31 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 01 March 2016 - 01:34 PM

I really doubt Intel has been eschewing performance intentionally on their chips, even for secondary gains. They may not have to compete with AMD, but they do have to compete with their own older generations of chips in justifying upgrades.

When Intel moved from Sandy Bridge to Ivy Bridge, they did hike the overall transistor count (and it most certainly didn't all go to the GPU), but more importantly they shrunk the chip and the TDP. That's important because it shows that Intel could have released a bigger, more powerful chip; what they couldn't do was release such a chip at the same price point. We'd later see bigger chips in the Ivy Bridge-E family, but at considerable price premiums.

This is exactly what Nvidia has been noting, something that came to the forefront of discussions when 20nm fell apart at the seams:

http://www.extremete...ially-worthless
The key advancement in smaller fabrication processes isn't just that you can technically fit more transistors, but that you can fit cheaper transistors. That is no longer the case. Power advancements may enable a 17 billion transistor Pascal GPU, but that's from the rumor mill alone as far as I'm aware, and let's be clear on what this will and won't be. It almost certainly won't be a 17 billion transistor GPU for the price of an 8 billion transistor Maxwell. It sounds like most or all of that is going into re-adding real compute performance to the cards, not boosting gaming performance, so basically, what we're being told is that there's a professional GPU coming with 17 billion transistors: a Titan that'll be meant more for modeling or GPGPU than gaming, or maybe a Quadro.

Unless our OP intends to have almost twice the cash to blow when Pascal releases (I have no doubt they'll milk some pricing improvements per transistor count out), he'll no more be able to afford a 17 billion transistor Pascal than he can afford two 8 billion transistor Maxwell GPUs today, and it doesn't sound like he's in for a GPGPU-focused card either.


In many ways, I think the FinFETs will be the better advancement here, and that may indeed net some cool stuff. It won't be night and day, however, in terms of performance. If these new cards offer even 50% more performance/$, I will literally eat my shirt. Given how much Maxwell chips have dropped in price, I'm thinking maybe we see 50% performance hikes at 30% price hikes, not counting deals or MIRs. So maybe we get a GTX 1070, for example, and it's 50% faster than a 970, but $399 (and maybe more if yields/release numbers aren't high; companies will price high at launch if there's not an abundance). Right now a 970 is $290 with a free $60 game, so effectively $230 if you're remotely into the game. Yes, that kind of Pascal pricing would eat into the 980/980Ti appeal if it happens, but those cards are frankly a tad overpriced anyway (not that it stopped me from buying one last year), and not by too much when you consider the aggressive pricing and game bundling Nvidia is using to attract customers who know that Pascal is coming. There's no doubt the OP would get some performance out of waiting, and I never denied that. Is it enough to warrant not having a GPU for an indeterminate number of months (4, optimistically)? I can't make that judgment, but I do think we should temper our expectations here to something below "it will literally be the second coming".
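To put rough numbers on that, here's the perf-per-dollar math using only the hypothetical figures above; nothing here is a real Pascal spec or price.

```python
# Hypothetical perf-per-dollar comparison using only the numbers speculated
# above; none of these are real Pascal specs or prices.
cards = {
    "GTX 970 at $290": (1.00, 290),
    "GTX 970 at $230 (counting the bundled game)": (1.00, 230),
    "hypothetical '1070': 50% faster at $399": (1.50, 399),
}

for name, (perf, price) in cards.items():
    print(f"{name}: {perf / price * 1000:.2f} relative perf per $1000")

# Under these assumptions the speculative card is only ~9% better perf/$ than
# a $290 970, and worse than the 970 once the bundled game is counted.
```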

Edited by Catamount, 01 March 2016 - 01:42 PM.


#32 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 01 March 2016 - 01:37 PM

I forget where I read it, but Intel is mostly worried 'bout power consumption, as it's designing mobiles before desktops …

#33 Kshat

    Member

  • Overlord
  • 1,229 posts

Posted 01 March 2016 - 03:08 PM

Wait three months and get one of the new 14nm GPUs. It will be the biggest jump in graphics technology since the release of Kepler/Tahiti-Pitcairn. It's not worth dumping $300 or more into soon-to-be-obsolete GPU technology if your card is still alive.
You'll get more for your money then than you would now, and it'll keep your games running for the next several years.

Btw, AMD will hold a Reddit AMA this week and some other event regarding GPU tech. Worst case: they only talk about VR and dual-Fiji. Best case: Polaris info incoming.

#34 Kshat

    Member

  • Overlord
  • 1,229 posts

Posted 01 March 2016 - 03:12 PM

Goose, on 01 March 2016 - 01:37 PM, said:

I forget where I read it, but Intel is mostly worried 'bout power consumption, as it's designing mobiles before desktops …


They do. At least when it comes to consumer hardware. Their Skylake mobile CPUs (6xxxHQ etc.) are mostly clocked-down desktop CPUs, which allows for lower voltage and power targets. On the big gaming desktop replacements you're able to overclock them right back to desktop speeds.
And socket 2011 isn't a real solution, since Broadwell-E will only be the 14nm shrink. New chipsets, new interconnects etc. are scheduled for 2017 at best.

#35 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 02 March 2016 - 01:09 AM

Oh, Intel definitely pays a lot of attention to power efficiency, but we did eventually get SB-sized IB chips to convert that efficiency into performance, right along with the FinFET gains (see how good a parallel IB is?). Critically, though: those chips were vastly more expensive than SB dies of the same size, because the smaller process led to smaller transistors but not really cheaper ones (and FinFETs are power efficient, but again, not cheap). I think it's fully reasonable in light of computing history to assume Pascal/Polaris will ultimately be bottlenecked by transistors/$, not transistors/watt or transistors/mm^2. Could I be wrong? Sure. And tomorrow the invisible pink unicorn could descend from the sky to give us all free candy. I think, however, I can reasonably argue against either being especially likely.

#36 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 02 March 2016 - 08:35 AM

Hmm...

Nvidia 500 to 600 jumped one fab node and put 15% more transistors on a die 56% of the size.
Nvidia 500 to 700 jumped one fab node and put roughly 2.4x the transistors (about 135% more) on a die 8% bigger.
The Titan Black was between 2x and 6x faster than the GTX 580, depending on the benchmark.
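If anyone wants to check those ratios, here's the arithmetic using the commonly cited transistor counts and die sizes for GF110, GK104, GK110, and GM200; treat the exact figures as approximate.

```python
# Commonly cited figures (approximate): transistors in billions, die area in mm^2.
chips = {
    "GTX 580 (GF110, 40nm)": (3.0, 520),
    "GTX 680 (GK104, 28nm)": (3.54, 294),
    "780Ti / Titan Black (GK110, 28nm)": (7.1, 561),
    "Titan X (GM200, 28nm)": (8.0, 601),
}

def compare(a, b):
    (ta, da), (tb, db) = chips[a], chips[b]
    print(f"{a} -> {b}: {tb / ta:.2f}x the transistors on {db / da:.0%} of the die area")

compare("GTX 580 (GF110, 40nm)", "GTX 680 (GK104, 28nm)")
compare("GTX 580 (GF110, 40nm)", "780Ti / Titan Black (GK110, 28nm)")
compare("780Ti / Titan Black (GK110, 28nm)", "Titan X (GM200, 28nm)")
```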

The thing to note here is that the 780Ti was the same chip as the Titan Black. With the 700 series the price points also changed a bit: the mid-range card prices stayed about the same, but the higher-end cards all inflated (570 to 770 was $50 more, 580 to 780 was $150 more). They decided that with a premium product that can generally outperform, consumers would just willingly pay more, so $650 is now apparently the "reasonable" top-tier price for a non-Titan. Going back to the 780Ti, which was a premium product on top of the already premium high-end card, an extra $50 beyond that price tier isn't outside the now-"normal" realm.

This is an example of jumping one fab node and getting double the performance from the same size die. Of course there were other factors as well, such as improved GDDR5 and architecture improvements.

The Titan X fit 12% more transistors on a die 8% larger than the Titan while keeping the same TDP, which is neat. The performance difference is about 10-25% depending on the game. More architecture improvements at work.

So I think waiting for the next generation for a few months might benefit OP with a good amount more performance for the same price range. Like I said, probably not literally double, but definitely a fair bit beyond what he'd get right now.

#37 el piromaniaco

    Member

  • Bad Company
  • 959 posts
  • Location: Vienna

Posted 02 March 2016 - 09:01 AM

We've had this "the next card generation will be twice as fast" game since I started building PCs and upgrading gfx cards (about 20 years now?).
There have been doubling steps, very early on, when 3D accelerator cards (separate add-in cards) became available. I don't expect such huge steps within the next few years.
The 980 is a good card; I can run MWO with my 970 at 1920x1080 with everything maxed out and get framerates between 50 and 100.

So you got a nice upgrade there, enjoy your new quality of gaming.

Greetings

el piro

#38 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 05 March 2016 - 07:32 AM

I'd wait on the new series, not to buy one, but for the price reductions on the 900 series.

Unless you're trying to power 4K or triple monitors, it's really not worth it right now, at least until we see the true effects of DX12 games.


#39 DarkBazerker

    Member

  • The God
  • 281 posts
  • Twitch: Link
  • Location: Waffle House

Posted 06 March 2016 - 01:34 AM

Out of those three I would say go with the 390. The 390 for the most part tends to pull ahead of the 970 and sometimes ties with it. And with a mild overclock you'll match the performance of the 390X, which tends to give the 980 a run for its money. The only downside here is that with a 500W PSU, you might be pushing your power limit with the 390.
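Rough back-of-the-envelope on that PSU concern, assuming AMD's ~275W typical board power figure for the 390 and a stock quad-core CPU; all of these numbers are assumptions, not measurements.

```python
# Rough worst-case draw estimate; every number here is an assumption.
components_watts = {
    "R9 390 (typical board power)": 275,
    "quad-core CPU under load": 100,
    "motherboard, RAM, drives, fans": 60,
}

psu_watts = 500
total = sum(components_watts.values())
headroom = psu_watts - total
print(f"estimated load: {total} W, headroom on a {psu_watts} W PSU: "
      f"{headroom} W ({headroom / psu_watts:.0%})")
# ~435 W estimated leaves only ~65 W (13%) of headroom, less still with any
# overclocking, which is why a 500 W unit is cutting it close.
```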

#40 AlphaToaster

    Member

  • 839 posts
  • Location: United States

Posted 22 March 2016 - 03:42 PM

I recently purchased an EVGA GTX 970 4GB from Best Buy after a long conversation with the salesperson there. He mentioned the R9 390 is going to outperform the GTX 970 by a good 30% or so with DX12.

I picked up the 970 because it came with The Division for free and I am not on DX12 yet. I am also an EVGA fan because I have had good luck with their warranty support, but I seriously looked at the R9 390.




