New GPU Time... 390 Or 970. 980?
#21
Posted 24 February 2016 - 10:39 AM
#22
Posted 25 February 2016 - 08:58 AM
I'll look for the MFAA option but it sounds like that may not be available...
Oh yeah, thanks a lot Goose, Catamount and others!
Edited by BigBadVlad, 25 February 2016 - 09:00 AM.
#23
Posted 25 February 2016 - 09:18 AM
Look: TXAA 2x + MFAA loads me up about as much as three "bumps" of DSR plus PostAA and SMAA via ReShade. I'm at 1800x1440 @ 70Hz. The trade-off seems to be between missing footprints in the MSAA-related modes and putting up with a lot of swimming otherwise...
Oh: Don't forget the Sharpness slider in DSR.
#24
Posted 27 February 2016 - 08:57 AM
Graphics cards will be making the jump from 28nm to ~14nm-class nodes this year. That means more transistors per mm², which means better performance.
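As a rough back-of-the-envelope illustration (a sketch only: it assumes density scales with the inverse square of the feature size, which real processes and their marketing-driven node names only approximate):

# Idealized transistor-density gain from a node shrink.
# Real-world scaling falls short of this, and "14nm"/"16nm" labels
# are partly marketing, so treat these numbers as upper bounds.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal transistors-per-mm^2 multiplier going from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(28, 14))  # 4.0x in theory
print(ideal_density_gain(28, 16))  # ~3.06x for a "16nm" label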
Likely they will all become fully DX12 compliant at the same time, with the upper tier switching to HBM2 and the mid tier likely moving to GDDR5X memory.
It makes zero sense to invest now, so don't buy a god-tier graphics card.
It's the first time in four years that graphics card manufacturers are changing the process node, mainly due to TSMC's botched 20nm process; the significant performance increases we did get over that span came only from very rigorous design optimisations.
I reluctantly bought an AMD 380 in October 2015 since my HD 6870 fried. Very reluctantly.
Edited by dwwolf, 27 February 2016 - 09:00 AM.
#25
Posted 27 February 2016 - 01:26 PM
Moreover, we don't know when these cards are coming out. The fact that we don't even have a release date suggests it's not going to be tomorrow, the day after that, the week after that, or the next few months after that. H2 sometime? Christmas? 2017 if a delay hits or yields on the new process are low and we get a paper launch? Who knows?
Given what we do, and more important what we don't know, I'm really not inclined to criticize buying right now. If these cards were coming out next week or next month I might feel differently, but they aren't.
#26
Posted 27 February 2016 - 08:47 PM
Catamount, on 27 February 2016 - 01:26 PM, said:
Given what we do, and more important what we don't know, I'm really not inclined to criticize buying right now. If these cards were coming out next week or next month I might feel differently, but they aren't.
From all that I have read, both AMD and NVIDIA are planning releases in late Q2 2016 or Q3 2016 -- I'm personally going with a late Q2 release (June). Most likely (at least for NVIDIA) it will be their successor to the Titan X -- Titan V as the possible name?
Vulkan's steady adoption (including by NVIDIA) makes for some interesting possibilities in lots of future games. Possibly more (and better) use of multiple cores/threads?
Edited by Rykiel, 27 February 2016 - 09:08 PM.
#27
Posted 28 February 2016 - 01:25 AM
This smaller process node gives them a huge boost in transistor count and heat/power budget.
That enables them to check off the DX12 feature levels (neither AMD nor NVIDIA currently supports all the options), and it also gives them room to work on their respective cards' weak points (AMD being weaker in tessellation, for instance, and NVIDIA in async shader ops) without needing to compromise the rest of the chip design.
Memory tech upgrades will also enable higher throughput for the mid- and high-end cards (HBM2 and GDDR5X).
Will it be a 100% overall increase? I don't think so. But it should be more significant than anything we have seen in a long time.
Edited by dwwolf, 28 February 2016 - 01:31 AM.
#28
Posted 29 February 2016 - 06:16 PM
Even as Nvidia has loosened up on voltage control a bit, their boards haven't gotten any better at taking it. Increasing voltage on Maxwell is a dangerous game.
Intel has found power and voltage limitations becoming steep as well: yes, their chips get more power efficient, but they also tolerate less power. Intel hasn't actually gotten much net performance out of their processors from die shrinks. In fact, you could argue they've gotten no net performance from them, because new series without a die shrink have netted as much performance as those with one, typically just with fewer power-draw benefits. Sandy Bridge won over the similarly 32nm Westmere due to an architectural change, Haswell (no shrink) gained as much performance as Ivy Bridge (a shrink) did, etc.
If one looks back on GPUs to see where most of the performance increase came from in past years, before the slowdown occurred, it wasn't from smaller processes. It was from TDP increases. Yes, smaller processes, architectural changes, etc. obviously played a role, but once the TDP increases stopped, once we hit the effective limits of dual-slot air coolers, the magical year-over-year doubling ceased overnight.
28nm was a comfort zone. Nvidia themselves said years ago that smaller processes were netting diminishing returns, or even resulting in more expensive transistors (you can't pack in more transistors for a given cost if each transistor costs more, even if they physically fit on the chip), and once they tried to move to 20nm, huge yield problems occurred. It seems CPUs and GPUs started suffering some of the same problems at about the same changeover in size. I'm sure 16nm will be hugely beneficial to certain things. It means laptop chips might finally be equal to their desktop counterparts again for the first time in 15 years; we've already been closing that gap since it opened wide with the aforementioned upwards TDP race. It means GPUs won't require such beefy power supplies or cooling, may fit into smaller form factors more easily, and so on. FinFETs will help even more.
Magical performance increases are another matter, and not something I think should be predicted at this stage, let alone with any certainty. Smaller processes just haven't been kind to tech companies of late.
Edited by Catamount, 29 February 2016 - 06:17 PM.
#29
Posted 29 February 2016 - 06:21 PM
Games have only just now begun to tiptoe into DX12, which is to say they've only barely begun to meander into featuresets that cards from 2009 didn't support (I hadn't seen a DX11 game that, say, a 5970 couldn't run well until last year, and even then not for DX11 compliance reasons).
#30
Posted 01 March 2016 - 06:07 AM
Catamount, on 29 February 2016 - 06:16 PM, said:
Yes, but you also seem to be forgetting that Intel has dedicated all of the extra space gained from die shrinks to other things that aren't necessarily compute-related, like stronger integrated graphics and moving components traditionally found on the motherboard onto the CPU package. Nvidia and AMD won't be doing a whole lot of that; their extra die space is going directly to horsepower.
They are estimating that they can more than double the transistor count with this shrink, from 8 billion to 17 billion. They are also moving from GDDR5 to GDDR5X and HBM 2.0. And it isn't illogical to expect small architecture improvements as well. We should see a decent bump in performance, though it might be reserved mostly for the high-end cards.
#31
Posted 01 March 2016 - 01:34 PM
When Intel moved from Sandy Bridge to Ivy Bridge, they did hike the overall transistor count (and it most certainly didn't all go to the GPU), but more importantly they shrunk the chip and the TDP. That's important because it shows that Intel could have released a bigger, more powerful chip; what they couldn't do was release such a chip at the same price point. We'd later see bigger chips in the Ivy Bridge-E family, but at considerable price premiums.
This is exactly what Nvidia has been noting, something that came to the forefront of discussions when 20nm fell apart at the seams:
http://www.extremete...ially-worthless
The key advancement in smaller fabrication processes isn't just that you can technically fit more transistors, but that you can fit cheaper transistors. That is no longer the case. Power advancements may enable a 17 billion transistor Pascal GPU, but that's from the rumor mills alone as far as I'm aware, and let's be clear about what this will and won't be. It almost certainly won't be a 17 billion transistor GPU for the price of an 8 billion transistor Maxwell. It sounds like most or all of that is going into re-adding real compute performance to the cards, not boosting gaming performance, so basically, what we're being told is that there's a professional GPU coming with 17 billion transistors: a Titan that'll be meant more for modeling or GPGPU than gaming, or maybe a Quadro.
Unless our OP intends to have almost twice the cash to blow when Pascal releases (I have no doubt they'll squeeze out some pricing improvement per transistor), he'll no more be able to afford a 17 billion transistor Pascal than he can afford two 8 billion transistor Maxwell GPUs today, and it doesn't sound like he's in the market for a GPGPU-focused card either.
In many ways, I think the FinFETs will be the better advancement here, and that may indeed net some cool stuff. It won't be night and day, however, in terms of performance. If these new cards offer even 50% more performance/$, I will literally eat my shirt.

Given how much Maxwell chips have dropped in price, I'm thinking maybe we see 50% performance hikes at 30% price hikes, not counting deals or MIRs. Maybe we get a GTX 1070, for example, and it's 50% faster than a 970, but $399 (and maybe more if yields/release numbers aren't high; companies price high at launch when there's no abundance). Right now a 970 is $290 with a free $60 game, so potentially $230 if you're remotely into the game. Yes, that kind of Pascal pricing would eat into the 980/980 Ti's appeal, but those cards are frankly a tad overpriced anyway (not that it stopped me from buying one last year), especially considering the aggressive pricing and game bundling Nvidia is using to attract customers who know Pascal is coming.

There's no doubt the OP would get some performance out of waiting, and I never denied that. Is it enough to warrant not having a GPU for an indeterminate number of months (four, optimistically)? I can't make that judgment, but I do think we should temper our expectations here to something below "it will literally be the second coming".
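To put rough numbers on that perf-per-dollar hedge, here's a quick sketch using the speculative figures above (the $399 price and 50% uplift for a hypothetical "1070" are guesses, and the $230 effective price assumes you actually want the bundled game):

# Perf/$ sanity check with the speculative numbers from this post.
# Baseline: GTX 970 = 1.0 performance units.

def perf_per_dollar(perf: float, price: float) -> float:
    return perf / price

p970_sticker = perf_per_dollar(1.0, 290)  # 970 at $290
p970_bundle = perf_per_dollar(1.0, 230)   # 970 if you value the $60 game
p1070_guess = perf_per_dollar(1.5, 399)   # hypothetical Pascal card

print(f"970 @ $290:    {p970_sticker:.5f} perf/$")  # ~0.00345
print(f"970 @ $230:    {p970_bundle:.5f} perf/$")   # ~0.00435
print(f"'1070' @ $399: {p1070_guess:.5f} perf/$")   # ~0.00376

On those guesses the new card beats the 970 at sticker price but loses to the bundle-adjusted one, which is the whole "temper your expectations" point.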
Edited by Catamount, 01 March 2016 - 01:42 PM.
#32
Posted 01 March 2016 - 01:37 PM
#33
Posted 01 March 2016 - 03:08 PM
You'll get more for your money then than you would now, keeping your games running well for the next several years.
By the way, AMD will hold a Reddit AMA this week, plus some other event regarding GPU tech. Worst case, they're only talking about VR and dual-Fiji. Best case, Polaris info incoming.
#34
Posted 01 March 2016 - 03:12 PM
Goose, on 01 March 2016 - 01:37 PM, said:
They do. At least when it comes to consumer hardware. Their Skylake mobile CPUs (6xxxHQ etc.) are mostly clocked-down desktop CPUs, which allows for lower voltage and power targets. On the big gaming desktop replacements you're able to overclock them right back to desktop speeds.
And Socket 2011 isn't a real solution, since Broadwell-E will only be the 14nm shrink. New chipsets, new interconnects, etc. are scheduled for 2017 at best.
#35
Posted 02 March 2016 - 01:09 AM
#36
Posted 02 March 2016 - 08:35 AM
Nvidia 500 to 600 jumped one fab node and put 15% more transistors on a die 56% of the size.
Nvidia 500 to 700 jumped one fab node and put 216% more transistors on a die 8% bigger.
Compared to the GTX 580, the Titan Black was between 2x and 6x faster depending on the benchmark.
Note that the 780 Ti was the same chip as the Titan Black. The other thing about the 700 series is that the price points changed a bit. The mid-range card prices stayed about the same, but the higher-end cards all inflated (570 to 770 was $50 more, 580 to 780 was $150 more). Nvidia decided that, with a premium product that generally outperforms, consumers would just willingly pay more. So $650 is now apparently the 'reasonable' top-tier price for a non-Titan. Going back to the 780 Ti, which was a premium product on top of the already premium high-end card, an extra $50 beyond that price tier isn't outside the now 'normal' realm.
This is an example of jumping one fab node and getting double the performance from the same size die. Of course there were other factors as well, such as improved GDDR5 and architecture improvements.
The Titan X fit 12% more transistors on a die 8% larger than the Titan while keeping the same TDP, which is neat. The performance difference is about 10-25% depending on the game. More architecture improvements at work.
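For anyone who wants to sanity-check those comparisons, here's a quick sketch of the density multipliers the quoted figures imply (taking the post's percentages at face value; they aren't official die specs):

# Implied transistors-per-mm^2 multipliers from the figures quoted above.

def density_ratio(transistor_mult: float, area_mult: float) -> float:
    """Density multiplier implied by transistor and die-area changes."""
    return transistor_mult / area_mult

print(density_ratio(1.15, 0.56))  # 500 -> 600: ~2.05x denser (the node jump)
print(density_ratio(3.16, 1.08))  # 500 -> 700: ~2.93x denser as quoted
print(density_ratio(1.12, 1.08))  # Titan -> Titan X: ~1.04x, same 28nm node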
So I think waiting a few months for the next generation might get the OP a good deal more performance in the same price range. Like I said, probably not literally double, but definitely a fair bit beyond what he'd get right now.
#37
Posted 02 March 2016 - 09:01 AM
There were doubling steps very early on, when dedicated 3D accelerator cards (separate add-in cards) became available. I don't expect such huge steps within the next few years.
The 980 is a good card; I can run MWO with my 970 at 1920x1080 with everything maxed out and get framerates between 50 and 100.
So you got a nice upgrade there, enjoy your new quality of gaming.
Greetings
el piro
#38
Posted 05 March 2016 - 07:32 AM
Unless you're trying to power 4K or triple monitors, it's really not worth it right now, at least until we see the true effects of DX12 games.
#39
Posted 06 March 2016 - 01:34 AM
#40
Posted 22 March 2016 - 03:42 PM
I picked up the 970 because it came with The Division for free and I am not on DX12 yet. I am also an eVGA fan because I have had good luck with warranty support from them, but I seriously looked at the R9 390.