
Feedback On Potential Rig


65 replies to this topic

#41 Kuritaclan

    Member

  • Ace Of Spades
  • 1,838 posts
  • Location: Germany

Posted 04 March 2015 - 11:25 AM

DustySkunk, on 04 March 2015 - 10:21 AM, said:



I understand that I could get away with a Gaming 3 motherboard, but I would like the option to upgrade the CPU in the future without having to replace the motherboard as well. Do you think this is a bad idea? Should I stick with the Gaming 3 over the Gaming 5?

In terms of the SSD, I will look at both the Transcend SSD370 256GB or Crucial MX100 256GB. Thanks for the information! I didn't realize that I was paying extra for speed I essentially don't need.

The Gaming 3 is also capable of holding the i7-4790K in a good spot with overclocking. However, the Gaming 5 clearly has the better VRM (better passive heatsinks, phase layout, PWM controller, more capacity). In the end, most known boards above $100 will get the i7-4790K to a good OC, since most of them have enough power stages/transistors to provide the current (and thus wattage) needed to push the CPU to high clocks. However, none of these boards delivers the cleanest voltage/current, since most of them save money in the VRM and don't have phases controlled 1:1 by the PWM controller. (If you want to look up the differences and dig deeper, see this thread: http://www.overclock...42/z97-vrm-info)

Feature-wise, the 3 and 5 are nearly the same: the NIC is a slightly more modern part, the sound is the same except for better capacitors, and the PCIe layout is identical.

My thought is that upgrading later is fine with the Gaming 3, and if you do, you could also pick up a Broadwell i7 K if needed or once it gets cheap. But with the upcoming DX12, the CPU should last a long time, 3+ years minimum, before you need to upgrade the base of the rig.

DV McKenna, on 04 March 2015 - 10:44 AM, said:


A guy I play games with had 3 Corsair SSDs; 2 have died after a few scant years.
By contrast, my Samsung SSD is into its 3rd or 4th year and has never had a hiccup.

*touches wood around the entire house*

On the SSD hiccup: the better numbers run into a barrier. Yes, if you want to buy the Samsung, go for it; it has pretty good benchmarks, but these days they don't translate into better real-world performance over most other products, since you already hit the limits of the controller and the SATA III port. The "Corsair" example can be caused by bad PSUs. It could also be a problem with the SSD chips themselves. Crucial, for example, uses MLC Micron chips; I haven't heard bad things about those. Samsung uses their own. Corsair used Toshiba MLC chips back then; maybe their cheapness caused the trouble, like the problems OCZ SSDs had. So all these claims about SSDs being good or bad are somewhat useless as long as you can't name the cause of the problem. I'd bet a dime that if those "friends" also used Corsair PSUs, those could be the assassins you should be looking for, rather than blaming the SSDs.

Edited by Kuritaclan, 04 March 2015 - 11:50 AM.


#42 DustySkunk

    Member

  • Wrath
  • 257 posts
  • Location: New England

Posted 04 March 2015 - 11:28 AM

Just updated the original post again. I have changed the motherboard to a Gigabyte GA-Z97-Gaming 3, the RAM to G.Skill Ripjaws X 2400 (2x4GB), the HDD to a Seagate 1TB at 7200RPM, the PSU to a SeaSonic S12G 550W 80+ Gold certified, and the OS to Windows 8.1 (OEM) 64-bit.

*whew*

Lots of changes!

My thoughts on the RAM: for about 20 USD more I can get RAM clocked 800MHz faster, so I went with the 2400 over the 1600.
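For what it's worth, the theoretical peak bandwidth difference works out like this (a back-of-the-envelope sketch; real-world gaming gains from faster DDR3 on Haswell are usually much smaller than the raw numbers suggest):

```python
# Theoretical peak bandwidth of dual-channel DDR3 (64-bit bus per channel).
def ddr3_bandwidth_gbs(transfer_rate_mts, channels=2, bus_bits=64):
    # MT/s * bytes per transfer * channels, converted to GB/s
    return transfer_rate_mts * 1e6 * (bus_bits // 8) * channels / 1e9

print(ddr3_bandwidth_gbs(1600))  # 25.6 GB/s
print(ddr3_bandwidth_gbs(2400))  # 38.4 GB/s, i.e. +50% theoretical headroom
```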

Still looking more in-depth at SSDs.

In terms of the case, there are already some suggestions in this thread which I need to research further. Other than what has already been suggested, any ideas? I'm also looking into monitors. I'm in love with the Dell U2414H; however, I'm currently leaning toward the BenQ RL2455HM, which has an incredible response time (1ms) and is closer to the price I'm looking to spend. Thoughts?

#43 Kuritaclan

    Member

  • Ace Of Spades
  • 1,838 posts
  • Location: Germany

Posted 04 March 2015 - 11:40 AM

doublepost

Edited by Kuritaclan, 04 March 2015 - 11:41 AM.


#44 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 04 March 2015 - 12:23 PM

Yeah, my buddy could have a bad PSU. That is definitely a possibility. He's one of those brand loyalists who refuses to buy things like Samsung SSDs and Seasonic PSUs when Corsair makes everything except the mobo and CPU in his rig.

As far as SSDs go, I could not find a failure rate for Samsung SSDs. I was able to find one for Corsair SSDs, though, which was at a little over 2%. Compared to HDD failure rates in the 3-6% range, that's not bad. However, I also found that Intel SSDs have a ridiculously low failure rate (0.6%) and a very high tolerance for stress testing (nearly double the writes that a Samsung 840 EVO can take). If you want an SSD that will last, apparently you should get an Intel one.

#45 Kuritaclan

    Member

  • Ace Of Spades
  • 1,838 posts
  • Location: Germany

Posted 04 March 2015 - 02:22 PM

Well, whether Corsair products add to the problem, we will never know. Corsair is a strong brand, but you only abandon a brand once it fails for you personally, even if it fails more often for others overall. ;) That's human nature.

There is no such survey done in Germany, but France has a reporting system that gives an indicator of return rates (it includes failed products, but not exclusively, so it is not a failure rate!).

SSDs:

Manufacturer:

- Samsung 0.24% (vs. 0.54%)
- Intel 0.27% (vs. 0.90%)
- Sandisk 0.29% (vs. 0.70%)
- Crucial 0.57% (vs. 1.08%)
- Kingston 0.63% (vs. 0.72%)
- Corsair 0.87% (vs. 0.91%)

The 5 models with the highest return rates:

- 3.27% Kingston SSDNow mS200 mSATA 120 GB
- 2.84% Corsair Force GS 240 GB
- 2.54% Corsair Neutron 64 GB
- 1.44% Corsair Force LS 120 GB
- 1.34% OCZ Agility 3 480 GB

Models with ~120 GB


- 1.44% Corsair Force LS 120 GB
- 0.70% Crucial M500
- 0.66% Kingston V300
- 0.66% Corsair Force GT
- 0.45% Sandisk Extreme II
- 0.37% Sandisk Ultra Plus
- 0.29% Kingston HyperX 3K
- 0.29% Samsung 840 Pro
- 0.07% Samsung 840 EVO
- 0.00% Corsair Neutron
- 0.00% Intel 530
- 0.00% Sandisk SSD

Models with 240/256 GB

- 2.84% Corsair Force GS
- 0.71% Kingston HyperX 3K
- 0.68% Crucial M500
- 0.64% Kingston V300
- 0.49% Sandisk Extreme II
- 0.34% Samsung 840 Pro
- 0.32% Sandisk Ultra Plus
- 0.26% Samsung 840 EVO
- 0.00% Corsair Neutron
- 0.00% Intel SSD 335
- 0.00% Intel SSD 530

Models with 480/512 GB

- 1.34% OCZ Agility 3 480 GB
- 0.27% Samsung 840 EVO
- 0.15% Crucial M500

http://www.hardware....posants-11.html

Or a rather old source, from 2012, would be:
- Intel 0.45% (against 1.73%)
- Samsung 0.48% (N/A)
- Corsair 1.05% (against 2.93%)
- Crucial 1.11% (against 0.82%)
- OCZ 5.02% (against 7.03%)
OCZ's problems hit hard in those days.

http://www.behardwar...ns-rates-7.html

If I'm correct, Intel's zero-percent return rate is a product of their service model: Intel products are not handled through the store but returned directly to Intel, so those numbers can't be evaluated here. Intel then gives an overall percentage to the media. Some other companies do this too, so they can't be compared to competitors. It is like EVGA for GPUs.

Edited by Kuritaclan, 04 March 2015 - 02:59 PM.


#46 gaIaxor

    Member

  • 24 posts

Posted 05 March 2015 - 04:11 AM

DV McKenna, on 04 March 2015 - 09:39 AM, said:


Please go away and learn what you are talking about: why the physical spec of the card you're talking about is a total non-issue unless you push resolutions that a single card at this end wasn't designed for, and what that 256-bit interface actually is compared to previous 256-bit interfaces. I'll give you a clue: the compression standard.


That is what you should do. It's only 224-bit max for only 3.5GB, not 256-bit. Even NVIDIA admits that they lied to their customers:
"Alben frankly admitted to us that Nvidia "screwed up" in communicating the GTX 970's specifications in the reviewer's guide it supplied to the press."[techreport.com]

By defending this kind of behavior (they even dare to call it a "feature") you are making a mistake. Next time NVIDIA may pull an even more brazen scam.

The resulting stuttering is not a "non issue":


www.scribd.com/doc/256406451/Nvidia-lawsuit-over-GTX-970

If you want to keep your GTX 970 for several years and play future games, that may become a bigger problem.

#47 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 05 March 2015 - 09:57 AM

gaIaxor, on 05 March 2015 - 04:11 AM, said:


That is what you should do. It's only 224-bit max for only 3.5GB, not 256-bit. Even NVIDIA admits that they lied to their customers:
"Alben frankly admitted to us that Nvidia "screwed up" in communicating the GTX 970's specifications in the reviewer's guide it supplied to the press."[techreport.com]

By defending this kind of behavior (they even dare to call it a "feature") you are making a mistake. Next time NVIDIA may pull an even more brazen scam.

The resulting stuttering is not a "non-issue":


www.scribd.com/doc/256406451/Nvidia-lawsuit-over-GTX-970

If you want to keep your GTX 970 for several years and play future games, that may become a bigger problem.


Sigh. I'm assuming you have too little technical knowledge to see past the original blown-out issue.

Going back to my last post, about the 256-bit bus, which is what it is (what you're mistakenly referring to is the 224GB/s memory bandwidth). Here is the up-to-date list of the 970 vs. 980 specs, and there are a couple of things you will notice: the bus width and the memory bandwidth.

[Image: GTX 970 vs. 980 specification table]


The 970 can provide as much memory bandwidth as the 980 when it accesses both pools of memory, which, unlike earlier reports suggested, it can do. So guess what that means: the card has, and can access, a total of 4GB VRAM.

Now the next aspect I will address is why that 256-bit interface isn't just a standard 256-bit one: much like the R9 285, it uses a different method of compression compared to older cards.
You'll remember that the 780 and 780 Ti came with 384-bit interfaces; well, thanks to the compression standard, the 256-bit interface reaches performance close to that old 384-bit interface.
You may wish to read up on both brands' usage of delta color compression.
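As a rough sketch of that argument (the 25% saving is my assumption for illustration; NVIDIA only claims "up to" roughly that much from third-generation delta color compression):

```python
# Peak memory bandwidth = effective memory clock (GHz) * bus width (bits) / 8.
def peak_bandwidth_gbs(effective_clock_ghz, bus_bits):
    return effective_clock_ghz * bus_bits / 8

gtx970 = peak_bandwidth_gbs(7.0, 256)  # 224.0 GB/s on paper
gtx780 = peak_bandwidth_gbs(6.0, 384)  # 288.0 GB/s on paper

# If delta color compression saves ~25% of the bytes moved (assumed average),
# the 256-bit bus behaves like a wider uncompressed one:
effective = gtx970 / (1 - 0.25)
print(gtx970, gtx780, round(effective, 1))  # 224.0 288.0 298.7
```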

Here's an overview of the GM204 chipset on the 970:

[Image: GM204 block diagram]

I think it's self-explanatory what the issue is here.

Accessing both memory sections brings the card a 4-6% hit in performance. That is it. Negligible.

Here is an actual end user who has done extensive testing on a number of games.
http://www.neogaf.co...3&postcount=700

Now if you just want a TL;DR, here it is.

Quote

in closing
there's an issue with the cards certainly. they are most definitely under utilizing the full 4GiB of memory. however, outside synthetics, to break the 3.5GiB mark i need to crank games up to ludicrous settings i'd never use when normally playing. all in all, the issue exists but it's certainly been blown out of proportion from what i've seen. i did these tests to see if i needed to return the card and just go right to the 980 series. i still might, i have until friday to decide, but this issue would certainly not have me trading the card in for a 290x or certainly not down to a 960 eww.


if anyone knows of any games i can test, outside shadow of mordor, that runs well while utilizing tons of VRAM, i'd love to hear about it. shadow of mordor has been tested to death post 3.5GiB and it certainly doesn't seem to have any major stuttering issues from any of the videos that have been posted.


Now, if you want to talk ethics, scams and total dumbo moments: yes, Nvidia has entirely put itself in a stupid, sticky position.
That lawsuit, however, won't get anywhere on the VRAM aspect; the card has 4GB and it can use the 4GB (albeit with a whopping 512MB at 1/7th the speed of the rest).

What will hold water is the mistake over the L2 cache and ROP counts.


Total TL;DR

The 970 is still the mainstream gamer-oriented card to have.

Edited by DV McKenna, 05 March 2015 - 10:05 AM.


#48 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 05 March 2015 - 11:13 AM

The 970 is okay right now if you're fine with a card that merely runs most games like it should. It's not a particularly good deal, because at 1440p, which I hope anyone with a $300 GPU has graduated to, the 290X is equal in performance, cheaper, and without these issues; but the 970 is not offensively bad, again, most of the time. Just accept that you can't play some games well if you want one that badly.

What concerns me is what it's going to be like in a year, or two years, or even three years (and GPU service life is definitely trending upwards, so that's not an unrealistic expectation). VRAM usage is going up, not down, and it's doing so at a not-so-languid pace.

I'm also concerned about one of the observations in that video about VRAM: the 970 seems to be doing work to avoid addressing that extra 512MB. It seems there are mechanisms in place to try to keep it at 3.5GB where other cards, even reasonably similar cards, would go to 4. So what's it doing to image quality to keep the frame buffer down? If that's what the 970 has to do to avoid savage stuttering, instead of just letting the frame buffer grow to whatever it would naturally reach (and indeed that video shows things get really bad when that mechanism's ability to hold it down is exceeded), then going back to the "down the road" question: what happens when a great deal of games exceed the 970's ability to stay under the 3.5GB limit?

The way the 970 behaves with VRAM, it's not a natural progression where usage bumps past 3.5GB and you get slow degradation. It's more like the 970 desperately tries to keep that from happening to hide its performance issues, then suddenly fails, the floodgates open, and your performance goes to lolwtfbbq. It's a very dichotomous behavior, which means that at some point you're going to get title after title that is simply unplayable without turning settings way below what the 970 should be capable of. In short, Shadow of Mordor is going to become the rule, not the exception; small upticks in settings are going to cause savage playability drops, and the threshold for that is just going to get lower and lower with newer games.

The 970 is an unremarkable deal even without these issues and potential concerns. It was a great deal in late 2014, but prices have changed, and there's just no reason at this point why I would recommend it, unless you're really concerned about power consumption, where the 970 commands a moderate advantage over the 290X (it's not as good as its TDP suggests, because the card flagrantly violates that TDP in normal use).

DV McKenna, on 05 March 2015 - 09:57 AM, said:

Accessing both memory sections brings the card a 4-6% hit in performance. That is it. Negligible. Here is an actual end user who has done extensive testing on a number of games. http://www.neogaf.co...3&postcount=700


That's only average framerates. Frametimes get fairly borked up and make a far more than negligible difference. Even the testing you cite shows that frame-time variance absolutely skyrockets. It has some hiccups nearing 3.5GB, but once Far Cry 4 inches up just a little, into ~3700MB territory, the frametimes just go to all hell.

Galaxor's video shows exactly the same thing. It seriously looked like the multi-GPU stutter people used to complain about back in the mid-to-late 2000s. These were at settings the card could otherwise easily play at with butter-smooth framerates. It's not some issue that only crops up if you push the card past its normal abilities.
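To illustrate why average FPS hides this (a toy example with made-up frame times, not measured data):

```python
# Two synthetic frame-time traces in milliseconds with identical averages.
smooth = [20.0] * 10               # a steady 50 fps
stutter = [12.0] * 8 + [52.0] * 2  # same mean, but with periodic spikes

for name, trace in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 / (sum(trace) / len(trace))
    print(f"{name}: avg {avg_fps:.0f} fps, worst frame {max(trace):.0f} ms")

# Both traces report 50 fps average, but the 52 ms hitches in the second
# (momentarily under 20 fps) are exactly what frametime graphs expose.
```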


Quote

Total TL;DR

The 970 is still the mainstream gamer-oriented card to have.


On release, I'd have totally agreed with you, but with today's prices why would Joe Consumer pay 110% of the price of a 290X to get a 970? Unless I gamed exclusively at 1080p, why would I even pay the same price for a 970 when I could just avoid these issues altogether with AMD's card?

Edited by Catamount, 05 March 2015 - 11:34 AM.


#49 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 05 March 2015 - 11:51 AM

Catamount, on 05 March 2015 - 11:13 AM, said:

The 970 is okay right now if you're fine with a card that merely runs most games like it should. It's not a particularly good deal, because at 1440p, which I hope anyone with a $300 GPU has graduated to, the 290X is equal in performance, cheaper, and without these issues; but the 970 is not offensively bad, again, most of the time. Just accept that you can't play some games well if you want one that badly.


That's a bit of a stretch. It plays any game perfectly fine at the level it is intended to; push past that level and you run into issues (like any card from either side).

Quote

What concerns me is what it's going to be like in a year, or two years, or even three years (and GPU service life is definitely trending upwards, so that's not an unrealistic expectation). VRAM usage is going up, not down, and it's doing so at a not-so-languid pace.


VRAM usage is definitely going up, but I wouldn't say in a hurry. Let's be honest: it's been 3-4 years, give or take, since we had cards with 1.2/1.5GB, and only now are we seeing double that come into effect at standard resolutions. I certainly don't foresee games requiring 4-6GB as standard any time in the next 2-3 years, though of course there will be more outlier games like SoM.

Quote

I'm also concerned about one of the observations in that video about VRAM: the 970 seems to be doing work to avoid addressing that extra 512MB. It seems there are mechanisms in place to try to keep it at 3.5GB where other cards, even reasonably similar cards, would go to 4. So what's it doing to image quality to keep the frame buffer down? If that's what the 970 has to do to avoid savage stuttering, instead of just letting the frame buffer grow to whatever it would naturally reach (and indeed that video shows things get really bad when that mechanism's ability to hold it down is exceeded), then going back to the "down the road" question: what happens when a great deal of games exceed the 970's ability to stay under the 3.5GB limit? The way the 970 behaves with VRAM, it's not a natural progression where usage bumps past 3.5GB and you get slow degradation. It's more like the 970 desperately tries to keep that from happening to hide its performance issues, then suddenly fails, the floodgates open, and your performance goes to lolwtfbbq. It's a very dichotomous behavior, which means that at some point you're going to get title after title that is simply unplayable without turning settings way below what the 970 should be capable of. In short, Shadow of Mordor is going to become the rule, not the exception; small upticks in settings are going to cause savage playability drops, and the threshold for that is just going to get lower and lower with newer games.


I think it's quite obviously a sensible load-balancing technique: once you go into that last 0.5GB you're using the overloaded L2 cache that still has 4 SM units to deal with. If the image and compression standards can keep that VRAM usage under control, that's not out of line.

Quote

The 970 is an unremarkable deal even without these issues and potential concerns. It was a great deal in late 2014, but prices have changed, and there's just no reason at this point why I would recommend it, unless you're really concerned about power consumption, where the 970 commands a moderate advantage over the 290X (it's not as good as its TDP suggests, because the card flagrantly violates that TDP in normal use).


What makes you suggest that, out of interest? From most of the stuff I have seen, it's a 57-65W difference, depending on the type of 970 used.

Quote

Galaxor's video shows exactly the same thing. It seriously looked like the multi-GPU stutter people used to complain about back in the mid-late 2000s.


On release, I'd have totally agreed with you, but with today's prices why would Joe Consumer pay 110% of the price of a 290X to get a 970? Unless I gamed exclusively at 1080p, why would I even pay the same price for a 970 when I could just avoid these issues altogether with AMD's card?


The 290X certainly holds an advantage at larger resolutions, but the 970 is not under any circumstances restricted to just 1080p. The problem with the 290X is still power consumption and the heat that comes with such a high TDP.
And certainly here in the UK the 290X is more expensive than the 970 (albeit not massively).

AMD's new lineup will change all this, and that's not surprising, but I certainly wouldn't advocate buying a near-EoL 290X.

Edited by DV McKenna, 05 March 2015 - 12:01 PM.


#50 Flapdrol

    Member

  • The 1 Percent
  • 1,986 posts

Posted 05 March 2015 - 12:04 PM

Catamount, on 05 March 2015 - 11:13 AM, said:

On release, I'd have totally agreed with you, but with today's prices why would Joe Consumer pay 110% of the price of a 290X to get a 970?


Conservative rasterization, or one of the other DX12/11.3 features?

Anyway, the 290 is great value.

#51 gaIaxor

    Member

  • 24 posts

Posted 05 March 2015 - 05:50 PM

DV McKenna, on 05 March 2015 - 09:57 AM, said:


Sigh. I'm assuming you have too little technical knowledge to see past the original blown-out issue.


Obviously that is your problem. You are still being fooled by NVIDIA's lies. It is 224-bit max, and that means 196GB/s max. 256-bit (i.e. 224GB/s) is physically impossible because the card is gimped.

"In the case of pure reads for example, GTX 970 can read the 3.5GB segment at 196GB/sec (7GHz * 7 ports * 32-bits), or it can read the 512MB segment at 28GB/sec, but it cannot read from both at once; it is a true XOR situation. The same is also true for writes, as only one segment can be written to at a time.

Unfortunately what this means is that accessing the weaker 512MB segment blocks access to the stronger 3.5GB segment if both memory operations are identical; or put another way, using the 512MB segment can harm the performance of the 3.5GB segment."
www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation/2

"3584 Mo @ 224bits + 512 Mo @ 32bits"
www.materiel.net/carte-graphique/msi-geforce-gtx-970-oc-4-go-111502.html
"3.5GB @ 196GB/s (224bit), 512MB @ 28GB/s (32bit)"
www.geizhals.de/eu/msi-gtx-970-gaming-4g-v316-001r-a1167950.html?hloc=at&hloc=de&pg=3

memory bench:
pics.computerbase.de/6/2/7/1/6/1.png
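Those per-segment figures follow directly from the port math in the AnandTech quote (a sketch; 7GHz is the effective GDDR5 data rate and each memory controller port is 32 bits wide):

```python
# GTX 970 memory layout per AnandTech: 7 of the 8 32-bit memory controller
# ports back the fast 3.5GB segment, the remaining port backs the 0.5GB one.
def segment_bandwidth_gbs(ports, effective_clock_ghz=7.0, port_bits=32):
    return effective_clock_ghz * ports * port_bits / 8

print(segment_bandwidth_gbs(7))  # 196.0 GB/s, the 3.5GB segment
print(segment_bandwidth_gbs(1))  # 28.0 GB/s, the 0.5GB segment
print(segment_bandwidth_gbs(8))  # 224.0 GB/s, the advertised full-bus figure
```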

#52 Kuritaclan

    Member

  • Ace Of Spades
  • 1,838 posts
  • Location: Germany

Posted 05 March 2015 - 10:42 PM

Quote

Unfortunately what this means is that accessing the weaker 512MB segment blocks access to the stronger 3.5GB segment if both memory operations are identical; or put another way, using the 512MB segment can harm the performance of the 3.5GB segment. "

Since the NV driver tries, for upcoming games, to block this 512MB segment and hold the card strictly under 3.5GB, you may never experience the stuttering you would get in the 3.5-4GB VRAM range. The 970 is a Kepler card with Maxwell gimmicks, and NV did a bad job giving it more than the 3GB it can handle as Kepler. But anyway, it only shows up at higher resolutions with AA and high-res textures. If you set details sensibly you can use DSR well, and/or stay at a 1080p/1440p resolution, and you won't notice any performance loss via VRAM. Also, the card will drop below 30 fps when you try to push it over 3.5GB VRAM, and that is usually not playable anyhow.
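A toy model of why spilling past 3.5GB hurts so much (my own simplification: it assumes reads to the two segments are fully serialized, the XOR situation from the AnandTech piece, and ignores caching and driver placement heuristics):

```python
# Segment bandwidths from the AnandTech breakdown (GB/s).
FAST_GBS, SLOW_GBS = 196.0, 28.0

def effective_read_gbs(slow_fraction):
    # Time to move one unit of data when slow_fraction of it sits in the
    # 0.5GB segment; serialized access means the transfer times simply add.
    time = (1.0 - slow_fraction) / FAST_GBS + slow_fraction / SLOW_GBS
    return 1.0 / time

print(round(effective_read_gbs(0.0), 1))    # 196.0: everything under 3.5GB
print(round(effective_read_gbs(0.125), 1))  # 112.0: a full 4GB in use
```

The real driver behavior is more nuanced, but the shape matches the frametime graphs: a cliff once the slow segment is touched, not a gentle slope.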

Edited by Kuritaclan, 05 March 2015 - 10:48 PM.


#53 Flapdrol

    Member

  • The 1 Percent
  • 1,986 posts

Posted 06 March 2015 - 02:48 AM

Kuritaclan, on 05 March 2015 - 10:42 PM, said:

The 970 is a Kepler card with Maxwell gimmicks, and NV did a bad job giving it more than the 3GB it can handle as Kepler.

???

#54 Kuritaclan

    Member

  • Ace Of Spades
  • 1,838 posts
  • Location: Germany

Posted 06 March 2015 - 02:59 AM

Maxwell is somewhat Kepler 2.0. Maxwell adds better energy saving under normal load; when overclocked, power consumption stays pretty much the same as Kepler. Under big loads a Maxwell 970 consumes as much as a 780/780 Ti and has nearly the same performance.

OK, it has some other features, like the new AA modes, but those features are not used right now by many who bought the Maxwell cards.

Maxwell would have handled 3GB well. However, to get to 4GB, NV engineers tweaked the Kepler architecture to drive more than 3GB, and as we see, it isn't well done. I think it was a bit of arrogance to give the 970 more than 3GB. With the 4GB they earned a shitstorm; if they had used only 3GB, I guess nobody would have complained. If you want 4GB, get a 980 or an AMD card; that's my standpoint. I'm OK with the 3.5GB, since the overall performance of the 970 isn't so good that you will need the extra 0.5-1GB of VRAM: by the time the card has such textures to handle, the framerate isn't acceptable anyway. The 970 is the follow-up to the 770, and that card had 2GB, so even 3GB would have been an improvement in the midrange.

Edited by Kuritaclan, 06 March 2015 - 03:48 AM.


#55 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 06 March 2015 - 05:40 AM

I've never seen a 970 use as much energy as a 780 Ti. In fact, in almost every review I can find it uses 50-60W less. You have to raise the power limit and overclock it somewhat significantly for it to use the same amount of energy as a 780 Ti, but then the performance is clearly in favor of the 970. This makes the 970 essentially one of two things: a card as fast as a 780 Ti that uses almost 20% less energy, or a card that performs 5-10% faster and uses almost the same amount of energy as a 780 Ti. Also, the fact that the 970 is significantly cheaper means it is still a decent pick.

Either way, I went with a 980 so it doesn't really matter to me :P

Edited by xWiredx, 06 March 2015 - 05:40 AM.


#56 Kuritaclan

    Member

  • Ace Of Spades
  • 1,838 posts
  • Location: Germany

Posted 06 March 2015 - 06:11 AM

Yeah, the price/performance is the nice hook for an NV card; compared to AMD cards it remains lackluster.

Well, if you don't unlock the power limit and raise it, the card stays at the low end in consumption. The stock cards abuse this fact a bit. Cards with custom coolers and a little OC come closer (and with a self-made BIOS or OC tools you can raise it even higher). It is also a question of which program is used:
http://www.tweakpc.d...phantom/s06.php
While in some games there is a bigger gap, in others it comes pretty close.

Edited by Kuritaclan, 06 March 2015 - 06:34 AM.


#57 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 06 March 2015 - 09:53 AM

gaIaxor, on 05 March 2015 - 05:50 PM, said:


Obviously that is your problem. You are still being fooled by NVIDIA's lies. It is 224-bit max, and that means 196GB/s max. 256-bit (i.e. 224GB/s) is physically impossible because the card is gimped.

"In the case of pure reads for example, GTX 970 can read the 3.5GB segment at 196GB/sec (7GHz * 7 ports * 32-bits), or it can read the 512MB segment at 28GB/sec, but it cannot read from both at once; it is a true XOR situation. The same is also true for writes, as only one segment can be written to at a time.

Unfortunately what this means is that accessing the weaker 512MB segment blocks access to the stronger 3.5GB segment if both memory operations are identical; or put another way, using the 512MB segment can harm the performance of the 3.5GB segment."
www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation/2

"3584 MB @ 224-bit + 512 MB @ 32-bit"
www.materiel.net/carte-graphique/msi-geforce-gtx-970-oc-4-go-111502.html
"3.5GB @ 196GB/s (224bit), 512MB @ 28GB/s (32bit)"
www.geizhals.de/eu/msi-gtx-970-gaming-4g-v316-001r-a1167950.html?hloc=at&hloc=de&pg=3

memory bench:
pics.computerbase.de/6/2/7/1/6/1.png


Right, there are a couple of things you missed from the Anandtech article you yourself linked, where they contradict themselves because they didn't do any practical testing.

Quote

Only after 3.5GB is requested – enough to fill the entire 3.5GB segment – does the 512MB segment get used, at which point NVIDIA attempts to place the least sensitive/important data in the slower segment.


The card can use BOTH segments at the same time. Period

If it could not, you wouldn't see this:
[Image: VRAM usage graph exceeding 3.5GB]

You only have to look at the crossbar picture of the card's architecture to see where the issue is!

Edited by DV McKenna, 06 March 2015 - 09:53 AM.


#58 gaIaxor

    Member

  • 24 posts

Posted 08 March 2015 - 05:26 PM

View PostDV McKenna, on 06 March 2015 - 09:53 AM, said:


Right there is a couple of things you missed, from your own article you linked from Anandtech where they contradict themselves because they didn't do any practical testing.


Read the article. Of course they did practical testing, and they do not contradict themselves.

They wrote:
"Unfortunately what this means is that accessing the weaker 512MB segment blocks access to the stronger 3.5GB segment if both memory operations are identical; or put another way, using the 512MB segment can harm the performance of the 3.5GB segment."

View PostDV McKenna, on 06 March 2015 - 09:53 AM, said:

The card can use BOTH segments at the same time.


It cannot. That is why the driver tries to avoid using more than 3.5GB. When that fails stuttering is the result. Just look at the frametimes.

#59 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 09 March 2015 - 01:02 AM

gaIaxor, on 08 March 2015 - 05:26 PM, said:


Read the article. Of course they did practical testing, and they do not contradict themselves.

They wrote:
"Unfortunately what this means is that accessing the weaker 512MB segment blocks access to the stronger 3.5GB segment if both memory operations are identical; or put another way, using the 512MB segment can harm the performance of the 3.5GB segment."



It cannot. That is why the driver tries to avoid using more than 3.5GB. When that fails, stuttering is the result. Just look at the frametimes.


I take it you're being deliberately difficult. There is a graph right in the post above showing VRAM usage above 3.5GB, and there are plenty more out there.

If it couldn't access both parts of the memory at the same time, that wouldn't be possible.

http://techreport.co...tly-as-intended

Quote

The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory. However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.

Edited by DV McKenna, 09 March 2015 - 01:09 AM.


#60 gaIaxor

    Member

  • 24 posts

Posted 13 March 2015 - 01:02 PM

DV McKenna, on 09 March 2015 - 01:02 AM, said:

I take it you're being deliberately difficult. There is a graph right in the post above showing VRAM usage above 3.5GB, and there are plenty more out there.


And there you just have to look at the frametimes. That explains why the driver tries to avoid using more than 3.5GB.

DV McKenna, on 09 March 2015 - 01:02 AM, said:

If it couldn't access both parts of the memory at the same time, that wouldn't be possible.


It can use both segments, but not at the same time:
"accessing the weaker 512MB segment blocks access to the stronger 3.5GB segment"




