
Terrible GTX 970 Experience With MWO


111 replies to this topic

#21 Flapdrol

    Member

  • The 1 Percent
  • 1,986 posts

Posted 30 November 2014 - 08:44 AM

Jesus DIED for me, on 29 November 2014 - 04:58 PM, said:

Cool. I am overclocking mine further as we speak, currently testing it at 3.6GHz with the FurMark burn-in software. Will have to check MWO performance after that's done. Thanks for the input.

FurMark is a GPU stress tool, right? If you're overclocking the CPU you should test with a CPU stress tool, like Prime95.

Make sure you keep an eye on the temperatures with something like RealTemp or CoreTemp; if you up the voltage, temps go up a lot, and Phenom IIs don't like heat.

There should be plenty of information online on how best to overclock Phenom IIs; I'd look around a bit.
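(For anyone without Prime95 handy: a few lines of Python can peg every core while you watch temps. A minimal sketch only -- nowhere near as thorough as Prime95's torture test, so treat it as a quick sanity check, not a stability proof.)

    import multiprocessing as mp

    def burn():
        # Tight floating-point loop to keep one core pegged at 100%.
        x = 1.0001
        while True:
            x = (x * x) % 1e9 + 1.0001

    if __name__ == "__main__":
        # One worker per logical core; stop the whole thing with Ctrl+C.
        workers = [mp.Process(target=burn, daemon=True) for _ in range(mp.cpu_count())]
        for w in workers:
            w.start()
        try:
            for w in workers:
                w.join()
        except KeyboardInterrupt:
            pass  # daemon workers exit with the main process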

#22 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 30 November 2014 - 10:00 AM

These days I just go all-in-one with OCCT for CPU testing. It has two tests (one seemingly analogous to Prime95 and a Linpack test), a built-in temperature monitor, and an adjustable temperature cutoff.

#23 Flapdrol

    Member

  • The 1 Percent
  • 1,986 posts

Posted 30 November 2014 - 11:20 AM

Yeah, being able to automatically abort at high temps is a nice feature.
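(The cutoff idea is easy to script too. A rough Python sketch using the third-party psutil package, which only exposes temperature sensors on Linux; the "k10temp"/"coretemp" sensor names, the burn.py filename, and the 62C limit are all assumptions to adjust for your own box -- on Windows, stick with OCCT's built-in cutoff.)

    import subprocess, sys, time
    import psutil  # third-party: pip install psutil

    TEMP_LIMIT_C = 62.0  # example cutoff; pick what your chip tolerates

    def hottest_core():
        # Sensor names vary: "k10temp" on AMD, "coretemp" on Intel (Linux only).
        sensors = psutil.sensors_temperatures()
        readings = sensors.get("k10temp") or sensors.get("coretemp") or []
        return max((r.current for r in readings), default=0.0)

    # Launch the stress workers (e.g. the burn loop above, saved as burn.py).
    stress = subprocess.Popen([sys.executable, "burn.py"])
    try:
        while stress.poll() is None:
            t = hottest_core()
            if t >= TEMP_LIMIT_C:
                print(f"Aborting: {t:.1f}C is over the {TEMP_LIMIT_C}C cutoff")
                stress.terminate()
                break
            time.sleep(2)
    finally:
        if stress.poll() is None:
            stress.terminate()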

#24 Therrinian

    Member

  • Veteran Founder
  • 197 posts
  • Location: Netherlands

Posted 30 November 2014 - 11:39 AM

Well, the higher you want the particle count to go, the higher you need to clock the CPU, it seems.

Resolution also matters: 1080p should be pretty easy, while 1440p and 4K demand progressively more.

#25 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 30 November 2014 - 02:06 PM

MWO loves CPU cycles. I don't know how many times this has been said, but it's repeated in almost every thread in the hardware section. While that Phenom II chip performs reasonably well, keep in mind that it is about 4 1/2 years old and is bound to show its age in games like MWO that demand top-notch CPU performance to have all the bells and whistles turned on.

#26 MechWarrior4172571

    Member

  • Bridesmaid
  • 251 posts

Posted 01 December 2014 - 08:31 AM

Odins Fist, on 29 November 2014 - 04:54 PM, said:

I still stand behind it: 3.2 GHz on a Phenom II chip isn't the greatest for MWO.
When I ran my Phenom II x6 1100T at stock clocks I absolutely saw a difference between 3.3 and 4.2 GHz.
On a newer Intel chip 3.2 GHz is no problem.
You are absolutely correct that the Phenom II is a loser in this game of MWO CPUs. I had overclocked my stock 3.2GHz six-core Phenom Thuban 1090T to 3.4 and used it a while, but just yesterday, prompted by people posting here about their CPU OC successes, I overclocked it further to 3.6 (that seems to be the max it will go without serious water cooling), and I am now getting barely playable (but not the smoothest) framerates--a solid minimum 10 fps increase from just a 200MHz CPU overclock. Hmm.. I tend to agree with the poster who said 4.5 GHz is the minimum for this game (he also has the same processor and got his 'smooth' 60fps in battles by going to 4.5GHz).

I think the Phenom in particular is terrible for MWO--don't get me wrong, other processors have different architectures, but Phenoms are the forerunners of today's AMD architecture, the first major redesign, so to speak. Phenoms don't have logical 'hyper-threading'--it's one hardware thread per core... HOW exactly they handle the math of MWO's calculations I don't know, but they look horrible at MWO performance per se. An old motherboard might add to the problem too, I would guess.

PV Phoenix, on 29 November 2014 - 09:27 PM, said:


As far as I'm aware, the only big defect with 970s is that a lot of them ship with coil whine but still run great. I got lucky, as mine doesn't whine and maxes out everything. I think I read somewhere that EVGA's cheapest model has some issues, though, so if you have one of those it might be worth your time to look it up.

For the most part a GTX 970 is great for gaming in general, but not by itself for unoptimized games like MWO. Getting a new CPU is your main option for increasing performance (or overclocking if you can; that helps a lot if you can do it)... or PGI decides to optimize the game. The former is your best bet, though; the latter will probably never happen.
Yeah, I've read about the coil whine. I am glad that I only got three one-second squeaks of it before it was gone, and mine overclocks fine to 1500/7900. It seems CryEngine and older CPUs are to blame for most of the trouble. The problem is that some of the blame lies with the new GTX 970 too. I WAS getting BETTER solid in-battle frame rates with an overclocked GTX 465, which would stay SOLID above 50 fps with the SAME setup and would not dip into the 40s at all during battles (also with everything on low settings), but this new GTX 970 does not seem optimized for CryEngine (comparing it to my experience with the GTX 465) and the frame rates are wild--they go from the 40s to the 70s IN BATTLE... really fluctuating. I am not too happy about it.. this is just the bare minimum playable experience right now with my Phenom x6 1090T overclocked @3.6GHz.

Flapdrol, on 30 November 2014 - 08:44 AM, said:

FurMark is a GPU stress tool, right? If you're overclocking the CPU you should test with a CPU stress tool, like Prime95.

Make sure you keep an eye on the temperatures with something like RealTemp or CoreTemp; if you up the voltage, temps go up a lot, and Phenom IIs don't like heat.

There should be plenty of information online on how best to overclock Phenom IIs; I'd look around a bit.
Well, that's true, but FurMark has a button that brings up a CPU stress tool, so one can run both CPU and GPU stress tests at the same time--which is what I did. I set a 5-core max burn-in for the CPU (leaving one core to handle the video side if need be) and also turned on FurMark's graphical burn-in test. I think you can get the FurMark CPU burner as a separate download, but in my case it came bundled as one program (the latest FurMark 1.15.0.0 download from Geeks3D.com).

Catamount, on 30 November 2014 - 10:00 AM, said:

These days I just go all-in-one with OCCT for CPU testing. It has two tests (one seemingly analogous to Prime95 and a Linpack test), a built-in temperature monitor, and an adjustable temperature cutoff.
Thanks for the info. I am going to download and try that. Sounds like a good program. http://www.ocbase.com/

xWiredx, on 30 November 2014 - 02:06 PM, said:

MWO loves CPU cycles. I don't know how many times this has been said, but it's repeated in almost every thread in the hardware section. While that Phenom II chip performs reasonably well, keep in mind that it is about 4 1/2 years old and is bound to show its age in games like MWO that demand top-notch CPU performance to have all the bells and whistles turned on.

Yes, it's a very 'old' processor by MWO standards--I came to realize that over the last couple of days. Before that I thought it was still great, but seeing how Skyrim with high-resolution textures (absolutely everything maxed out) sometimes drops from a solid 60fps into the 40s and 50s.. makes me wonder.

I think it's wrong that neither the CryEngine nor the MWO programmers came out with a simple offload or optimization of the calculations, for either CryEngine itself or for MWO in particular. These programs (CryEngine and MWO) have been on the market for a long time, and the 'recommended' PC specifications stated for the game are ridiculous in light of my experience. All they have to do is tweak whatever drives the load during battle and redirect it to GPU processing (my GPU runs at 30% usage in MWO! LOL LOL LOL and not all CPU cores are fully utilized either!), or optimize the code to run more efficiently specifically on Phenom II processors--that would cover all the other processors too, because it seems Phenom II CPUs are particularly affected by this menace.

Oh, and I reiterate that when I get 40ish fps in MWO, I get it with EVERY setting on low (both DirectX 9 and DirectX 11--same ugly results), and this only affects the IN BATTLE experience (otherwise it's really fast @ over 100fps). You can see the same slowdown when switching between the "Skills" and "Mechlab" tabs--the fps drops suddenly, and THAT is what you get in an actual battle with other people (not the solo "Testing Grounds" maps; I get over 100 fps in Testing Grounds no matter what I do). As soon as I am 'in' the battle, the frame rate drops from 90 to 40ish fps... fast enough for LRMing but not for quick person-to-person combat.
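(For what it's worth, the utilization numbers above are easy to log yourself. A small sketch, assuming an NVIDIA card with nvidia-smi on the PATH and the third-party psutil package; run it in the background during a match and a single maxed-out core stands out immediately.)

    import subprocess
    import psutil  # third-party: pip install psutil

    def gpu_util_percent():
        # nvidia-smi ships with the NVIDIA driver; this query prints a bare number.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"])
        return int(out.decode().strip().splitlines()[0])

    while True:
        # One line per second: overall GPU load plus per-core CPU load.
        cores = psutil.cpu_percent(interval=1.0, percpu=True)
        line = " ".join(f"{c:5.1f}" for c in cores)
        print(f"GPU {gpu_util_percent():3d}% | CPU per-core % {line}")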

Edited by Jesus DIED for me, 01 December 2014 - 08:41 AM.


#27 Odins Fist

    Member

  • Ace Of Spades
  • 3,111 posts
  • Location: The North

Posted 01 December 2014 - 11:53 AM

Jesus DIED for me, on 01 December 2014 - 08:31 AM, said:

You are absolutely correct that the Phenom II is a loser in this game of MWO CPUs. I had overclocked my stock 3.2GHz six-core Phenom Thuban 1090T to 3.4 and used it a while, but just yesterday, prompted by people posting here about their CPU OC successes, I overclocked it further to 3.6 (that seems to be the max it will go without serious water cooling), and I am now getting barely playable (but not the smoothest) framerates--a solid minimum 10 fps increase from just a 200MHz CPU overclock. Hmm.. I tend to agree with the poster who said 4.5 GHz is the minimum for this game


Phenom II x6 Thubans aren't losers at running MWO, but you will want to OC them a bit; heck, the AMD FX-series CPUs have to be OC'd to see the kind of performance you want out of those.

4.5 GHz on a Phenom II chip is a bad, bad, bad idea.. You will literally have to bump your voltage to 1.66 volts, and that is unsafe; that CPU will cook and be gone within 60 days of heavy use.

The accepted safe zone for the Phenom II x6 is 4.27 GHz.. 4.3 is the wall, and you likely won't see any real difference in performance past 4.27 GHz.

On any of the Phenom II x6 chips, a 4.1 GHz overclock is where you are going to want to be; you can get there without dangerous voltage, and you will see a decent bump in performance over 3.6 GHz.

Anyone telling you to OC a Phenom II x6 to 4.5 GHz is kinda crazy.
How do I know this..?? Three years on a Phenom II x6 1100T, 3.3GHz stock, OC'd to 4.2 GHz.


Edited by Odins Fist, 01 December 2014 - 12:16 PM.


#28 MechWarrior4172571

    Member

  • Bridesmaid
  • 251 posts

Posted 02 December 2014 - 04:01 PM

Odins Fist, on 01 December 2014 - 11:53 AM, said:

...

4.5 GHz on a Phenom II chip is a bad, bad, bad idea.. You will literally have to bump your voltage to 1.66 volts, and that is unsafe; that CPU will cook and be gone within 60 days of heavy use.

The accepted safe zone for the Phenom II x6 is 4.27 GHz.. 4.3 is the wall, and you likely won't see any real difference in performance past 4.27 GHz.

On any of the Phenom II x6 chips, a 4.1 GHz overclock is where you are going to want to be; you can get there without dangerous voltage, and you will see a decent bump in performance over 3.6 GHz.

Anyone telling you to OC a Phenom II x6 to 4.5 GHz is kinda crazy.
How do I know this..?? Three years on a Phenom II x6 1100T, 3.3GHz stock, OC'd to 4.2 GHz.

4.5 GHz is a lot, I agree. What I meant is that 4.5GHz would be 'just right' for minimum 'normal' smooth gameplay on Phenoms, IF it were normal and easily achievable. I would not dare to overclock a Phenom to that speed. I have just overclocked my Phenom 1090T (stock 3.2) to 3.8 (on air) and I am still getting lousy framerates in battles--40ish to 50ish with occasional quick dips into the 30s. Yet it sits at 190 fps on the loading screen, with a similarly perfect experience in the training grounds; not so in actual battles with other people. As soon as the battle starts (after the generic MWO screen flicker that indicates you are 'in'), the frame rate goes to 90 for me and then just sinks down to what I described earlier.

I have just discovered that MWO runs my six-core Phenom II 1090T Thuban with one core maxed out--hogging that one core to death--and SOOOO it follows that it wants a higher and higher frequency from that core in order to process the calculations faster (during battles with others). doh. Sure, I realize that Intel is fast at single-threaded processing, that it beats AMD there, and that Phenoms don't have hyper-threading, but... this is ridiculous. My GTX 970 runs MWO at 50% utilization and only one CPU core (out of six) is fully utilized (the others sit at 10% utilization or less). I am thinking of disabling a couple of cores on my six-core Phenom and using the extra headroom to overclock the processor further (on air) to 4.0 or so. In the BIOS I even set core loading to "slight", but MWO completely ignores that and hogs that one core as if there were no others to play with.. so lame.
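(If you want to test the "fewer, faster cores" idea without touching the BIOS, process affinity can be set from software. A sketch using the third-party psutil package; "MWOClient.exe" is a guess at the process name -- check Task Manager for the real one. Note this only restricts which cores the game may use; the scheduler, not the game, decides the spread.)

    import psutil  # third-party: pip install psutil

    GAME_EXE = "MWOClient.exe"  # assumed name; verify in Task Manager

    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == GAME_EXE:
            print("current affinity:", proc.cpu_affinity())
            # Restrict the game to three cores; the OS will schedule its
            # threads only there, much like disabling cores in the BIOS.
            proc.cpu_affinity([0, 1, 2])
            print("new affinity:", proc.cpu_affinity())
            break
    else:
        print(GAME_EXE, "is not running")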

#29 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 02 December 2014 - 04:14 PM

It seems like you and others do not properly understand the differences between the 4 1/2 year-old AMD chips you are unsatisfied with and the brand-new Intel chips many of us are very happy with in relation to MWO performance. Nehalem (the first Core i-series) was already faster than Deneb/Thuban (the Phenom II series). Since then, Intel has increased per-clock performance with Sandy Bridge, Ivy Bridge, and Haswell. AMD has not made any noticeable increase in per-clock performance since Thuban (Bulldozer was actually a decrease, and Piledriver was supposed to make up the difference, but I'm not sure it actually did).

Some quick benchmarking has already told us that an FX-series chip needs a 600-800MHz clock advantage to match the per-core performance of an Intel chip in the best-case scenarios. That means that to get the same performance as an Intel chip at 4.0GHz, an FX-series chip needs to be set somewhere between 4.6 and 4.8GHz.

Now that we've established that, let's establish a few more ground rules. When you play in the training grounds, you are the only person there. There is no network stack processing being done for 24 players, there is no extra animation or overhead from weapons going off and mechs moving around, etc. Testing performance in the training grounds is essentially useless.

Are you beginning to see why your performance is not what you're expecting?

#30 MechWarrior4172571

    Member

  • Bridesmaid
  • 251 posts

Posted 02 December 2014 - 04:35 PM

xWiredx, on 02 December 2014 - 04:14 PM, said:

It seems like you and others do not properly understand the differences between the 4 1/2 year-old AMD chips you are unsatisfied with and the brand-new Intel chips many of us are very happy with in relation to MWO performance. Nehalem (the first Core i-series) was already faster than Deneb/Thuban (the Phenom II series). Since then, Intel has increased per-clock performance with Sandy Bridge, Ivy Bridge, and Haswell. AMD has not made any noticeable increase in per-clock performance since Thuban (Bulldozer was actually a decrease, and Piledriver was supposed to make up the difference, but I'm not sure it actually did).

Some quick benchmarking has already told us that an FX-series chip needs a 600-800MHz clock advantage to match the per-core performance of an Intel chip in the best-case scenarios. That means that to get the same performance as an Intel chip at 4.0GHz, an FX-series chip needs to be set somewhere between 4.6 and 4.8GHz.

Now that we've established that, let's establish a few more ground rules. When you play in the training grounds, you are the only person there. There is no network stack processing being done for 24 players, there is no extra animation or overhead from weapons going off and mechs moving around, etc. Testing performance in the training grounds is essentially useless.

Are you beginning to see why your performance is not what you're expecting?


No, not really--I don't see how (per the recommended specifications) MWO is supposed to run badly on my setup: a 3.8 GHz six-core processor and a GTX 970 with everything set to low. Hm? One CPU core hogged by MWO and the GPU at 50% utilization. It sounds like you are trying to justify the lack of involvement from the MWO and CryEngine (and perhaps NVIDIA) programmers because of your investment in MWO, instead of approaching this issue with a technical perspective and a cool head. Bear in mind that CryEngine has been around for a long time, and MWO too. Add to this conundrum my personal experience: an overclocked GTX 465 played at a CONSISTENT 50ish fps (on low settings), while my new GTX 970 (in the same system, once again with everything on low video settings) performs at a FLUCTUATING frame rate that drops DOWN to 37 fps. What is WRONG with this picture?

#31 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 02 December 2014 - 06:05 PM

CryEngine 3 is relatively recent; it isn't the same engine as CryEngine 1, 2, etc. I think you're trying to justify whining about 4 1/2 year-old hardware not being up to snuff anymore.

#32 Odins Fist

    Member

  • Ace Of Spades
  • 3,111 posts
  • Location: The North

Posted 02 December 2014 - 07:57 PM

xWiredx, on 02 December 2014 - 04:14 PM, said:

"AMD has not made any noticeable increase in per-clock performance since Thuban"


And that's EXACTLY why I skipped the AMD FX-series CPUs and decided to go Intel.
The FX-series CPUs actually have LESS single-threaded performance than the Phenom II Thubans
(clock versus clock).

Was an AMD fan for years because of price; now I'm not.

#33 MechWarrior4172571

    Member

  • Bridesmaid
  • 251 posts

Posted 03 December 2014 - 01:27 AM

Therrinian, on 29 November 2014 - 02:00 PM, said:

I have upgraded to the GTX 970 SSC and ran the game at max settings at 2560x1440 resolution, and the FPS dropped into the 20s just as you described.

However, overclocking the CPU from 3.2 GHz to a solid 4.5 GHz fixed the problems and the FPS is now a stable 60.

A 4.5 GHz overclock seems to be a system requirement for this game.

Absolutely try to overclock the CPU if your core voltage and temperature permit.

I'm sure that will fix your problems if you can get it stable.



Thanks again for posting this info, Therrinian (about the GTX 970 dropping into the 20s fps, about overclocking your Thuban, and for the great suggestion to try overclocking my processor as well). I followed your advice and was only able to push my 1090T to 3.8GHz stable on air with all six cores; beyond that I had to make a hard choice and disable three of the six cores completely to gain the extra headroom needed to overclock it further on air to 4.0GHz. Now I have a three-core lol Phenom running stable (tested with the FurMark CPU burn-in plugin) at 4.0GHz, with the case fans still not needing a speed (and noise) boost (not yet, anyway; summer is not here, so the final performance remains to be tested).

Anyway, I now get a solid 30s and 40s fps, coming from an iffy 20s fps (all of this with the GTX 970). I WAS getting a stable 50s fps with my older overclocked GTX 465, and with this new GTX 970 the fps went down and started to fluctuate.. ugh. It's like something is not optimized in the NVIDIA drivers, the card, or both (most of the blame of course lies with MWO's optimization or lack thereof, in my opinion). I will note that the frames-per-second figures I gave for both cards are and were with everything set to "low" and "off" in the video settings, so no one can blame graphical stress for this; and my tests with the Afterburner hardware monitor show that the GTX 970 is only being utilized up to 50% of its capacity by MWO (in battle). That's my 2 cents, so to speak, and thanks again for your info, because it helped me comprehend this problem a bit and compare my experience with someone who had a very similar experience on very similar hardware.

Edited by Jesus DIED for me, 03 December 2014 - 01:33 AM.


#34 MechWarrior4172571

    Member

  • Bridesmaid
  • 251 posts

Posted 03 December 2014 - 01:51 AM

Odins Fist, on 02 December 2014 - 07:57 PM, said:



And that's EXACTLY why I skipped the AMD FX-series CPUs and decided to go Intel.
The FX-series CPUs actually have LESS single-threaded performance than the Phenom II Thubans
(clock versus clock).

Was an AMD fan for years because of price; now I'm not.


Well, the main reason is that AMD has been playing a catch-up game with Intel from the very start (for the most part). AMD did beat Intel to market with the first 1GHz CPU. What AMD was trying to catch up with was hyper-threading; to gain a foothold in that territory they had to completely redesign their architecture--Thuban was 'in-between' transitional silicon. In fact, the earlier four-core AMD CPUs were snappier. After the initial (hyper-threadless) Thuban release, the plan to implement something like hyper-threading was enacted--hence the FX series with its take on multithreading. Not much was gained, and why? Because AMD had acquired the Radeon graphics business (ATI), once again trying to catch up with Intel, which could have pulled a rabbit out of its hat and shown the world an APU that would have ruined AMD's business completely (I wonder if NVIDIA is even thinking of making an APU and getting into the CPU business? hmm). Right now AMD has its hands full with APU adoption, integration, and betterment--everybody is going mini and micro, and the big ol' CPUs are on the back burner. NVIDIA already showed us this when they started releasing mobile GPUs first and the desktop parts later, thus showing where the market (and profit) is going... AMD knows this well and is redirecting its resources in that direction as well, leaving hard-core PC enthusiasts to wonder if everything is all right.



#35 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 03 December 2014 - 03:22 PM

AMD going with clustered multithreading (i.e. Bulldozer/Piledriver) was the worst decision they ever made, and even they admit it; they are basing their new architecture on lessons learned from both the old Phenom II line and Intel's i-series line. But anyways...

The reason you're getting poor framerates on low is that you're CPU-bound, and varying graphical settings has no impact on that. Check out my old stock 3570K test with a 5850:

http://mwomercs.com/...ost__p__1833008

If I can't get more than 46fps minimums on a 3570K at any clock speed, then 37 is a perfectly acceptable minimum for your AMD chip. Back then, OCing much past 4GHz took me up near 60fps minimums on a 3570K, but that was pre-12-man, before performance took a dump. To be honest, I'm impressed that you're getting 37fps minimums with any AMD chip at any clock speed.

Edited by Catamount, 04 December 2014 - 08:41 AM.


#36 MechWarrior4172571

    Member

  • Bridesmaid
  • 251 posts

Posted 03 December 2014 - 07:14 PM

Catamount, on 03 December 2014 - 03:22 PM, said:

AMD going with clustered multithreading (i.e. Bulldozer/Piledriver) was the worst decision they ever made, and even they admit it; they are basing their new architecture on lessons learned from both the old Phenom II line and Intel's i-series line. But anyways...

The reason you're getting poor framerates on low is that you're CPU-bound, and varying graphical settings has no impact on that. Check out my old stock 3570K test with a 5850:

http://mwomercs.com/...ost__p__1833008

If I can't get more than 46fps minimums on a 3570K at any clock speed, then 37 is a perfectly acceptable minimum for your AMD chip. Back then, OCing much past 4GHz took me up near 60fps minimums on a 3570K, but that was pre-12-man, before performance took a dump. To be honest, I'm impressed that you're getting 37fps minimums with any AMD chip at any clock speed.



Yes, I finally figured it out as well--it is the load dumped on each player's CPU for all the other friendly players that is causing this major decrease in frame rates during play with other people. Another way to see that this is true: wait until all 11 of your teammates are killed in battle (or you, as part of that number) and watch the frame rate jump right back up to your max during the last guy's final moves. Why they would want each of us to (redundantly) calculate everything for all of the other 11 team members is hard to fathom: 1) position, 2) vector (in four dimensions), 3) hmm.. the color of each of the other 11 mechs' glass? 4) individual decals? 5) their fancy paint jobs? 6) the smiles on their happy i7 owners' faces as they sit in their cockpits? (Stretching a little here, but you get the point.) If we don't see those mechs visually, we shouldn't need to calculate all of that for each other, 12 times over on each of the CPUs. Basic positions and vectors should be sufficient.. am I wrong? Unless a bullet or a laser is flying in their direction. I am guessing about all of this, but I am pretty sure that that's what is going on. It's probably a CryEngine 3 feature that was poorly implemented, and nobody bothered to overcome the hurdle by cutting down on the items processed by each of the 12 players' computers for each of the 12 players' computers.. ugh, a tongue twister. (Note: this dates from the jump to 12-player teams; it was probably manageable enough before that.)

#37 skilly

    Rookie

  • The 1 Percent
  • 2 posts

Posted 05 January 2015 - 10:34 AM

Just to add to this topic, and maybe get some attention on the problem, as I have the feeling this might affect more users of new GPUs: I have nearly the same problem here.

I just added an ASUS Strix GTX 970 to my configuration (coming from a Palit GTX 580). Before the change I had a constant 50-60 FPS with low/medium details. No changes to the configuration (PC or game) at all except the GPU.

Result: FPS bounced between nearly 200 in CW (during the dropship phase) and 10 (!!!) during the actual drop/mech startup phase, then stabilized around 45. On quick turning, or when taking lots of missiles/PPCs etc., the FPS sometimes drops to 20-25, making it nearly unplayable. On non-CW maps the FPS is rather stable at a nearly acceptable 50-60 FPS, with occasional bounces down to 25 or up to 80. Changing the settings from mostly low/medium to all low did not change the behaviour at all (if anything it felt 2-5 frames worse), and on all very high the frames are also nearly unchanged from the values above.

I was nearly at the point of throwing the card into the dump and going back to the old one (as I first thought I had simply bought a lousy card). Then, testing it in other games (WoD, Far Cry 4, Battlefield 4, etc.), I got a constant 60/100 FPS without any dips on all settings ultra/very high. So, highly motivated, I reinstalled the whole system, flashed the BIOS, and overclocked the CPU from 3.2 to 4.7--no change. In my opinion this is clearly not a configuration/settings/tweak problem; there is something in the MWO code (especially in CW) that conflicts with new cards, and it needs to be fixed by the development team in order to keep this game alive with new hardware incoming.

Some data on the system I am using, just to be sure that nothing there is the actual bottleneck: 16 GB of fine RAM, an SSD for the operating system and the game, an i7 3930K @ 3.2 (currently running at 1.35 V at 4700 MHz; with and without the OC, the same lousy result +/- 10 FPS), an Asus Rampage IV motherboard, fired by a 750W PSU, many fans keeping everything at low temperature, and now a GTX 970.

#38 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 05 January 2015 - 10:55 AM

CW performance is crappy and there is a lot more that needs to be done before performance on those maps will be stable. Try testing in regular matches.

Also, keep your chip OCed. It's a Sandy Bridge chip, and even Haswell needs to be in the 4.1-4.3GHz range for top-tier performance in MWO. Since Haswell has a 5-20% per-clock advantage over Sandy Bridge depending on the application, you'll need that chip OCed well beyond 3.2GHz.

It really has nothing to do with "conflicting with new cards". If it did, every person with a 970 or 980 would be having the same issues, and the vast majority of us aren't. Also, if you didn't clear the shader cache using the repair tool after switching cards, you should. You also need to check the settings in the Nvidia control panel for the MWO executable.

Also take some time to read the other performance-related threads. No need to necro a month-old thread when you have a lot of bases to cover still.

#39 Therrinian

    Member

  • Veteran Founder
  • 197 posts
  • Location: Netherlands

Posted 05 January 2015 - 11:25 PM

Jesus DIED for me, you are mistaken: I am not running a Thuban at all. I am running an Intel Core i7 2600K.
This is probably why your safe clocks and performance still aren't where mine are.

Also, you have no need to run at all-minimum settings; there are plenty of settings that rely on your video card, and you can bump those up (quite probably to very high).

These include: shadows, post-processing, anti-aliasing, shading, environment, and level of detail.

One setting that is majorly CPU-bound is PARTICLES; keep this on low.

An older CPU is no reason you shouldnt enjoy at least some eye candy from your GTX 970.

Experiment a little; there's plenty of GPU headroom you can play with (see the user.cfg sketch below for one way to set this up).
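(One concrete way to experiment, if you'd rather not click through the menus each time: CryEngine titles generally read a user.cfg from the game folder. A sketch only -- the file location and whether MWO honors each of these CryEngine 3 sys_spec cvars are assumptions worth checking against a current settings guide. Values run from 1 = low to 4 = very high; Particles stays at 1 since that's the CPU-bound one, while the rest lean on the GTX 970.)

    sys_spec_Particles = 1
    sys_spec_Shadows = 3
    sys_spec_PostProcessing = 4
    sys_spec_Shading = 4
    sys_spec_ObjectDetail = 3
    sys_spec_Texture = 4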

#40 Exarch Levin

    Member

  • Moderate Giver
  • 118 posts

Posted 06 January 2015 - 12:18 AM

Quote

These include: shadows...environment and level of detail.

Is this true for MWO? Shadows, more specifically dynamic shadows, tax the CPU in other games, and with PGI's vague description of the shadows setting I'm not sure what each level brings to the table.

With environment and LOD, don't these increase the number of draw calls that have to be made--draw calls being what PGI says causes MWO's performance to be so abysmal?
http://mwomercs.com/...77#entry3458877




