
System Optimisation For AMD Chipsets


68 replies to this topic

#21 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 13 May 2014 - 01:38 AM

;) Nope. http://www.tomshardw...ew,3328-17.html

#22 Golrar

    Member

  • Survivor
  • 359 posts
  • Location: Chicago, IL

Posted 13 May 2014 - 02:36 AM

That story proves what my story says. Straight from the link you provided:

"the fact that AMD’s FX-8350 gets ranked below Intel’s Core i5-2550K indicates to us that the test isn’t fully taxing the Piledriver-based processor."

Except for the PCMark scores, which they admit are no longer a very good test due to their limitations, the FX-8350 comes close to most i7-3770K scores and sits consistently above the i5 scores. The i7-3770K still retails for $130 more than the FX-8350 at this time. From the same report, the FX gets better scores than even the i7 in some graphics apps, and is within 1 fps of the highest Intel score at the highest resolution in the games tested. Yes, the lower-resolution Skyrim results are clearly lower, but Skyrim also doesn't make use of many threads, which is what would benefit the FX; hence the higher scores for the Intel CPUs there.

Also remember that the FX is cheaper than all of the current top-of-the-line Intel processors. As I have stated before in other threads, Intel vs. AMD is mainly a preference at this time. They leapfrog, and lately Intel has been taking two leaps for AMD's one. Nobody can truly see the future; we can only make "maybe" and "what if" statements. The hope is that with widespread use of AMD CPUs, performance on those chips will improve as software is designed to take advantage of the architecture. Only time will tell.

Edit because posting a direct copy gave me all kinds of "color" coding errors.

Edited by Golrar, 13 May 2014 - 02:37 AM.


#23 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 13 May 2014 - 03:30 AM

Golrar, on 13 May 2014 - 02:36 AM, said:

That story proves what my story says. Straight from the link you provided:

Pathetic Cherry-pick is Pathetic:

Quote

Futuremark’s PCMark 7 is on notice, in a way. In testing integrated graphics for Core i5-3570K, -3550, -3550S, And -3570T: Ivy Bridge Efficiency, I discovered that the application was weighing performance from the Ivy Bridge architecture’s Quick Sync feature incredibly aggressively. That won’t affect today’s story, given our use of Nvidia’s GeForce GTX 680, but the fact that AMD’s FX-8350 gets ranked below Intel’s Core i5-2550K indicates to us that the test isn’t fully taxing the Piledriver-based processor.

The FX’s performance in the Productivity and Entertainment suites mirrors the Overall benchmark’s results, though the Creativity and Computation sub-tests show the new FX-8350 doing much better. We’ll need to fire up the real-world metrics in our armada of benchmarks to draw more definitive conclusions.

Go re-read the actual page I linked, and stop trying to get the kids who don't know any better to waste money on sub-par parts.

Your sad devotion to that ancient AMD religion has not helped you conjure up an IPC breakthrough, or given you enough clairvoyance to find ATI some bandwidth ...

#24 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 13 May 2014 - 04:45 AM

Goose, on 13 May 2014 - 03:30 AM, said:

Pathetic Cherry-pick is Pathetic:

Go re-read the actual page I linked, and stop trying to get the kids who don't know any better to waste money on sub-par parts.

Your sad devotion to that ancient AMD religion has not helped you conjure up an IPC breakthrough, or given you enough clairvoyance to find ATI some bandwidth ...

How does saving money on the CPU to pour into the GPU sound like a bad idea? A CPU that will do nearly the same thing, a few frames per second on the CPU side, is nothing when you can go from a GTX 760 ($300) to a 780 ($500+) with the all-around savings from going AMD. Your logic is weak and sad.

Edited by Smokeyjedi, 13 May 2014 - 04:47 AM.


#25 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 13 May 2014 - 04:50 AM



You'd pull up in your Porsche 911 because it's shiny and expensive. The light goes green, you lose. Meanwhile the truck owner didn't have to mortgage his house a second time. Smart.
**edit: the truck carries a minibike and some beer**

Edited by Smokeyjedi, 13 May 2014 - 04:52 AM.


#26 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 13 May 2014 - 05:18 AM

So much misinformation and pointless fanboy bickering in here. Let's clear some things up.

1: AMD's performance/clock (some would abbreviate as IPC here) is far lower than Intel's. There is no arguing this.
2: MWO is built on CryEngine, but it is an earlier version and does not appear to fully utilize more than 4 CPU cores. This means any AMD quad-core or higher will still be handily beaten by any Intel Sandy Bridge i5 or i7 (or Ivy Bridge, or Haswell).
3: AMD's memory performance is also way below Intel's. MWO does like RAM, and it likes it fast just like any other application.
4: MWO can fully abuse my GTX 660 Ti with room to spare (I definitely can't max every setting out and still get 60FPS), so I'm assuming even with a strong CPU a GTX 680 (or Radeon equivalent) is required to run MWO smoothly with cranked settings.
5: MWO's network utilization is apparently semi-heavy (like other online games).

This all ties together. The CPU is needed not only for in-game calculations, but also for processing the network stack and feeding data to and from the GPU. For the best experience, you can't assume one strong part will make up for one weak part. The emphasis here is on having a strong CPU because without it, other things will be bottlenecked. Since Intel performs much better than AMD in anything that uses 4 cores or fewer (and in many cases, even in things that will use 8 cores, AMD is just matching Intel), the logical assumption is that the OP is probably using a weaker AMD CPU and should either upgrade it, overclock the hell out of it, or switch to an Intel-based setup.

#27 Golrar

    Member

  • Survivor
  • 359 posts
  • Location: Chicago, IL

Posted 13 May 2014 - 06:20 AM

Goose, I did read it, and you just restated what I said, so thanks.

I agree that for MWO an Intel chip performs much better. I said this. What I am inferring from recent data is that future games might be better optimized for AMD processors due to their growing market share in mainstream outlets (i.e. the PS4 and Xbox One) and Intel's continued development of the i7 series. Yes, the i7-4770K is an amazing CPU, but you also pay an amazing price. Again, I am only inferring this about the future until I finish the time machine in my garage. By no means is it guaranteed that AMD Piledriver CPUs will perform much better with new software design, but going by the statement you quoted above:

Quote

but the fact that AMD’s FX-8350 gets ranked below Intel’s Core i5-2550K indicates to us that the test isn’t fully taxing the Piledriver-based processor.


That alone gives me hope for the future of AMD. But of course, I could be completely wrong, and they might decide to close up shop and sell calculators next year.

For budget PCs, which frankly a lot of us must rely on since we don't have the "special" tree in our yard, AMD is going to be the way to go. Simply telling someone that they had better buy Intel or they won't be able to enjoy the game is just false.

An FX-8350, 8 GB of RAM, a motherboard, and liquid cooling together cost less than the Intel CPU and mobo alone. If you go i7 (the cheapest i7, the locked 3770, is $300), the savings could even cover a mild GPU or part of a good one.

Intel i5-4670k (unlocked) $240
ASUS Sabertooth Z87 $230
Total: $470

FX-8350 $200
ASUS Sabertooth 990fx $160
G. Skill Ripjaw 8GB 1600 $48
Total: $408
Corsair H60 $60 (Granted I chose the H100i for my personal build for aggressive OCing in my future, but the H60 will handle it decently)
New total: $468

I know this because I just went through it. I wanted the i5-4670k but my budget wouldn't allow it. Instead I chose the low-IPC FX because, from all of the information I researched, MWO is really the only game I would conceivably play that would hypothetically be a problem for me. And guess what? It isn't. I run 50-60 fps with a 128-bit GPU at stock CPU speeds with 8GB of DDR3-1600 RAM! Can't wait till I can upgrade my GPU and double my RAM in a few months, not to mention OC the CPU after the break-in period.

Yes, the Intel chips can go faster, especially in IPC-dependent software, but you pay for it. The real question is whether that increase in cost is worth it. In my experience it is not. Yeah, I'm an AMD fanboy. I like to get a lot for a little.

Edited by Golrar, 13 May 2014 - 06:22 AM.


#28 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 13 May 2014 - 07:48 AM

Golrar, on 13 May 2014 - 06:20 AM, said:

snip


Yeah... except no. On a $400 budget for just CPU and motherboard, like your example, you can get a K-series Ivy Bridge i5 and a Z77 motherboard that will blow the hinges off the FX-8350 and motherboard you purchased. The disparity between the two platforms in RAM bandwidth means you can put DDR3-1333 in the Intel system and get the same RAM performance as DDR3-1600 in the AMD system. In addition, because the Intel system requires far less energy to run (33-50% less), not only does your energy bill grow less, but the PSU is not taxed as much and you have more breathing room for the GPU (not to mention that less load typically means less fluctuation -and- less heat generated, which has several benefits).

#29 Lord Letto

    Member

  • Giant Helper
  • 900 posts
  • Location: St. Clements, Ontario

Posted 13 May 2014 - 07:49 AM

According to this thread: http://mwomercs.com/...te-please-help/
the OP has:
AMD FX 8150 8-core
16GB RAM
Win7HP
AMD Radeon HD 7900

Running DX11 at 1080p, max settings
Game installed on a SSD
No Overclock
He deleted the cache and switched to DX9, and got better performance, to the point where it was playable.

Edited by Lord Letto, 13 May 2014 - 07:49 AM.


#30 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 13 May 2014 - 01:57 PM

http://www.tomshardw...re-i5,3708.html

Paul Henningsen of Tom's Hardware said:

Back when we ordered, Gigabyte's Radeon R9 280X was one of the least expensive and highest-clocked models. The only downside was a voltage lock on the GPU, ultimately limiting overclocking headroom. The next step was picking a solid platform. I configured two totally different options: a tweakable AMD FX-6300 and a more restricted build packing Intel’s potent Core i5. The cooler and motherboard I would have relied on to take the Vishera design close to 4.5 GHz actually made the AMD option $10 to $20 more expensive, violating the budget. The enthusiast in me favored that option, but my inner-realist knew that we could inevitably pull higher frame rates from Core i5. ASRock's affordable Z75 Pro3 motherboard could get the most out of the -3470’s limited headroom, and Intel's bundled cooler would get the job done at no extra cost.

Emphasis mine.

And stop pretending the drag racing is all that important …

Edited by Goose, 13 May 2014 - 01:59 PM.


#31 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 13 May 2014 - 03:37 PM

Goose, on 13 May 2014 - 01:57 PM, said:

http://www.tomshardw...re-i5,3708.html

Emphasis mine.

And stop pretending the drag racing is all that important …

Then why the IPC thing Intel has over AMD? At least until software extensions allow AMD to uncork the 25-40% of leftover IPC its cores have up their sleeve... sheesh.

#32 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 13 May 2014 - 06:08 PM

Smokeyjedi, on 13 May 2014 - 03:37 PM, said:

Then why the IPC thing Intel has over AMD? At least until software extensions allow AMD to uncork the 25-40% of leftover IPC its cores have up their sleeve... sheesh.


If AMD actually had a way to pull 40% more performance out of its CPUs right now, it wouldn't have already started design work on a completely different architecture that takes cues from Intel's Sandy Bridge and AMD's own Phenom II CPUs. Yeah, the "modular" architecture that Bulldozer, Piledriver, and Steamroller are all built on is being abandoned according to AMD's latest roadmap updates. The only chips that won't really see that change are the ones in the Xbox One/Playstation 4.

CryEngine does not take advantage of the one compiler that -can- make a difference: Open64. In fact, most software development companies don't. You won't be seeing any "AMD optimization" for games on the CPU side other than games that generically take advantage of more cores.

So, if you're a Windows PC gamer and you don't want to make sacrifices, you go Intel. There is a reason I retired my Phenom II X3 720BE in favor of my Core i7 2600K, after all.

#33 Goose

    Member

  • Civil Servant
  • 3,463 posts
  • Twitch: Link
  • Location: That flattop, up the well, overhead

Posted 13 May 2014 - 11:17 PM

Krinkov, on 11 May 2014 - 03:48 PM, said:

I get a 20% to 30% boost in performance on my AMD FX-8120 by turning off cores 1, 3, 5 and 7. The game isn't optimized to run on those cores. Turning them off prevents the game from attempting to run code on them. I use Process Lasso so that the cores are disabled only for MWO.

Run any more of the experiments? Like "turning off only core 7?"
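
If you want to script those experiments instead of clicking around in Process Lasso, a rough sketch like the one below would do it. This is not Process Lasso's or PGI's code; the PID argument and the 0x55 mask (logical cores 0, 2, 4, 6 on an 8-core FX) are assumptions for illustration.

/*
 * Hypothetical sketch: restrict a running process (e.g. MWO) to the
 * even-numbered logical cores, so each game thread gets its own module.
 * Build with any Windows C compiler; pass the target PID on the command line.
 */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)atoi(argv[1]);

    /* Need SET_INFORMATION access to change the affinity of another process. */
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Cores 0, 2, 4, 6 on an 8-core FX: bits 0, 2, 4, 6 set -> 0x55. */
    DWORD_PTR mask = 0x55;
    if (!SetProcessAffinityMask(proc, mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(proc);
        return 1;
    }

    printf("Pinned PID %lu to logical cores 0, 2, 4, 6\n", pid);
    CloseHandle(proc);
    return 0;
}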

#34 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 14 May 2014 - 05:09 AM

xWiredx, on 13 May 2014 - 06:08 PM, said:


If AMD actually had a way to pull 40% more performance out of its CPUs right now, it wouldn't have already started design work on a completely different architecture that takes cues from Intel's Sandy Bridge and AMD's own Phenom II CPUs. Yeah, the "modular" architecture that Bulldozer, Piledriver, and Steamroller are all built on is being abandoned according to AMD's latest roadmap updates. The only chips that won't really see that change are the ones in the Xbox One/Playstation 4.

CryEngine does not take advantage of the one compiler that -can- make a difference: Open64. In fact, most software development companies don't. You won't be seeing any "AMD optimization" for games on the CPU side other than games that generically take advantage of more cores.

So, if you're a Windows PC gamer and you don't want to make sacrifices, you go Intel. There is a reason I retired my Phenom II X3 720BE in favor of my Core i7 2600K, after all.

I would say the single reason for the modular design's unimpressiveness is the lack of supported extensions. If AMD had been in bed with the software devs the way Intel has been with most benchmark companies, game studios (Windows software mostly uses Intel extensions, or at least has heavily optimized Intel versions), and software devs, they too would have a chokehold on the mainstream software extensions that drive these biased benchmarks. And yes, the efficiency trade-offs favor Intel on heat, but who cares about being conservative at this point? I'm bottoming this pig out! I ran my Phenom II 555BE at 4.21 GHz; had I had a board with better VRMs, I would have gone higher, at any cost. That 3-core of yours would be a screamer and would easily compete, given a large enough clock boost to saturate (bottleneck) DDR3.

Edited by Smokeyjedi, 14 May 2014 - 05:15 AM.


#35 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 14 May 2014 - 06:29 AM

Smokeyjedi, on 14 May 2014 - 05:09 AM, said:

snip


That 'bias' you keep talking about is mostly just developers choosing to use an industry-standard compiler instead of one cherry-picked by AMD. AMD has an in-house version of the Open64 compiler I mentioned, and much of the internal performance testing AMD does on CPUs before their release uses software compiled with it. Since most developers use an industry-standard compiler instead, those internal numbers aren't realistic. GCC and the Visual C++ compiler are probably the main compilers used at this point, and their development has been driven by high-performing chips in general; neither has been steered directly by either CPU company, so you can't really say they purposely favor Intel. Further to the point, most applications compiled with AMD's favored compiler still do not perform better on an 8-core Piledriver CPU than on a 4-core Haswell CPU. Some do, and some even perform a full 5-10% better, but those are very few (I have found two examples so far). The problem is that even in those instances, it takes AMD 125-140 W to do what Intel does with 84-88 W.

The 720BE I had never got past 3.7 GHz on water. My local grocery store has dry ice available, and I did successfully get to 4.5 GHz using dry ice, but obviously I can't do that all the time. The 720BE at 3.7 GHz was a huge bottleneck for gaming, especially with dual Radeon 6870s. Those cards never realized their full potential until I moved to Sandy Bridge (a gigantic difference in FPS in Battlefield 3). As I said before, the CPU isn't just doing game calculations; it has to process the network stack and feed the GPU(s), too. The decline is small at first but compounds quickly when the CPU can't do all of that fast enough.

#36 Krinkov

    Member

  • Bridesmaid
  • 146 posts

Posted 14 May 2014 - 07:35 AM

Confirmation from Karl Berg about a performance drop when using 8-core AMD processors. The question was about getting better performance by turning off the odd cores on my 8120 because of their inability to handle floating-point operations independently.

Karl Berg, on 13 May 2014 - 02:16 PM, said:


Took a quick look at the processor specs, and indeed there is a shared FPU unit between any two specific cores on this processor. In this sense, it's somewhat similar to a hyperthreaded model; although it seems there are independent dispatch and integer units available to each physical core.

This is unfortunate for our game, because we do heavily utilize the SSE instruction set for all floating point work. I would need to carefully profile a Bulldozer based system to get a good sense for why performance is dropping so dramatically for you; but it's certainly possible there is some kind of penalty being paid for swapping between cores. Whether this is due to thread context switching, L1/L2 cache thrashing behaviour, CPU pipeline stalls, or some other mechanism is really difficult for me to predict.
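
If anyone wants to sanity-check the shared-FPU theory at home, here is a rough, hypothetical sketch (not Karl's profiling code; the core numbering and loop size are assumptions): run two SSE-heavy threads pinned either to the same module (logical cores 0 and 1) or to different modules (cores 0 and 2) and compare the wall time.

/* Hypothetical experiment: two SSE floating-point workers, pinned to
 * chosen logical cores, timed with QueryPerformanceCounter. */
#include <windows.h>
#include <emmintrin.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    DWORD_PTR core_mask = (DWORD_PTR)arg;
    SetThreadAffinityMask(GetCurrentThread(), core_mask);

    __m128d acc = _mm_set1_pd(1.0);
    const __m128d step = _mm_set1_pd(1.000000001);
    for (long i = 0; i < 200000000L; ++i)
        acc = _mm_mul_pd(acc, step);          /* keep the SSE/FPU pipes busy */

    volatile double sink[2];
    _mm_storeu_pd((double *)sink, acc);       /* stop the compiler dropping the loop */
    return 0;
}

static double run_pair(DWORD_PTR mask_a, DWORD_PTR mask_b)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    HANDLE t[2];
    t[0] = CreateThread(NULL, 0, worker, (LPVOID)mask_a, 0, NULL);
    t[1] = CreateThread(NULL, 0, worker, (LPVOID)mask_b, 0, NULL);
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    CloseHandle(t[0]);
    CloseHandle(t[1]);

    QueryPerformanceCounter(&t1);
    return (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
}

int main(void)
{
    /* Same module: cores 0 and 1.  Different modules: cores 0 and 2. */
    printf("same module:      %.2f s\n", run_pair(1ull << 0, 1ull << 1));
    printf("different module: %.2f s\n", run_pair(1ull << 0, 1ull << 2));
    return 0;
}

If the shared FPU really is the limiter, the "same module" run should take noticeably longer than the "different module" run.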






#37 Shamous13

    Member

  • 684 posts
  • Location: Kitchener, Ont.

Posted 14 May 2014 - 08:33 AM

xWiredx, on 14 May 2014 - 06:29 AM, said:


snip


Many software programmers consider Intel's compiler the best optimizing compiler on the market, and it is often the preferred compiler for the most critical applications. Likewise, Intel is supplying a lot of highly optimized function libraries for many different technical and scientific applications. In many cases, there are no good alternatives to Intel's function libraries.
Unfortunately, software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.

Independent developers have been working for some time on a new instruction library for CPUs called Yeppp!; the main goal of this new library is to provide code that is optimized for all CPU platforms, including Intel, AMD, and ARM. This was driven by the intentional sabotage led by Intel against AMD CPUs via "dirty" compilers that could cripple AMD CPU performance by up to 100%.

In layman's terms, what the Intel compilers do is this: if they want to calculate the value 2, they feed the code 1+1=2 to Intel CPUs and feed the code 1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1=2 to AMD CPUs; clearly the worse code will make the calculations run significantly slower.

And software developers are forced to use these compilers because there is no alternative, except that now Yeppp! is emerging. You can read about these compilers in more detail here, and try searching for the court case that Intel lost because of the compiler they were supplying. They were told to change it years ago, but I doubt that has happened, or they have found another way to cripple the competition.
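
For the curious, the "CPU dispatcher" described above boils down to reading the CPUID vendor string and feature bits at startup. Below is a minimal sketch of that check using GCC/Clang's <cpuid.h>; it is an illustration of the idea, not Intel's actual library code.

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* Leaf 0: vendor string comes back in EBX:EDX:ECX. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Leaf 1: feature flags (SSE2 is bit 26 of EDX). */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    int has_sse2 = (edx >> 26) & 1;

    printf("vendor: %s, SSE2: %s\n", vendor, has_sse2 ? "yes" : "no");

    /* A fair dispatcher branches on the feature bits alone; the complaint
     * above is about dispatchers that also branch on the vendor string: */
    if (strcmp(vendor, "GenuineIntel") == 0 && has_sse2)
        puts("-> fast SSE2 code path");
    else
        puts("-> generic fallback path, even though SSE2 may be available");

    return 0;
}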

Edited by Shamous13, 14 May 2014 - 08:39 AM.


#38 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 14 May 2014 - 10:24 AM

xWiredx, on 14 May 2014 - 06:29 AM, said:


snip

Just because this particular relationship isn't publicly announced doesn't mean it's not there. After all, half the world is run on ulterior motives. If you don't believe that, then you truly need a spiritual awakening... poor sheeple. ;)

#39 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 14 May 2014 - 02:27 PM

After a lot of looking into it, it appears that most of what makes up CryEngine is compiled with the Visual C++ and Clang compilers, so while some developers use the Intel-provided compiler (and several benchmarks are guilty of this), it isn't very relevant to MWO. Based on Phoronix testing with various compilers, in many instances Clang is almost as good as Open64, but in many others a good 15-20% worse (still way better than GCC). Based on these two pieces of information, I can tell you for certain that Intel-specific optimizations are not relevant here, and AMD performance in MWO could be worse than it is. In fact, it's -almost- a best-case scenario for AMD CPUs. Parking the odd cores so there isn't any core switching might be of decent benefit, as suggested by somebody else ITT, but outside of that I don't think you'll see a whole lot of help coming for AMD performance.

So... again with the suggestion to the OP to either get the maximum stable overclock out of his chip or move to an Intel platform.

#40 Nick Rarang

    Member

  • FP Veteran - Beta 1
  • 81 posts

Posted 14 May 2014 - 02:58 PM

As a current AMD FX-8350 / Radeon 7970 user, I acknowledge that for MWO my 5.0 GHz overclocked CPU bottlenecks my 1200/1600 overclocked GPU. I have to play at 1080p with everything on very high and MSAA on to make it less CPU-bound, but I still get fps dips as a result of the bottleneck. Right now I average 50 fps when GPU utilization is at 99%, but those nasty 20 fps dips when GPU utilization drops below 90% are the telltale sign that one of the cores, possibly the one handling physics, is overloaded while the other four cores are parked.

Battlefield 4 at 1080p with everything on ultra, including AA, gives me a 73 fps average, and I do not experience dips below 30 fps. That is because the Mantle API keeps CPU bottlenecks to a minimum. I think the only way that can happen with MWO is if either Mantle or the DX12 API gets supported. It's all in PGI's hands whether to do this or not, but as it stands, I will continue to get these dips in DX11. DX9 is worse for my system.

Do I regret getting an FX-8350? No. I predict that future games, especially those ported from consoles, will favor my setup because of the similarity between the consoles' 8-core Jaguar CPUs and my FX chip, as well as my 7970's affinity with the console GPUs. That is exactly why I went with this setup when I built my rig in the first quarter of 2013.


Edited by Nick Rarang, 14 May 2014 - 03:00 PM.





