
Why The Amd Hurt?



#41 Chrithu

    Member

  • PipPipPipPipPipPipPipPip
  • Bad Company
  • 1,601 posts
  • LocationGermany

Posted 18 August 2015 - 07:18 AM

Saxie, on 15 August 2015 - 08:55 PM, said:

... Although we'd have to see how many other AMD users are having the same issue, and if they aren't what exactly are they doing that everyone in this thread has overlooked.

...


Well, there are different degrees of being affected, as far as I can see. Those with older Phenom multicore CPUs combined with a CrossFire GPU setup hurt the most, as they are bound by the CPU and by poor CrossFire support. Then there are those with newer CPUs (or APUs, as AMD calls them in the latest generations) with higher per-core clock speeds, who suffer a little less from MWO's CPU hunger.

The funny thing is, though: even Intel CPUs with low per-core clock speeds have performance issues in MWO. The only people on this board not complaining are those with very high-end Intel machines running overclocked CPUs in the 3.4 to 4.0 GHz range. Well, and of course those that do not play.

If I recall correctly, one of the devs posted quite a while ago that the main performance issue is the way they do the mech animations. That makes sense, since the performance difference between the training grounds (where there are no animated mechs) and live play is quite drastic.

#42 Chafe

    Member

  • PipPip
  • Colonel II
  • 32 posts

Posted 18 August 2015 - 03:09 PM

Dragoon20005, on 31 July 2015 - 01:35 AM, said:

I noticed he is using CrossFire 290s.

Also, that mobo is a major cause for concern: the VRMs will overheat under load, and that was only fixed in Rev 4.0.

Also, do not use a beta BIOS.


My guide to improving ghetto-VRM-based AMD woe:
http://mwomercs.com/...96#entry4244196
Hope you find it helpful :)

#43 Shamous13

    Member

  • PipPipPipPipPipPipPip
  • 684 posts
  • LocationKitchener, Ont.

Posted 19 August 2015 - 05:03 AM

Chafe, on 18 August 2015 - 03:09 PM, said:


My guide to improving ghetto-VRM-based AMD woe:
http://mwomercs.com/...96#entry4244196
Hope you find it helpful :)


A well-written guide, nice job. It should be in the hardware and accessories forum, though; it would get more attention there.

Edit: After looking into this more, I found this on Tom's Hardware; it may explain some of the strange behavior with AMD processors:

Quote

Basically it will underclock/shut down unneeded cores and bump up the ones that are in use. However, I have seen some anecdotal info stating that there are two stages: stage 1 is all cores up to a certain frequency (depending on the TDP of the CPU), and the other is half the cores up to max turbo.

What could be causing the problem is this. Let's say the game takes 4 cores. You have 8 cores and 4 FPUs (which are used in gaming). With turbo on, the BIOS senses that only 4 cores are in use, so it shoves everything onto 2 modules and powers down the others, which frees up TDP for pushing the 4 remaining cores faster. Since it shuts down full modules, you lose 2 FPUs and are down to 2.

When you disable turbo core, the game still sends out threads for the FPUs, but you now have 4 available, so those threads wait less time to be processed. The scheduler utilizes all the resources it can to get the best performance.
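The quoted turbo-core trade-off can be sketched as a toy queueing model (a rough illustration with made-up numbers, not the actual Windows scheduler, and the clock-boost figure is a guess):

```python
# Toy model of the turbo-core trade-off described above: game threads
# queue for a limited pool of FPUs. With turbo on, modules power down,
# leaving 2 FPUs at a higher clock; with turbo off, all 4 FPUs stay
# available at the base clock.

def finish_time(num_threads, work_per_thread, num_fpus, clock_factor):
    """Round-robin model: each FPU runs one thread at a time; returns
    the time until all threads complete."""
    waves = -(-num_threads // num_fpus)  # ceiling division
    return waves * work_per_thread / clock_factor

# Turbo on: 2 FPUs, each clocked ~15% higher (made-up figure).
turbo_on = finish_time(num_threads=4, work_per_thread=1.0,
                       num_fpus=2, clock_factor=1.15)
# Turbo off: 4 FPUs at the base clock.
turbo_off = finish_time(num_threads=4, work_per_thread=1.0,
                        num_fpus=4, clock_factor=1.0)

print(f"turbo on : {turbo_on:.2f} time units")   # 2 waves / 1.15 ≈ 1.74
print(f"turbo off: {turbo_off:.2f} time units")  # 1 wave -> 1.00
```

Under these assumptions the 4-thread FPU workload finishes sooner with turbo off, matching what the quote describes: the clock bump doesn't make up for halving the FPU pool.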

Edited by Shamous13, 19 August 2015 - 07:17 AM.


#44 ApolloKaras

    Member

  • PipPipPipPipPipPipPipPip
  • Ace Of Spades
  • 1,974 posts
  • LocationSeattle, Washington

Posted 19 August 2015 - 08:36 AM

You know, just doing some tests on my machine: I'm running a 2600K, locked at 3400 MHz for the test, with a GTX 670 clocked up to 1228 MHz (trying to avoid any GPU limitation), all settings low, vsync off, 1920x1080, Training Grounds - Forest Colony. I know the FPS numbers aren't representative of being online, but I need some type of control. Look at this:

My MSI Afterburner is set to show GPU usage, FPS, each individual core (including hyperthreads), and total CPU usage. Screenshots were taken with the highest/lowest numbers captured.

[Screenshots: HUD on prior to shot; after the shot; HUD off prior to shot; lowest numbers after shot]

That HUD is still eating gobs of CPU. It also appears the particles are tied to the CPU, which would explain the GPU usage drop: the CPU is becoming the bottleneck.
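The "GPU usage drops when the CPU is the bottleneck" reading can be turned into a simple rule of thumb for overlay samples (the thresholds here are my own guesses, not anything official from MSI Afterburner):

```python
# Rough heuristic: in a GPU-bound scene the GPU sits near 100% usage.
# If FPS is below target *and* GPU usage sags, the GPU is starved for
# work -- usually because the CPU can't feed it draw calls fast enough.

def likely_bottleneck(gpu_usage_pct, fps, fps_target=60, gpu_busy_pct=90):
    """Classify one captured sample from an overlay such as MSI Afterburner."""
    if fps >= fps_target:
        return "none"   # hitting the target, nothing to diagnose
    if gpu_usage_pct >= gpu_busy_pct:
        return "gpu"    # GPU pegged and still short of target
    return "cpu"        # GPU idling while FPS is low: CPU-bound

print(likely_bottleneck(98, 45))  # gpu
print(likely_bottleneck(60, 45))  # cpu
print(likely_bottleneck(50, 75))  # none
```

The screenshots above fit the third branch: GPU usage falls while FPS drops, pointing at the CPU rather than the graphics card.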

Edited by Saxie, 19 August 2015 - 09:14 AM.


#45 Chrithu

    Member

  • PipPipPipPipPipPipPipPip
  • Bad Company
  • 1,601 posts
  • LocationGermany

Posted 19 August 2015 - 09:50 AM

Somewhat unrelated: I got Ark: Survival Evolved as a gift from a friend on Steam this week, and I suffer from low FPS there as well, though this time bottlenecked by my GPUs. Doing some research, I found that the game doesn't support SLI or CrossFire and probably never will, because it runs on Unreal Engine 4, whose rendering reuses information from the previous frame and is therefore incompatible with both the alternate-frame and split-frame rendering that CrossFire/SLI use.

Digging deeper into the topic, I found that Epic (the developers of Unreal Engine) themselves say that multi-GPU support will come with DX12. The way I understand it, DX12 can manage and utilize multiple GPUs independently of the respective drivers, in a transparent manner: to a game they just appear as a single GPU, so no compatibility patches should be needed.

More interesting was an older article I found discussing specifically how AMD hardware could benefit immensely from DX12. Apparently, under DX10 and DX11, AMD GPUs put a much larger draw-call load on the CPU than NVIDIA GPUs do, because of how the driver is coded, which makes AMD GPUs much more likely to bottleneck on the CPU. In addition, AMD's CPU architecture doesn't handle that kind of load well either.

My own conclusion is that AMD, when they designed their current CPU and GPU architectures, already had Mantle in the works (which, afaik, is very similar to what DX12 and Vulkan do) and designed the hardware with that in mind, hoping it would be adopted more widely than it actually was. Mind you, we are talking 2012 and maybe earlier here. I still remember the benchmarks that circulated showing how much more performance you could get with Mantle, but in the end only a handful of games used it. Nevertheless, Microsoft and the OpenGL devs basically copied what AMD did with Mantle to make DX12 and Vulkan, which gives reason to hope that AMD GPUs will work a lot better in the next generation of games that use DX12.




