
Tweaks For Crossfire


5 replies to this topic

#1 Deffias

    Member

  • Ace Of Spades
  • 61 posts

Posted 01 October 2017 - 06:41 PM

I've seen threads for people getting CrossFire (or mGPU, as it's now being called: https://www.pcworld....omplicated.html) to work in the past, some from 2014 and 2016, and I can't seem to solve my frame-rate issue using the options mentioned in those posts.

I'm running:
Ryzen 7 1700x
2 x RX 580 8 GB
ASRock x370 Taichi
16 GB DDR4 2400 MHz
Windows 10

Everything is up to date. I could update my BIOS, but that's really only going to help me get 3200 MHz on my RAM, not something that's going to solve this issue, imo.

So, what I experience is a wide range of FPS. With everything maxed I'll hit 90+ fps. My frame rate drops when more of the environment comes into view and gets rendered, down to around 10 fps. Once that area of the environment is rendered, I'm back up to 90+ fps and it's smooth as butter until I start moving and more environment gets rendered.

I've also noticed my GPU utilization is all over the place and coincides with the frame-rate drops. For example, GPU 1 will drop to zero after everything is rendered while GPU 2 is being utilized anywhere from 50-100%; then that drops to zero and GPU 1 hits anywhere from 50-100%.

One thread mentioned this is due to AMD GPUs dropping to a low power state when idle, but that thread was from 2014 and I've not looked too deeply into it yet. I'll have to do some more research into the current Crimson drivers to verify whether that behaviour is even still in the drivers.

Any pointers/suggestions (other than grab a better single GPU) would be greatly appreciated.

#2 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Location: Australia

Posted 01 October 2017 - 11:04 PM

From memory, the game's 'auto' settings won't properly use CrossFire setups. That may not be correct, as it's been a while since I've actually touched that sort of setup ....

so you're *probably* going to need to create a user.cfg; for Xfire you'll probably need to set the key
r_MultiGPU = 1
2 is the normal option, which is 'auto' detection; it used to ONLY work for NVidia, because, y'know, NVidia were the company that got to influence Crytek to make sure that 'their product performed better' ... an investment still paying off 6+ years later in some circles ...
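
A minimal user.cfg sketch of the above, assuming the file lives in the game's install folder (the usual spot for CryEngine titles; the exact path is my assumption, so double-check it):

-- user.cfg (assumed location: the MWO install folder)
r_MultiGPU = 1    -- 1 = explicitly enable multi-GPU; 2 = 'auto' detection (the default)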

I can tell you that the faster you get the memory on a Ryzen, the faster its core interconnect (Infinity Fabric) runs, which is useful for any task that's multi-threaded to any degree. I run my kit at 3500 MHz, mostly because things get real weird after that ....

you didn't happen to mention what resolution you're using either; this can have a great impact on how much stuff the renderer has to prepare on the CPU before it can pass data to the GPU for actual rendering, especially seeing as PGI went with configurations that tend to work on a per-pixel basis: the more pixels, the more preparation required ... 720p works out to 921,600 pixels per frame, 1080p works out to 2,073,600 pixels per frame, 2K (1440p) gets out to 3,686,400 pixels per frame, and 4K pushes out to 8,294,400 pixels per frame ...

and you didn't give us a basic idea of what settings you're using in the game options; a lot of people will refuse to sacrifice visual quality for performance, even though they claim to have a problem with the performance side of things .... for instance, using MSAA adds a good chunk of video load, but really gives very little in terms of visual quality when you're running around in a lolcust @ 150 kph constantly snapping the cockpit from left to right ...
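
spelling that arithmetic out, it's just width x height:

1280 x 720  =   921,600 pixels per frame
1920 x 1080 = 2,073,600
2560 x 1440 = 3,686,400
3840 x 2160 = 8,294,400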

if you don't mind guinea pigging a bit, you could experiment with these settings
sys_budget_streamingthroughput = 999999  -- do not use more than 6 digits!
sys_LocalMemoryGeometryStreamingSpeedLimit = 35000
sys_LocalMemoryTextureStreamingSpeedLimit = 35000
sys_streaming_max_bandwidth = 35000
these are my values on my current system; yours may be higher or lower. To find the value, open a command prompt with administrator rights and type the command 'winsat mem' ... the bottom three are the raw number (mine's rounded down), and the top one is that number x 1024, so 35000 x 1024 = 35,840,000 .... in theory, that might help with some of the stalls you're seeing ...
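
putting that recipe in one place as a sketch (the 35840.95 MB/s reading below is a made-up example; substitute your own winsat number):

-- step 1: in a command prompt with admin rights, run:  winsat mem
-- step 2: take the MB/s figure it reports and round it down
--         (example: 35840.95 MB/s -> 35000)
sys_LocalMemoryGeometryStreamingSpeedLimit = 35000
sys_LocalMemoryTextureStreamingSpeedLimit = 35000
sys_streaming_max_bandwidth = 35000
-- step 3: the budget is that number x 1024; 35000 x 1024 = 35,840,000
--         is more than 6 digits, so per the edit note below, cap it with 9's
sys_budget_streamingthroughput = 999999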

next is threading .... threading on a Ryzen seems to be very wrong to me; that is, how the old code sees and interprets things about your CPU hasn't been updated to reflect your CPU .... this is a widely recognised issue for other games .... and the patch for 'Rise of the Tomb Raider' that got an FPS bump was based around changing the way the software interprets the hardware. At my end, I've managed to eke out higher (better) min/avg/max rates playing with different configurations .... feel free to try this; it also goes into a user.cfg file. Anyway, I've modified my numbers for an 8-core Ryzen with SMT enabled ...
sys_job_system_max_worker = 16
sys_main_CPU = 1 
sys_physics_CPU = 2
sys_streaming_CPU = 3
e_ParticlesThread = 4
ca_thread0Affinity = 5
ca_thread1Affinity = 6
r_WaterUpdateThread = 7   
e_StatObjMergeUseThread = 8
sys_TaskThread0_CPU = 9
sys_TaskThread1_CPU = 10
sys_TaskThread2_CPU = 11
sys_TaskThread3_CPU = 12
sys_TaskThread4_CPU = 13
sys_TaskThread5_CPU = 14
I also seemed to get good results from putting the task threads in between the primary threads, so main = 1, taskthread0 = 2, physics = 3 and so on in that fashion (one reading of that layout is sketched below) ... one way might work out better for you than the other; at this point, I'm not completely sure ....
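
for reference, one reading of that interleaved layout (the ordering beyond the first three assignments is my extrapolation of 'in that fashion', so treat it as a guess to test, not a known-good config):

sys_job_system_max_worker = 16
sys_main_CPU = 1
sys_TaskThread0_CPU = 2
sys_physics_CPU = 3
sys_TaskThread1_CPU = 4
sys_streaming_CPU = 5
sys_TaskThread2_CPU = 6
e_ParticlesThread = 7
sys_TaskThread3_CPU = 8
ca_thread0Affinity = 9
sys_TaskThread4_CPU = 10
ca_thread1Affinity = 11
sys_TaskThread5_CPU = 12
r_WaterUpdateThread = 13
e_StatObjMergeUseThread = 14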

EDIT NOTE: after testing, do not use more than 6 digits for 'sys_budget_streamingthroughput'; if your number is longer than 6 digits, fill it with 9's ....

Edited by NARC BAIT, 02 October 2017 - 05:28 AM.


#3 D34DMetal

    Member

  • Legendary Founder
  • 134 posts
  • Location: in a Mad Cat duh...

Posted 01 December 2017 - 11:57 PM

The game works better with NVidia GPUs and Intel CPUs, unfortunately.

#4 Nightbird

    Member

  • The God of Death
  • 7,518 posts

Posted 02 December 2017 - 10:52 AM

You're CPU limited; the game can't fully use even one of those GPUs.

https://mwomercs.com...ost__p__5962801

#5 Deffias

    Member

  • Ace Of Spades
  • 61 posts

Posted 18 April 2018 - 08:17 PM

I went TDY after that post and tooooooooootally forgot about this.

Thank you for your input and the details. If for some reason you gentlemen happen by this thread again:

NARC BAIT, on 01 October 2017 - 11:04 PM, said:

I can tell you that the faster you get the memory on a Ryzen, the faster its core interconnect (Infinity Fabric) runs, which is useful for any task that's multi-threaded to any degree. I run my kit at 3500 MHz, mostly because things get real weird after that ....


Currently running 3200 MHz, and you are absolutely right. I didn't have my BIOS updated at the time of my previous post.

NARC BAIT, on 01 October 2017 - 11:04 PM, said:

you didn't happen to mention what resolution you're using either; this can have a great impact on how much stuff the renderer has to prepare on the CPU before it can pass data to the GPU for actual rendering, especially seeing as PGI went with configurations that tend to work on a per-pixel basis: the more pixels, the more preparation required ...


I'm running at 5760x1080, basically a 3x 1080p monitor setup with Eyefinity. Wut can I say, I <3 FreeSync. That puts me at 6,220,800 pixels per frame. So, based off your instruction, it's best to use 999999.

NARC BAIT, on 01 October 2017 - 11:04 PM, said:

if you don't mind guinea pigging a bit, you could experiment with these settings
sys_budget_streamingthroughput = 999999  -- do not use more than 6 digits!
sys_LocalMemoryGeometryStreamingSpeedLimit = 35000
sys_LocalMemoryTextureStreamingSpeedLimit = 35000
sys_streaming_max_bandwidth = 35000
these are my values on my current system; yours may be higher or lower. To find the value, open a command prompt with administrator rights and type the command 'winsat mem' ... the bottom three are the raw number (mine's rounded down), and the top one is that number x 1024, so 35000 x 1024 = 35,840,000 .... in theory, that might help with some of the stalls you're seeing ...


The system assessment returned 36447.58 MB/s, so I'll probably go with 35000 as well, to be on the safe side. I'm not feeling ballsy enough to max that out.
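
If I'm reading the recipe right, my user.cfg additions would be (my own back-of-napkin version, so correct me if I've botched it):

sys_LocalMemoryGeometryStreamingSpeedLimit = 35000   -- 36447.58 MB/s rounded down, with margin
sys_LocalMemoryTextureStreamingSpeedLimit = 35000
sys_streaming_max_bandwidth = 35000
sys_budget_streamingthroughput = 999999   -- 35000 x 1024 = 35,840,000 -> over 6 digits, so 9's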

NARC BAIT, on 01 October 2017 - 11:04 PM, said:

sys_job_system_max_worker = 16
sys_main_CPU = 1
sys_physics_CPU = 2
sys_streaming_CPU = 3
e_ParticlesThread = 4
ca_thread0Affinity = 5
ca_thread1Affinity = 6
r_WaterUpdateThread = 7
e_StatObjMergeUseThread = 8
sys_TaskThread0_CPU = 9
sys_TaskThread1_CPU = 10
sys_TaskThread2_CPU = 11
sys_TaskThread3_CPU = 12
sys_TaskThread4_CPU = 13
sys_TaskThread5_CPU = 14

I also seemed to get good results from putting the task threads in between the primary threads, so main = 1, taskthread0 = 2, physics = 3 and so on in that fashion ... one way might work out better for you than the other; at this point, I'm not completely sure ....



Will do, definitely going to give this a go.

#6 Deffias

    Member

  • Ace Of Spades
  • 61 posts

Posted 18 April 2018 - 08:22 PM

I've tweaked the in-game settings, and it made little improvement when using CrossFire.

It helps when running 5760x1080 with one of my GPUs, but the best I can muster is about mid-30s fps with the in-game settings tweaked.

Then of course, I could run the game with 1 card and 1 monitor and max everything out, no problem ... Buuuuuuuuuut I'm a glutton for punishment and would like to attempt to use all the hardware.




