
MWO + Refresh Rate Troubles


16 replies to this topic

#1 Cyborne Elemental

    Member

  • 3,950 posts
  • LocationUSA

Posted 17 December 2017 - 09:36 AM

Built a new PC.

Running Windows 10 64-bit Home
Ryzen 7 1700 (8 core) 3.6 GHz
MSI X370 Gaming M7 motherboard (newest BIOS)
2x MSI RX 480 Gaming (8 GB) video cards in CrossFire
16 GB DDR4-2400 Corsair Vengeance
EVGA 800 W SuperNOVA power supply
1x Samsung 840 EVO SSD, 512 GB
1x Western Digital 1 TB SATA 6.0 Gb/s HDD

Newest motherboard BIOS, AMD chipset and video drivers, etc.

I have an ASUS monitor that runs at 120 Hz.
I can't get MWO to run at 120 Hz like I could on my old machine.

I have set the Windows display settings for the ASUS monitor
to run at a 120 Hz refresh rate.
Also tried adding "r_overrideRefreshRate = 120" to my user.cfg.

Can't get it to run at a 120 Hz refresh when launching MWO; it always reverts to 60 Hz.

In general, MWO is running very poorly on this new machine, worse than my old machine, which was built 3 years ago:
Intel i5-4670K (quad core) 3.2 GHz
Gigabyte Gaming motherboard
DDR3-1300
single XFX Radeon RX 480 (4 GB)
700 W PSU

Any ideas guys?

Edited by The Trojan Titan, 29 December 2017 - 08:38 AM.


#2 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 17 December 2017 - 06:11 PM

The general rule of late has been to press Alt+Enter twice, in game. Alt+Enter is a keyboard shortcut to fullscreen the application, and apparently doing it twice gets MWO to grab the 'correct' refresh rate, for most people anyway. There is no specific user.cfg cvar for the refresh rate; it's meant to just run at whatever your panel was set to at game launch. Unless you happen to be looking at the MechLab: there they used a different style of frame limiting, with a 60 FPS cap. But in reality, even though it might say 60 FPS in the MechLab, you're generally only going to actually 'see' half that rate, at best. As in, just because software sees 60 FPS, the actual game only updates half as often. Try spinning your mech in the MechLab and you'll see that you're not actually receiving the 60 FPS claimed.

#3 Cyborne Elemental

    Member

  • 3,950 posts
  • LocationUSA

Posted 17 December 2017 - 06:59 PM

Seems more and more like this game needs an updated Engine...

#4 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 18 December 2017 - 06:45 AM

Putting the same code into a new engine would likely yield the same results we currently have, and any gained improvements would likely just band-aid existing problems. Some bits need to be redone from scratch, in a more efficient way, in whatever engine is being used. At any time, if you turn the HUD off you'll get an FPS boost; that's a clear sign there are problems there. Luckily, that code cannot be merely imported into a new engine. Let's just hope they do it better.

There is no refresh rate control available from a user.cfg file; all you get there is a way to set the maximum FPS. Take that with a grain of salt too.

If this is a fresh Windows install that's behaving differently to how the screen used to, then maybe something configurable is in the wrong mode. The guys who do the double Alt+Enter all have FreeSync monitors; you didn't specify, and I don't have one, so I don't know any tricks for configuring the software. You might want to try poking sticks at things that reference variable refresh rate, but don't devote your life to it; a bunch of other people might love you for a fix, though.

#5 Cyborne Elemental

    Member

  • 3,950 posts
  • LocationUSA

Posted 23 December 2017 - 11:49 AM

Well, I have an update, and to be honest...
I really don't know what to think about this.

I switched to DX9 using my CrossFire build (2x MSI Radeon RX 480),
and it seems that only in DX9 do both cards run at full GPU usage as far as clock speeds go.

Now the weird thing..

My FPS bounces from 40 to 80 back and forth constantly. I have Vsync on, and have been using Alt+Enter to get my monitor to run in 120 Hz mode.

I know that CrossFire is a GEN-3 and DX12 feature, from what I've read.

The game feels somewhat smoother, despite the jumps in FPS.

I'm curious why it will run both cards as though CrossFire is enabled in DX9, but in DX11 I have one card running at full GPU clock while the other linked card sits idle at 300...

#6 Cyborne Elemental

    Member

  • 3,950 posts
  • LocationUSA

Posted 25 December 2017 - 03:50 AM

Well, I found a small, easier way of getting into 120 Hz mode for MWO.
In Video settings, set the game to start in windowed mode, launch the game, and once you're in the MechLab, just press Alt+Enter.
One less step.

AMD Ryzen 7 1700 (8 core) running at 3.6 GHz
2x MSI RX 480 Gaming 8 GB cards

This is my user.cfg

It seems to be pretty stable; average FPS is much better now.
I have to run in DX9 mode to get both RX 480 cards to run together. DX11 just does not seem to make both cards work in CrossFire; PGI must have left out some code or something, because in DX11 only one card will run at its full clock rate.

r_FullscreenWindow = 1
r_overrideRefreshRate = 120
r_AntialiasingMode = 0
r_MultiGPU = 1
r_MultiThreaded = 1
sys_job_system_enable = 1
sys_job_system_max_worker = 8
e_ParticlesThread = 7
e_StatObjMergeUseThread = 6
sys_limit_phys_thread_count = 4
sys_main_CPU = 5
sys_physics_CPU = 4
sys_streaming_CPU = 6
ca_thread = 1
ca_thread0Affinity = 0
ca_thread1Affinity = 1
sys_TaskThread0_CPU = 2
sys_TaskThread1_CPU = 3
sys_TaskThread2_CPU = 0
sys_TaskThread3_CPU = 1
sys_TaskThread4_CPU = 2
sys_TaskThread5_CPU = 3
p_num_threads = 4
p_num_jobs = 6


Would love some help getting this dialed in a little better for 8 cores though.

Edited by The Trojan Titan, 25 December 2017 - 07:07 AM.


#7 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 28 December 2017 - 10:23 PM

r_overrideRefreshRate
r_AntialiasingMode
These don't exist. I'm not sure where you got them from, maybe a reference for Crysis, but in MWO we don't have access to those. They will probably throw an error in the log file.

Secondly, assuming you've still got SMT on, you've got everything set to run on one CCX. While that is not a problem in itself, it makes me wonder if you've tried the other options, like setting the job system tasks to run on the second CCX, or spreading out the threads a bit, because 'threads' 0 and 1 are on one 'core', and they do not have an even distribution of oomph between them. Disabling SMT increases the L1/L2 throughput massively, and so does increasing the base clock. So maybe try putting the main threads onto, say, even cores, like 0, 2, 4. I'm not 100% sure, but you'll probably get a better result from disabling the job system anyway, seeing as it tends to move data around more than you need to, and that can come with a performance penalty on a Ryzen.

The last few days I've been playing with the 32-bit client, as opposed to the default 64-bit client. In theory the 64-bit version should be better, by a margin you can't really measure, but it seems on my setup that a 'pause' lasts longer.

On another level, Ryzens love fast, low-latency RAM, and the general consensus is that you want to be up around the 3200 MHz range. I've got a 4000 MHz kit, but have never gotten it 'really stable' beyond 3500. If you can decrease the latency on the RAM you have, that can make everything 'snappier'.

If you don't have anything installed to check RAM latency, try out UserBenchMark ( http://www.userbench...erBenchMark.exe ). It's a small downloadable benchmark utility that covers most of the basic aspects, and will give you a link you can post back to us like this: http://www.userbench...UserRun/6271333 (the latency information is at the bottom).

Edited by NARC BAIT, 28 December 2017 - 10:24 PM.


#8 Cyborne Elemental

    Member

  • 3,950 posts
  • LocationUSA

Posted 29 December 2017 - 07:14 AM

Where is a good reference for the MWO commands that were used to build these user.cfg's?

https://mwomercs.com...-for-crossfire/

https://mwomercs.com...01#entry5962801

I've tried running the user.cfg's from both of these forum threads, and while running DirectX 11 I see terrible results: CrossFire is not utilizing both cards' resources. Video card 1 runs at 1300 and around 90% usage, and the second card idles at 300 and 1% usage, regardless of what is in the .cfg.

In DirectX 9, in full screen, both cards run at 1300 core clocks and 70%-90% usage, and I get slightly better FPS.

To me that is very odd. Why isn't DirectX 11 in MWO supporting multi-GPU, when DX9 is?
As far as code and setting up a user.cfg that works, I'm at square one. I have minimal knowledge about how to set up the multithreading properly, so I would appreciate the help if people are willing to explain a little better.

** GEN-3 support for multi-GPU and cards with 4+ GB of onboard RAM enabled
** Set SMT (simultaneous multithreading) to disabled in BIOS; will see if it has any effect.

Edited by The Trojan Titan, 04 January 2018 - 06:42 AM.


#9 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 31 December 2017 - 08:53 PM

*WARNING* *WARNING* *INCOMING WALL OF TEXT*

Personally I can't really say much about the CrossFire stuff, as I haven't used it for years, same with SLI for that matter, mostly because it always brought about results like you describe above: inconsistent ones that make little sense. So most of the time I just had a secondary card doing very little other than adding heat to the card above it. But maybe the problem is that the 'mode' it's trying to use under CrossFire is set up wrong for MWO. Quick googling turned up an image showing where you might have some hope of changing the setting in use.

In relation to SMT, your Ryzen 1700 has 8 cores and 16 threads, with each core containing two threads, something like this:
CORE 0 = THREAD 0+1
CORE 1 = THREAD 2+3
CORE 2 = THREAD 4+5
CORE 3 = THREAD 6+7
						  INFINITY FABRIC
CORE 4 = THREAD 8+9
CORE 5 = THREAD 10+11
CORE 6 = THREAD 12+13
CORE 7 = THREAD 14+15

Now in theory, assuming you're using an application that is well designed for multi-core execution (something other than MWO), each thread could be doing a different task, allowing said program to get through the workload faster. See how I left a gap between the two groups? That's not an accident, but a weird quirk: that gap represents the division between the two CCXs, because your 8-core processor is really two 4-cores 'glued' together; apparently that's how we do it now. The gap, marked as 'infinity fabric', represents a performance penalty any time data has to cross from one side to the other, and it is limited to 50% of your 'rated' memory speed. Say you're using 2400 MHz: the infinity fabric runs its bridge at 1200 MHz. The 'ideal' recommended speed for gaming tends to be around 3200 MHz, resulting in a 1600 MHz bridge. The faster you can get your memory, the smaller the 'time in nanoseconds' penalty you get. Personally I lose stability at around 3500 MHz, and each 'step' makes a slightly noticeable difference.
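The layout and fabric-speed relationship above can be sketched in a few lines. This is only an illustration, assuming the paired numbering where logical threads 2n and 2n+1 share physical core n; it is not an official AMD or Windows API:

```python
# Illustration of the layout above, assuming logical threads 2n and 2n+1
# share physical core n, and cores 0-3 / 4-7 sit on CCX 0 / CCX 1.
def core_of(thread_id: int) -> int:
    return thread_id // 2

def ccx_of(thread_id: int) -> int:
    return core_of(thread_id) // 4

# The infinity fabric clock is half the DDR transfer rate
# (DDR4-2400 -> 1200 MHz bridge, DDR4-3200 -> 1600 MHz bridge).
def fabric_mhz(ddr_rate: int) -> int:
    return ddr_rate // 2

print(core_of(9), ccx_of(9))               # 4 1 -> thread 9 lives on core 4, second CCX
print(fabric_mhz(2400), fabric_mhz(3200))  # 1200 1600
```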

Disabling SMT gives each thread full access to the L1/L2 cache, and throughput tends to be higher. So how much raw 'performance' does that disable? Somewhere between 20% and 30% of the total. But MWO won't get near the total; on your system it's probably only using 20%-30% during its heaviest loads. On my setup I have SMT disabled at around 4 GHz (depends on the day), and MWO usage stays under 50%, always.

But it's something you really need to try out for yourself, to see how it impacts the range of software you use. If you're not doing massive 3D rendering, or mining, or going for Cinebench records, SMT doesn't really help 'general usage' that much, and it makes some things slower, depending on where the scheduler decides to execute a process, how often that process has its thread changed, and how it interacts with other threads.

When you assign 'tasks' to 'threads' in a user.cfg, you might find that you get better performance putting the task threads onto separate cores, like this:
sys_TaskThread0_CPU = 9
sys_TaskThread1_CPU = 10
sys_TaskThread2_CPU = 11
sys_TaskThread3_CPU = 12
sys_TaskThread4_CPU = 13
sys_TaskThread5_CPU = 14
Or it might run better if you put the main threads onto each primary thread within a core, like:
sys_main_CPU = 0
sys_physics_CPU = 2
sys_streaming_CPU = 4
e_ParticlesThread = 6
ca_thread0Affinity = 8
ca_thread1Affinity = 10
r_WaterUpdateThread = 12
And either way might work better with the job system on, or off. We honestly shouldn't need the current job system implementation; it was designed more for 2 GHz processors running all threads over 2 or 4 cores, and on a Ryzen it might attract more infinity fabric penalty than it provides benefit.
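To sanity-check an affinity layout like the ones above, you can verify that no two heavy tasks land on sibling SMT threads of the same physical core. This is a hypothetical helper for illustration only (MWO won't run it), and it again assumes the pairing where threads 2n and 2n+1 share a core:

```python
# Check a user.cfg-style affinity map for SMT sibling collisions,
# assuming logical threads 2n and 2n+1 share physical core n.
def colliding_cores(affinities: dict[str, int]) -> set[int]:
    seen: dict[int, str] = {}
    collisions: set[int] = set()
    for name, thread_id in affinities.items():
        core = thread_id // 2          # physical core this logical thread sits on
        if core in seen:
            collisions.add(core)       # two tasks pinned to siblings of one core
        seen[core] = name
    return collisions

layout = {
    "sys_main_CPU": 0,
    "sys_physics_CPU": 2,
    "sys_streaming_CPU": 4,
    "e_ParticlesThread": 6,
}
print(colliding_cores(layout))            # set() -> each task has its own core
print(colliding_cores({"a": 0, "b": 1}))  # {0}   -> threads 0 and 1 share core 0
```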

I can't really tell you with any confidence that one way will work better than another at your end. You'll have to test and decide for yourself.

#10 Cyborne Elemental

    Member

  • 3,950 posts
  • LocationUSA

Posted 05 January 2018 - 05:29 AM

Thank you for the info, Narc; that makes a lot more sense.

Right off the bat, I can see an issue I ran into with my build.

I'm running DDR4-2400, and have been reading a lot about the Ryzen architecture being very, very picky in terms of memory DIMM speed affecting performance.

I've ordered 16 GB of DDR4-3200 and am waiting for it to show up.

I was amazed to see the benchmark scores and just how big an impact memory speed makes; going from 2400 to 3200, the performance gains were huge.

What I find sad is that AMD-specific memory is maxed out at 3200 right now as far as availability, and very few manufacturers make AMD-specific DIMMs at higher speeds.

Looks like Intel has the market on a lot of things...

#11 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 05 January 2018 - 02:31 PM

Yeah, they've got other problems at the moment, in relation to the memory controller / handling that they've always touted as better than ours.

AMD came to the DDR4 party late, and by then Intel had influenced the manufacturers to suit themselves; hard to blame them for that in some regards. There's current debate about just how much performance Intel is about to lose, on nearly every processor from the last 20 years. Something like 5% would be acceptable to most people who bought decent performance; something like 30% might be crippling. It needs a little more time to see how that plays out, but it doesn't help you either way.

Around the 3200 MHz mark, and depending on the sticks of RAM, 'tight timings' can yield better results than going faster. Faster translates to higher bandwidth and throughput, whereas tightening the timings reduces the micro-slices of wasted time and has things 'ready' sooner, to a degree you have no hope of ever noticing individually. But when you're going to do this as many times as possible every second, less time wasted makes a bonus at the other end.
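The timings-versus-frequency trade-off can be put in rough numbers: first-word CAS latency in nanoseconds is the CL count divided by the real memory clock, which for DDR is half the transfer rate. A sketch for illustration:

```python
# Approximate CAS latency in nanoseconds: CL cycles at the real memory
# clock, which is half the DDR transfer rate (so ns = CL * 2000 / MT/s).
def cas_ns(cl: int, ddr_rate: int) -> float:
    return cl * 2000 / ddr_rate

print(round(cas_ns(16, 3200), 2))  # 10.0 -> DDR4-3200 CL16
print(round(cas_ns(14, 3000), 2))  # 9.33 -> tighter CL14 at 3000 is quicker
```

This is why a slightly slower kit with tighter timings can still come out ahead on latency, even though the faster kit wins on bandwidth.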

#12 M T

    Member

  • Ace Of Spades
  • 351 posts
  • LocationGouda, South Holland

Posted 20 July 2018 - 03:44 PM

Found a solution: Custom Resolution Utility.

Disclaimer: If you **** up something, make sure you have Splashtop or some remote program running so you can revert the changes you've made, just in case...

----------------------------

Can't say for AMD, but it works on my Nvidia card.

1. Make sure your profile is on (active).
2. Deselect every tickbox on the left.
3. Find the settings for your max resolution / refresh rate first, and take a screenshot or write the values down somewhere.
4. Get rid of all resolutions in every section, also in the extension block; go over all the data, including the EDID, and take it all out.
5. Add a new resolution in 'Detailed resolutions' and the extension block with the values you wrote down earlier.
6. Reboot, or use restart(64).exe to reset the driver.

Result:
- Only the highest refresh rate is available in the Nvidia Control Panel; 60 Hz, 85 Hz, 100 Hz and whatever else you had are removed.
- Forces Windows and all games, exclusive full screen or not, to run at the maximum refresh rate at all times, no matter the resolution.
- Full screen optimization works nicely here (Windows 10 1807); basically full screen + instant alt-tab. Lowest input lag. Best of both worlds :-)

I bet this fixes a lot of refresh rate issues in a lot of games, and system-wide...

Edited by M T, 20 July 2018 - 03:45 PM.


#13 Peter2k

    Member

  • Elite Founder
  • 2,032 posts
  • LocationGermany

Posted 23 August 2018 - 07:59 PM

There's a lot in this thread I'm rather unsure about the exact meaning of, and seemingly some half-truths.

So I'm guessing when you set your resolution in MWO, it always reverts to 60 Hz?
What about borderless windowed mode?
Because I'm betting it would run like you want in a borderless window.
Just not at the FPS you might want, but the Hz should be there.

RAM speed matters with a Ryzen because the speed of the infinity fabric, the interconnect between the core complexes, is tied to RAM speed.
There is no direct control like with an Intel CPU using a mesh.
The old adage of low timings and high speed is surely true; you could have gained a lot with a 3000 CL14 kit for maybe 10 bucks more than that 2400 kit.
3000 isn't exotically fast.
It's fine that you're going to replace it, but that being said, usually any 2400 kit can do 3000 with a small bump in voltage (the new kit will have a higher rated voltage than the 2400 kit; that RAM is just factory overclocked).

RAM speed isn't maxed out for AMD because of Intel or something; it's the memory controller in Ryzen that can't go higher.
For instance, using one stick instead of two would yield higher speeds.
Secondly, second-gen Ryzen improved the memory controller, giving higher speeds.
It's not just the RAM that needs to be stable at the rated speed; the memory controller needs to be able to handle it as well.


Maybe it wasn't that AMD was "late" to DDR4, but that it was a completely new platform
that the board partners had no experience with, while simultaneously having to deal with new Intel chipsets as well, all in a matter of a few months.

The Ryzen platform matured very fast in the following months.

Also, try to coax more frequency out of your Ryzen;
MWO likes cores as fast as you can give it.

The trouble with CCX scheduling was fixed in Windows 10 about a year ago. Disabling SMT could improve performance slightly, but OC'ing the RAM from 2400 to 3000 should provide a bigger boost,
as would adding 100 MHz more. The 1700X should top out just shy of 4 GHz, as the more capable CPU dies were binned for Threadripper.

For CrossFire or SLI to work, the driver needs to support the game as well; depending on the profile, it might work in DX9 and/or DX11.
Sometimes an old driver might actually be better for an "old game".

And why would CrossFire be a DX12 feature?
DX12 can theoretically use one card from AMD and one from Nvidia in a sort of SLI/CrossFire; the only game that can do that, though, is Ashes of the Singularity.
Vulkan gained the ability for SLI/CrossFire with 1.1, I think.

It's an old game with an old engine; it doesn't respond to more cores like newer titles do. You could have a Threadripper and get worse performance than your old i5.

Btw, processor-wise, when it comes to MWO, the i5 would've been better.
Pushing that i5 to 4.5 GHz should have been doable; it was a K after all.


That being said, Ryzen is a great platform, just not particularly suited to MWO, as the game can't really multithread.


View PostNARC BAIT, on 18 December 2017 - 06:45 AM, said:

putting the same code into a new engine would likely yield the same results that we currently have ...


That's not really how it works

PGI would license a new engine, and support would be part of the deal, well, usually.
It's not like you take your code out of CryEngine and copy-paste it into Unreal Engine.

Both engines provide a framework from which you start your thing.

You can see a difference between MWO and MW5 already; add Nvidia support to the mix and it should all work out better than MWO does now.
Especially since coding seems to be a lost art amongst the MWO devs (moved over to MW5? :D )

View PostM T, on 20 July 2018 - 03:44 PM, said:

Found a solution: Custom Resolution Utility.

Disclaimer: If you **** up something make sure you have Splashtop or some remote program running so you can revert back the changes you've done in case...

----------------------------

Can't say for AMD but works on my Nvidia card.

1. Make sure your profile is on (active)
2. Deselect every tickbox on the left
3. Find your settings for your max resolution / refresh rate first and make a printscreen or write values down somewhere.
4. Get rid of all resolutions in every section, also in the extension block, go over all the data including EDID and take it all out.
5. Add new resolution in 'Detailed resolutions' and Extension block with the values you wrote down earlier.
6. Reboot or use restart(64).exe to reset driver.

Result:
- Only highest refresh rate available in Nvidia Control panel, 60hz, 85hz, 100hz and whatever you had are removed.
- Forces Windows and all games, exclusive full screen or not to run at maximum refresh rate at all times, no matter what resolution even.
- Full screen optimization works nicely here (Windows 10 1807), basically full screen + instant alt tab best of both worlds. Lowest input lag. Best of both worlds :-)

I bet this fixes a lot of refresh rate issues on a lot of games and system wide...


Yeah, I'm not sure for AMD either, but with Nvidia you can just set a custom resolution using the driver panel and give that resolution the refresh rate you want.
It's how I drive my 60 Hz display at 75 Hz.

In games it shows up as the same resolution, set apart by the refresh rate at the end.


Also, again, MWO = best mode = borderless window

Can alt tab all day

Edited by Peter2k, 23 August 2018 - 08:38 PM.


#14 NARC BAIT

    Member

  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 23 August 2018 - 11:52 PM

View PostPeter2k, on 23 August 2018 - 07:59 PM, said:

Lots in this thread I'm rather unsure about the exact meaning off and seemingly half truths or something.
well, you're pretty late to the party ...

View PostPeter2k, on 23 August 2018 - 07:59 PM, said:

Secondly the second gen Ryzen improved the memory controller, giving higher speeds
actually, it's the same, it's completely the same. Other than being fractionally smaller, it's not different; some people have better results, but it's not actually better

View PostPeter2k, on 23 August 2018 - 07:59 PM, said:

The board partners had no experience with, while simultaneously having to deal with new Intel chipsets as well, all in a matter of a few months
the 'board partners' had been manufacturing DDR4 boards for a few years; the 'controller' is not located on the boards

View PostPeter2k, on 23 August 2018 - 07:59 PM, said:

It's not like you take your code out of CryEngine and copy and paste it into Unreal Engine.
let's imagine someone made a 'flow diagram' of how a system is meant to work, the map information for instance. Now let's say that over time it got really bloated and dysfunctional. If you took that same bad flow chart and imported the bad workflow into a new engine, you'd get the same bad result ...

View PostPeter2k, on 23 August 2018 - 07:59 PM, said:

You can see a difference between MWO and MW5 already; add Nvidia support to the mix and it should all work out better than MWO does now.
yeah, the ray tracing video really did tell us more than I wanted to know, where they said something to the effect of 'using this technology means we don't have to fix things'. But where does that leave everyone who isn't going to have a ray-tracing-capable card? Looking at glitches that will never be fixed, from day one?

I want to believe there is hope, but the inner cynic screams so loudly

#15 Peter2k

    Member

  • Elite Founder
  • 2,032 posts
  • LocationGermany

Posted 24 August 2018 - 07:58 AM

View PostNARC BAIT, on 23 August 2018 - 11:52 PM, said:

well, you're pretty late to the party ...

slow day; aside from complaints there is not that much going on


View PostNARC BAIT, on 23 August 2018 - 11:52 PM, said:

actually, it's the same, it's completely the same. Other than being fractionally smaller, it's not different; some people have better results, but it's not actually better

it's been reported by AMD that they improved latency in the infinity fabric and in the Ryzen architecture altogether. Aside from that, I'm taking the word of some long-standing review sites over a random internet dude claiming the memory controller has actually not been improved upon

btw, having a better manufacturing process can indeed lead to better-performing silicon.
That can be seen with Intel, which hasn't actually switched its architecture in the desktop market since Skylake, just improved upon the manufacturing process: 14 to 14+ to 14++

View PostNARC BAIT, on 23 August 2018 - 11:52 PM, said:

the 'board partners' had been manufacturing DDR4 boards for a few years; the 'controller' is not located on the boards

it's not the traces between the CPU and RAM that I'm doubting they had experience with; it's the firmware actually being in control of everything, ya know, the AGESA (BIOS) on the AMD boards.
That has matured, not the actual hardware implementation.

If that weren't true, then why do the same boards run RAM higher than they did at launch?

View PostNARC BAIT, on 23 August 2018 - 11:52 PM, said:

let's imagine someone made a 'flow diagram' of how a system is meant to work, the map information for instance. Now let's say that over time it got really bloated and dysfunctional. If you took that same bad flow chart and imported the bad workflow into a new engine, you'd get the same bad result ...

there is more to performance than having code that is bogging you down because it's inefficient.
Even then, you still can't actually import the code, as the engines just aren't the same; you'd have to rewrite it from the ground up, well, the stuff that you have to

That's probably why Russ once said that UE or a new Crytek engine would be the same amount of work.

New from the ground up doesn't mean it would end up better, though.

It's not like building a map or a mech requires a lot of coding, and the "game modes" aren't that either.

The whole engine is built around old tech.

View PostKarl Berg, on 12 June 2014 - 09:18 PM, said:

The draw calls made into D3D are very CPU intensive. A good chunk of that is due to the lego-like nature of the mechs; being formed out of dozens of individual components rather than a single character that can be rendered with a single draw call, like in most other games. It's also compounded by the particle system, the terrain system, and the older Scaleform 3 integrated into the engine.

Draw calls are not something DX9/11 can handle really well.
If you remember a bit back, AMD came up with Mantle and showed how a lot of draw calls can make performance literally cave in in DX11 vs Mantle (which in a way led to Vulkan and DX12).

Then you've got the old Scaleform, which isn't running really well but doesn't get updated, because it's a lot of work and a new engine comes with it anyway.

View PostNARC BAIT, on 23 August 2018 - 11:52 PM, said:

yeah, the ray tracing video really did tell us more than I wanted to know, where they said something to the effect of 'using this technology means we don't have to fix things'. But where does that leave everyone who isn't going to have a ray-tracing-capable card? Looking at glitches that will never be fixed, from day one?

I want to believe there is hope, but the inner cynic screams so loudly


Can't be worse than now, really.

Somehow I doubt you can even turn on RTX and run into single-digit FPS if the whole game is running like crap already.


Quote

using this technology means we don't have to fix things


I'd like a timestamp on where Russ said that

#16 Napoleon_Blownapart

    Member

  • Shredder
  • 1,167 posts

Posted 24 August 2018 - 11:14 AM

In your AMD settings, under 'Gaming' / 'Global Settings' / 'Global Graphics', is there a 'Chill' or Frame Rate Target Control setting? And under 'Gaming', have you configured MWO?

#17 NARC BAIT

    Member

  • PipPipPipPipPipPipPip
  • Ace Of Spades
  • 518 posts
  • Twitch: Link
  • LocationAustralia

Posted 25 August 2018 - 06:02 PM

View PostPeter2k, on 24 August 2018 - 07:58 AM, said:

I'm taking the word over some long standing review sites over random internet dude
yeah, but if you don't know them personally, aren't they all just some random internet dude? I ran my 1600X to its limits; my 2600X has the same limits, with a minor improvement in latency. The maximum frequency I could attain remained the same across both systems, with a B-die kit and an E-die kit. The underlying design of the IMC is the same between those generations; anyone claiming a 'vast' improvement is giving you a sales-guy pitch. I would expect the next generation to have a true design improvement to the memory controller itself

View PostPeter2k, on 24 August 2018 - 07:58 AM, said:

btw, having a better manufacturing process can indeed lead to better-performing silicon
maybe the lowest end of the silicon lottery gains an averaged improvement, but the reality is that if things don't change drastically, then they won't 'improve' by a lot

View PostPeter2k, on 24 August 2018 - 07:58 AM, said:

It's not like building a map or a mech requires a lot of coding, and the "game modes" aren't that either.
game assets were built in modelling software and imported into CryEngine; shopping around for engines should have been an easy task for the devs

View PostPeter2k, on 24 August 2018 - 07:58 AM, said:

Then you've got the old Scaleform, which isn't running really well but doesn't get updated, because it's a lot of work and a new engine comes with it anyway.


View PostPeter2k, on 24 August 2018 - 07:58 AM, said:

I'd like a timestamp on where Russ said that


I didn't quote Russ; it's a summary, from around 10:50:

'turn the tech, the RTX off'
'I mean, this is, I guess, my final point here today is just'
'if you look at the reflection of the glass over there, turn the RTX back on'
'and let's move behind the battlemech here and take a look'
'this is, this is a real easy one'
'this is a softball one from here 'cause'
'it just works' <- perfect meme
'and if you turn the RTX off'
'uhh, this would be an example of where you would just have to change the design of our hangar, probably, rather than trying to make this work the old way, so'
'umm, that's uhh ... '
'it's just been incredible'

Edited by NARC BAIT, 25 August 2018 - 06:06 PM.





