Desintegrator, on 05 August 2015 - 07:45 AM, said:
If you like loud, annoying pump noise even when your CPU is idle, I would also recommend an AIO cooling system!
Not sure how you overclock with water cooling, but my Sandy Bridge at 5GHz is dead silent pump-wise. Get a piece of rubber foam (a mouse pad) or thick neoprene and place your pump on it.
The only noise I have is my fans, and thanks to running low flow those are nearly whisper quiet. Temps average the mid-20s C under no load and occasionally reach the high 40s if I REALLY stress the system.
Thanks for the info. I likely won't upgrade right now since everything except MWO is still good to go, but it's nice to be pointed in the right direction before I make another mistake. I'm definitely hitting the CPU bottleneck since I upgraded my GPU, but the game is still very playable at 1080p for now.
Túatha Dé Danann, on 06 August 2015 - 07:40 AM, said:
I have an Intel Core I7 4970K with 6C/12T @4.4 GHz.
I Like your Name, But Wat!
1) There's no such chip as the i7-4970K; I'm thinking you meant the i7-4790K
2) The i7-4790K is 4C/8T, not 6C/12T
3) If it's not a 4790K, then maybe you're talking about the i7-970 or the 3970X Extreme Edition; those two are 6C/12T
Túatha Dé Danann, on 06 August 2015 - 07:40 AM, said:
I have an Intel Core I7 4970K with 6C/12T @4.4 GHz. I'm not CPU bound - in fact, my CPU starts to idle around when I throw MWO at it. I might have to upgrade the GPU to a (or 4) Titan X or similar before I come even close to saturation-levels.
I had a (roughly) 25% frame rate increase by upgrading from a 4C to a 6C CPU. The benchmarks above show just something like 5% in raw IPC power, so I don't know where that 25% would come from clock-for-clock.
You would have to set both CPUs to the same frequency and run the benchmark again.
The 6700K might be a nice CPU for its price tag, but to be honest, 4C/8T is a little outdated if you really want to push the limits and be prepared for the next five years or so.
Games these days try to make heavy use of multi-threading, so a 6- or 8-core CPU is worth more than raw clock-for-clock power alone. We even see GPU-based computation approaches, and a single GPU shader is weaker than any dedicated CPU core. So the importance of raw IPC gets diluted across the core count.
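As a rough illustration of that point, a minimal C++ sketch (the function names are invented stand-ins, not any engine's real code) of how per-frame systems can be spread across hardware threads - the more cores, the more of these genuinely overlap:

```cpp
// Sketch: independent per-frame systems farmed out to worker threads.
// On a 6C/12T chip these can truly run in parallel; on 4C/8T they start
// competing for cores sooner.
#include <functional>
#include <thread>
#include <vector>

// Hypothetical per-frame systems - stand-ins for real engine work.
void update_physics()  { /* ... */ }
void update_ai()       { /* ... */ }
void update_audio()    { /* ... */ }
void build_draw_list() { /* ... */ }

int main() {
    std::vector<std::function<void()>> frame_tasks = {
        update_physics, update_ai, update_audio, build_draw_list
    };

    std::vector<std::thread> workers;
    for (auto& task : frame_tasks)
        workers.emplace_back(task);   // one worker per task
    for (auto& w : workers)
        w.join();                     // finish the frame's work before presenting
}
```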
Another thing: MWO is pretty much NOT optimized. If I compare the load MWO puts on my system with another game... well, I was able to get 60 fps in my hangar with a ship that has... 3 million polys? (Plus the hangar itself and all additional objects, it may end up at something like 5 million polys.)
edit:
The primary bottleneck for CPUs right now is draw calls. Once DX12 hits the mainstream, many older CPUs will become viable again.
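A back-of-envelope C++ sketch of why that is - the per-draw costs below are invented numbers purely for illustration, but the shape of the argument holds: a thinner driver layer means the same CPU can issue far more draws per frame:

```cpp
// Illustration only: the microsecond costs below are assumptions, not
// measurements. The point is that per-draw CPU overhead caps how many
// draw calls fit into a 60 fps frame budget.
#include <cstdio>

int main() {
    const double dx11_us_per_draw = 40.0;   // assumed per-draw CPU cost
    const double dx12_us_per_draw = 5.0;    // assumed, much thinner driver
    const double frame_budget_us  = 16667;  // one frame at 60 fps

    std::printf("Old-style budget:  ~%.0f draws/frame\n",
                frame_budget_us / dx11_us_per_draw);
    std::printf("DX12-style budget: ~%.0f draws/frame\n",
                frame_budget_us / dx12_us_per_draw);
}
```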
That's an IB chip? You mean the 4960X? Then I can actually guarantee that you have frame rate dips below 60 (probably into the high 20s even) if you have everything set to 'very high'.
We already know that MWO doesn't benefit from CPUs beyond 4 cores and that there is a negligible 1-2 fps gain from having a 6-core or 8-core Intel chip. We've already run the numbers here. A 4790K at 4.5GHz, a 5820K at 4.5GHz, and a 5960X at 4.5GHz all perform within margin of error of each other.
Also, MWO has to be ported to DX12, or you won't see any of those benefits.
It shows maybe a 1% gain in games if you are using 1080p resolution; The Witcher 3 and Shadow of Mordor showed not much change at all. Heck, in The Witcher 3 the FX-8370 was 112 avg FPS vs 119 for Skylake, with all of them posting 99% of frametimes under 12ms.
$350 for a CPU... no thanks. There are much better options out there; save money and not really suffer framerate-wise unless you're gaming at 640x480.
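For what it's worth, a "99% of frametimes under 12ms" figure like the one above is just a percentile over a frametime capture. A minimal C++ sketch, assuming a log with one frametime in milliseconds per line (the filename is hypothetical):

```cpp
// Sketch: 99th-percentile frametime from a capture log.
// "frametimes.csv" is a made-up name for a one-value-per-line dump.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    std::ifstream log("frametimes.csv");
    std::vector<double> ms;
    for (double v; log >> v; ) ms.push_back(v);
    if (ms.empty()) return 1;

    std::sort(ms.begin(), ms.end());
    // 99% of frames finished at or under this value.
    std::size_t idx = std::min(ms.size() - 1,
                               static_cast<std::size_t>(ms.size() * 0.99));
    std::printf("99th percentile frametime: %.2f ms\n", ms[idx]);
}
```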
You didn't just compare a CPU-bound game to a GPU-bound game, did you?
Official numbers based on Intel's -retail- Skylake CPUs are now blowing up across the interwebosphere this morning and the conclusions are in: Skylake is up to 10% faster clock-per-clock than Haswell and up to 25% faster clock-per-clock than Sandy Bridge.
What does this mean for MWO players? Anybody looking to build a new system should first look at brand-new Skylake-based chips that fit their budget. MWO loves CPU cycles, and Skylake has shown that it is now the best choice at a given clock speed and core count.
If you're about to pull the trigger on a Haswell-based system to enhance your MWO (and other gaming) experience, I would recommend jumping on the little stockpile of Skylake chips that exists now or waiting for supply to stabilize. It shouldn't take too long.
I also recommend going with an AIO over air cooling. Skylake will do fine on air and has some OC headroom (4.2-4.5GHz) there, but temps can get a bit unwieldy the higher you go. Small increases in voltage on Skylake tend to produce much higher temps.
I have a Sandy as a backup rig and a Haswell for my main rig... nope, I ain't upgrading to Skylake...
It's possible you haven't understood what he's posted. If you're on Haswell, there ain't no point. Hence why he specifically states "Anybody looking to build a new system".
I have an i5 2500K and am looking to upgrade... not for the CPU itself but for PCIe 3.0, with DX12 and multi-GPU resource pooling in mind.
So when I upgrade my GPU to a newer card, I don't have to box up my old one as a backup; I can just keep it in my PC.
So what this means is that PCIe 2.0 is not going to cut it for much longer.
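The bandwidth math behind that, worked out in a small C++ sketch (these are the published PCIe link rates and encodings; real-world throughput lands a bit below them):

```cpp
// Per-direction bandwidth of an x16 slot, from published link rates.
// PCIe 2.0: 5 GT/s, 8b/10b encoding    -> 500 MB/s per lane.
// PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~985 MB/s per lane.
#include <cstdio>

int main() {
    const double lanes = 16;
    const double pcie2_per_lane = 5.0e9 * (8.0 / 10.0)    / 8.0;  // bytes/s
    const double pcie3_per_lane = 8.0e9 * (128.0 / 130.0) / 8.0;  // bytes/s

    std::printf("PCIe 2.0 x16: %.2f GB/s\n", lanes * pcie2_per_lane / 1e9);
    std::printf("PCIe 3.0 x16: %.2f GB/s\n", lanes * pcie3_per_lane / 1e9);
}
```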
PC gaming is about to see a rather drastic change and improvement in the way multi-GPU setups are implemented. No longer will you be forced to pair dGPUs from the same architecture/family or even brand. The ability to combine any two or more GPUs and have their resources (GPU cores, VRAM, etc.) stacked and pooled will be possible with DX12. Whether you combine Intel or AMD iGPUs with any dGPU, or Nvidia and AMD dGPUs, it won't matter: their resources can be shared for additional gaming performance. No longer will the iGPU in your system go to waste. This is awesome news for all of us. I know this has been known as a possibility with DX12 for some time, but this is the first time I've seen a demo of it and a game dev talking about how it can be implemented.
The demo in the video uses one of AMD's latest APUs paired with an R9 290X - an unlikely and formerly "unbalanced" combination. But now with DX12 we not only have reduced CPU overhead, allowing this CPU/dGPU combo to actually perform optimally, but we also have the APU's iGPU added to the total GPU capability of this particular system for even more performance. As mentioned, this is not exclusive to AMD hardware; they just happened to be running it on an AMD system. With the latest APUs having such strong iGPU performance, it was a good platform to demonstrate how effective this can be.
Currently, the way multi-GPU setups work is that each GPU (in a dual-card config) takes turns rendering each frame. Some of the disadvantages are that both GPUs have to be nearly identical, run at the same clock speeds, and hold the same data in each card's VRAM, resulting in redundancy and limits on which cards can be used or combined. The new approach made possible with DX12 is that devs can now control which aspects of the scene are rendered by which GPU. So you can have the high-end dGPU still doing most of the heavy lifting, rendering the majority of the scene, while the iGPU takes on the remaining rendering tasks. This means the dGPU doesn't have to render the entire scene by itself, only part of it, which, as you can guess, will increase frame rates.
As stressed in the video, it will be up to the devs of each game to optimize and sort out which tasks can be offloaded to an iGPU and how the work is divided. One key thing to note is that this will allow for much better CrossFire/SLI or hybrid dGPU combos, as the GPUs will not be taking turns rendering each frame; each frame will be rendered in concert, with each GPU taking on specific elements of the scene. This will also allow for VRAM stacking, so two cards with 4GB of VRAM each will amount to a total of 8GB of VRAM available. One card can handle all the textures while the other handles lighting, etc. The potential is there, but again, it will be strongly dependent on the devs to implement it.
Here's the demo video:
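To make the work-division idea concrete, here's a conceptual C++ sketch. Every type and name in it is invented for illustration - this is not the real D3D12 API - but it captures the shape of the idea: the app, not the driver, decides which GPU handles which passes of the frame:

```cpp
// Conceptual only: invented types, not real D3D12. Under explicit
// multi-adapter the application assigns render passes to adapters itself.
#include <cstdio>
#include <string>
#include <vector>

struct Gpu        { std::string name; };
struct RenderPass { std::string name; const Gpu* assigned; };

int main() {
    Gpu dgpu{"R9 290X (dGPU)"};
    Gpu igpu{"APU graphics (iGPU)"};

    // Heavy geometry/shading stays on the dGPU; cheaper post-process
    // work is offloaded to the otherwise idle iGPU.
    std::vector<RenderPass> frame = {
        {"shadow maps",   &dgpu},
        {"main geometry", &dgpu},
        {"lighting",      &dgpu},
        {"post-process",  &igpu},
        {"UI composite",  &igpu},
    };

    for (const auto& p : frame)
        std::printf("%-13s -> %s\n", p.name.c_str(), p.assigned->name.c_str());
}
```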
It's possible you haven't understood what he's posted. If you're on Haswell, there ain't no point. Hence why he specifically states "Anybody looking to build a new system".
Thoroughly reading posts? Where's the fun in that?
I read it. Now, perhaps I have low expectations (no, I KNOW I have low expectations), but my six-core AMD and 750 Ti rock this thing until long after the sun comes up, with fifteen other things running at the same time.
Let the hammering begin. I've got my garbage can lid to protect me.
Iron Riding Cowboy, on 09 August 2015 - 01:50 AM, said:
I have an i5 2500K and am looking to upgrade... not for the CPU itself but for PCIe 3.0, with DX12 and multi-GPU resource pooling in mind. [...]
I'll be highly surprised, and a monkey's uncle, if they ever get SLI VRAM stacking and mixed-brand multi-GPU to work properly.
That doesn't actually address what I said; he says it's possible, MS says it's possible.
But we barely get SLI and CrossFire working properly and scaling, so until it's demonstrated working properly, it doesn't exist.
No, but they got integrated graphics to work with the dGPU with very little trouble, and it only took them two weeks to do it for the first time... It's about the same with multi-GPU... DX12 looks at all the GPUs in your PC as one big GPU; it's up to the developers to optimize the game to use it.
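The enumeration part, at least, is real today: DXGI already exposes every adapter in the system through one interface, whatever the vendor, and DX12's explicit multi-adapter builds on exactly that. A minimal Windows-only C++ sketch (link against dxgi.lib):

```cpp
// List every GPU the OS sees, integrated or discrete, any vendor.
#include <dxgi.h>
#include <cstdio>

int main() {
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  (void**)&factory)))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::printf("Adapter %u: %ls (%llu MB dedicated VRAM)\n", i,
                    desc.Description,
                    (unsigned long long)(desc.DedicatedVideoMemory >> 20));
        adapter->Release();
    }
    factory->Release();
}
```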
Iron Riding Cowboy, on 09 August 2015 - 03:08 AM, said:
No, but they got integrated graphics to work with the dGPU with very little trouble, and it only took them two weeks to do it for the first time... It's about the same with multi-GPU... DX12 looks at all the GPUs in your PC as one big GPU; it's up to the developers to optimize the game to use it.
That's not how it works; making an AMD APU and an AMD GPU work together is a small step.
There have to be drivers that will run an AMD GPU and an Nvidia GPU side by side, not to mention getting two different architectures to communicate and work together.
I'm not saying upgrade if you're on a Haswell system, am I? I don't see that anywhere. I'm saying if you were about to build a Haswell system, you should wait. If you're on a Sandy Bridge, Nehalem, or AMD system, though... upgrading is definitely something to consider if you want to crank the graphics settings.
Sure? Clock-for-clock only matters if both CPUs actually run at the same clock speed.
And that's where Skylake has some issues, at least right now. Older CPUs, especially Devil's Canyon, reach way higher clocks. If you want to buy a new one, Skylake is fine, at least the i5 - but mainly for the Z170 chipset.
The i7 is way too expensive. If you need more than four cores, go ahead and buy an i7 5820K. DDR4 costs the same either way, the CPU is roughly $20 more than the 6700K, and only the mainboard adds another $60 on top of it.
But you get six cores and reach the same clocks per core, thanks to the good old soldered heatspreader.
And the increase in power is way higher than going from a 6600K to a 6700K - for less of a price difference.
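Worked out with the post's own rough figures (assumed street prices, not actual quotes):

```cpp
// The post's rough numbers, nothing more: platform cost delta for going
// 6C/12T (5820K + X99) instead of 4C/8T (6700K + Z170).
#include <cstdio>

int main() {
    const double cpu_6700k   = 350;             // rough street price
    const double cpu_5820k   = cpu_6700k + 20;  // "roughly $20 more"
    const double board_delta = 60;              // X99 board premium

    std::printf("Extra cost for two more cores: $%.0f\n",
                (cpu_5820k - cpu_6700k) + board_delta);
}
```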
And if you're on a Sandy Bridge system, things won't change much if you only get roughly 10% more CPU power out of Skylake.
Honestly, if Intel - or AMD - doesn't get things moving, this will be the first CPU I replace because my mainboard is too outdated. SATA3 can't be called a true bottleneck, but SSDs are reaching a level where they are six times as fast as any SATA3-based SSD can be. That gets... tempting.
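A rough check on that "six times" figure, from published link rates (real drives land somewhat below these ceilings):

```cpp
// SATA3: 6 Gb/s with 8b/10b encoding -> ~600 MB/s ceiling.
// NVMe on PCIe 3.0 x4: ~3.9 GB/s of link bandwidth.
#include <cstdio>

int main() {
    const double sata3 = 6.0e9 * (8.0 / 10.0) / 8.0;         // bytes/s
    const double nvme  = 4 * 8.0e9 * (128.0 / 130.0) / 8.0;  // x4 link

    std::printf("SATA3 ceiling: %.0f MB/s\n", sata3 / 1e6);
    std::printf("PCIe 3.0 x4:   %.0f MB/s (~%.1fx)\n",
                nvme / 1e6, nvme / sata3);
}
```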
Not sure how you overclock with water cooling, but my Sandy Bridge at 5GHz is dead silent pump-wise. Get a piece of rubber foam (a mouse pad) or thick neoprene and place your pump on it.
The only noise I have is my fans, and thanks to running low flow those are nearly whisper quiet. Temps average the mid-20s C under no load and occasionally reach the high 40s if I REALLY stress the system.
He's talking about AIOs, where the pump is integrated into the CPU block or sometimes into the radiator. There's nothing to dampen there; the only solution is a silent case with bitumen damping on the side panels, or something similar.
I am running a classic Intel i7-875K at 3.9GHz with a GTX 970 and MWO works perfectly fine. CPU-wise the improvements have not been that amazing over the last few years, so if you can do some overclocking, I would rather advise buying a better graphics card.
That's not how it works; making an AMD APU and an AMD GPU work together is a small step.
There have to be drivers that will run an AMD GPU and an Nvidia GPU side by side, not to mention getting two different architectures to communicate and work together.
Well, believe what you like, but everything I have read says otherwise, even from Nvidia and AMD. It's one of the reasons Nvidia is pushing NVLink: to keep people from mixing cards with AMD. Unless you can show something to back up your argument, I'll take the word of the developers. But we won't have to wait long to see... lots of DX12 games are coming soon.