T(h)anks Sen

Have you tried disabling C1E, C3, C6, and the Package C-State to see if they're causing the low power state problem? Also try toggling Internal PLL Overvoltage; sometimes that can cause instability. Maybe in the low power state the board is undershooting Vcore and causing the crash. Are you running in offset or fixed Vcore mode?
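If you want to confirm the chip really is dropping into the deep C-states before it falls over, and you happen to have a Linux live USB lying around, the kernel exposes the residency counters in sysfs. Rough sketch below (assumes the standard cpuidle interface is present; on Windows a tool like ThrottleStop shows the same kind of info):

```python
#!/usr/bin/env python3
# Peek at core 0's C-state residency via the Linux cpuidle sysfs interface.
# Assumes /sys/devices/system/cpu/cpu0/cpuidle exists (standard on most distros).
import glob
import os

for state_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
    with open(os.path.join(state_dir, "name")) as f:
        name = f.read().strip()          # e.g. C1E, C3, C6
    with open(os.path.join(state_dir, "usage")) as f:
        usage = int(f.read().strip())    # number of times this state was entered
    with open(os.path.join(state_dir, "time")) as f:
        time_us = int(f.read().strip())  # total microseconds spent in this state
    print(f"{name:8s} entries={usage:10d} time={time_us / 1e6:10.2f} s")
```

If the deep states show zero residency after you've disabled them in the BIOS, you've at least ruled that variable out.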
You're correct about running PCI-E cards in a PCI-E x1 slot; it's just a bit of bandwidth loss. This, on the other hand, is a mini PCI-E slot, the type typically found in notebooks/laptops and mITX boards.
About the only things you can mount in them are wireless/WWAN cards and specialized SSDs (which are much more expensive than even mSATA SSDs, and far pricier than full-size 2.5" SSDs, surprisingly enough).
Oh, it's not a bad OC on the CPU. But the average de-lidded Ivy will hit about 4.7 GHz, and on the top end, 5.4 GHz is about as high as they'll go under water. Mine ranks in the bottom third unfortunately; luck of the draw, I'm afraid. At 5 GHz, Ivy is more than a match for Sandy, let alone at 5.4 GHz, especially with a higher Tj Max than Sandy is capable of.
Really, the difference lies in the way Intel chose to attach the heat spreader. Ivy isn't soldered in place like Sandy is; they used an adhesive and a really poor grade of TIM, with the intent (I believe) of crippling Ivy a bit to make SB-E look better and move units.
Remember when the 990X hit the market and Sandy was whupping it in single-threaded performance? The 990X didn't sell too well because... well, why buy a $1300 chip that performs slower than the $400 chip in nearly every task? On the client side, not many people had a use for a CPU that would only perform better in multi-threaded applications that could take advantage of the two extra cores. Think professional CAD/rendering/video editing and conversion, and you can see immediately how small the advantage of the 9**X chips actually was. That would be why they intentionally crippled Ivy, or at least that's my best estimation.
Back to the topic at hand. I upped the voltage to 1275 mV and it stopped artifacting in Heaven. I then managed to eke another 250 MHz out of the VRAM overclock (now at 1300 MHz core / 1750 MHz memory), and in doing so, running the Unigine Heaven 3.0 benchmark, I beat the scores of every single-GPU system running a 7950, GTX 670, GTX 680 or lower on Overclock.net. In fact, the only two single-GPU systems that scored higher were two 7970s, which beat my 7950 TF3 by 0.5 and 1.3 fps respectively. Core clocks were the same on those two systems and my memory OC was better, but the extra shaders the uncut 7970s have available (good for maybe a 5% performance bump) still made a small difference. If only AMD hadn't tossed mine in the 7950 bin...
http://www.overclock...mark-3-0-scores
...and mine:
Not bad eh?

Obviously below the top performing CF & SLI equipped systems, but...
----------------------------------------------------------------------------------------------------------------------------------------------------------
For reference, these are the settings to run Heaven 3.0 with for comparison.
Render: DirectX 11
Mode: 1680x1050 fullscreen
Shaders: high
Textures: high
Filter: trilinear
Anisotropy: 16x
Occlusion: enabled
Refraction: enabled
Volumetric: enabled
Anti-Aliasing: 8x
Tessellation: extreme
Driver mods or hacks are not allowed.
------------------------------------------------------------------------------------------------------------------------------------------------------------
That's why I don't trust the Catzilla benchmark. Faster in Heaven 3.0 than a GTX 680 by a good measure, yet a GTX 670 with a far lower clock speed beats out my heavily clocked 7950 in Catzilla? Not likely, and the physics scores are why it works out that way in Catzilla. Heavily weighted indeed.
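To show what I mean about the weighting, here's a toy example (the subscores and weights are completely made up, just to illustrate the idea, not Catzilla's actual formula): give the physics subscore enough weight and a card that loses the pure graphics tests can still come out ahead overall.

```python
# Toy weighted-score illustration. All numbers and weights are hypothetical,
# NOT Catzilla's real scoring formula; the point is only that weighting
# decides which subscore dominates the final result.

def overall(graphics, physics, w_gfx, w_phys):
    """Simple weighted sum of the two subscores."""
    return graphics * w_gfx + physics * w_phys

# Hypothetical subscores: the 7950 rig wins graphics, the 670 rig wins physics.
oc_7950 = {"graphics": 120, "physics": 90}
gtx_670 = {"graphics": 100, "physics": 110}

for w_gfx, w_phys in [(0.8, 0.2), (0.4, 0.6)]:
    s_7950 = overall(oc_7950["graphics"], oc_7950["physics"], w_gfx, w_phys)
    s_670 = overall(gtx_670["graphics"], gtx_670["physics"], w_gfx, w_phys)
    print(f"weights gfx={w_gfx} phys={w_phys}: 7950={s_7950:.1f}  670={s_670:.1f}")

# With graphics weighted 0.8 the 7950 rig wins; shift the weight toward
# physics (0.6) and the 670 rig comes out on top, even though nothing about
# the GPUs themselves changed.
```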