DX12 To Enable Nvidia And AMD GPUs To Work In SLIfire (Insert Made-Up Names Here)


9 replies to this topic

#1 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 03 March 2015 - 11:54 AM

http://www.extremete...rk-side-by-side

Personally the headline grabber isn't a big deal for me... in fact I'm sure it won't work.

What is big news, and good, is the combining of VRAM across cards when using SLI/Xfire; suddenly 2GB cards are less of an issue.

Thoughts?
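For anyone curious about the plumbing behind that headline: below is a minimal sketch, assuming only the public DXGI/D3D12 headers, of how an engine could enumerate a mixed Nvidia/AMD box and create an independent D3D12 device per adapter, which is what would give it a separate VRAM pool on each card. It's illustrative only, not how any shipping engine (or CryEngine) actually does it.

```cpp
// Sketch: enumerate every GPU in the system and create a D3D12 device
// on each one. Under DX12's explicit multi-adapter model each device
// exposes its own VRAM, so an engine could treat a 2GB Nvidia card and
// a 2GB AMD card as two separate pools instead of mirroring everything.
// Link against d3d12.lib and dxgi.lib.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Adapter %u: %s, %zu MB dedicated VRAM\n",
                    i, desc.Description,
                    desc.DedicatedVideoMemory / (1024 * 1024));
            devices.push_back(device); // one independent device per card
        }
    }
    // Actually sharing resources across adapters goes through
    // ID3D12Device::CreateSharedHandle / OpenSharedHandle (not shown).
}
```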

#2 xWiredx

    Member

  • Elite Founder
  • 1,805 posts

Posted 03 March 2015 - 12:18 PM

Any new feature of DX12 is really only relevant if it can be used through DX12/Win10's emulation layer for DX11/DX10/DX9 support. If the new features can't be used by games that aren't DX12-native, then it doesn't matter at all for MWO unless a newer version of CryEngine comes out with DX12 support and PGI is able to adopt it.

#3 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 03 March 2015 - 12:40 PM

xWiredx, on 03 March 2015 - 12:18 PM, said:

Any new feature of DX12 is really only relevant if it can be used through DX12/Win10's emulation layer for DX11/DX10/DX9 support. If the new features can't be used by games that aren't DX12-native, then it doesn't matter at all for MWO unless a newer version of CryEngine comes out with DX12 support and PGI is able to adopt it.


I would imagine a new CryEngine build with DX12 support is a given.

What isn't is PGI being able to adopt it inside MWO's life-cycle, given how long DX11 support took. I was posting the article more for general discussion with like-minded nerds than for anything MWO-specific.

And I just got done reading a thread over on SC where some faux nerd posted that 8GB cards are a requirement, because he took measurements showing his 8GB card used over 9GB of VRAM (brain explosion).

Edited by DV McKenna, 03 March 2015 - 12:41 PM.


#4 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 03 March 2015 - 01:12 PM

SC might well be able to use that kind of VRAM if you intentionally tried to make it do so, and happened to have at least one 4K monitor to work with (maybe two?), but no GPU is even close to being able to put out the kind of performance that would give acceptable framerates at that point anyway.

Dual-GPU setups are of course another story, so being able to use the VRAM of both cards independently is great news. Even if it doesn't benefit older games that don't transition to DX12, current cards can run those anyway, including MWO with its relatively modest GPU requirements.
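To put a rough number on the 4K point: the resolution-dependent buffers themselves are a small slice of any 8-9GB figure; textures and geometry dominate. A back-of-envelope sketch (RGBA8 targets and a five-target G-buffer are assumptions for illustration, not SC's actual renderer):

```cpp
// How much of a huge VRAM figure is actually the 4K framebuffer?
// Very little: render targets at 3840x2160 are tens of MB each.
#include <cstdio>

int main() {
    const long long w = 3840, h = 2160, bytes_per_px = 4;   // RGBA8
    double mb = double(w * h * bytes_per_px) / (1024.0 * 1024.0);
    std::printf("One 4K RGBA8 target: %.1f MB\n", mb);      // ~31.6 MB
    std::printf("Assumed G-buffer of 5 targets + depth: ~%.0f MB\n",
                mb * 6);                                    // ~190 MB
}
```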

#5 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 03 March 2015 - 01:22 PM

Catamount, on 03 March 2015 - 01:12 PM, said:

SC might well be able to use that kind of VRAM if you intentionally tried to make it do so, and happened to have at least one 4K monitor to work with (maybe two?), but no GPU is even close to being able to put out the kind of performance that would give acceptable framerates at that point anyway.

Dual-GPU setups are of course another story, so being able to use the VRAM of both cards independently is great news. Even if it doesn't benefit older games that don't transition to DX12, current cards can run those anyway, including MWO with its relatively modest GPU requirements.


I should clarify: in the SC thread, he has a dual-Xeon setup with four 290Xs, I think, so GPU grunt wasn't the issue.

In theory the VRAM scaling is brilliant news; it would depend on the implementation, however, as I suspect it won't be as simple as that!
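As a rough illustration of why the implementation matters: under today's mirrored AFR, two 2GB cards still behave like 2GB, while DX12-style explicit heaps could approach the sum minus whatever still has to be duplicated per card. All figures below are made up for illustration:

```cpp
// Back-of-envelope for effective VRAM in a 2-card setup.
// "duplicated" stands in for render targets and other per-card
// copies that can't be pooled; 0.5 GB is an assumed figure.
#include <algorithm>
#include <cstdio>

int main() {
    const double v1 = 2.0, v2 = 2.0;      // GB per card (hypothetical)
    const double duplicated = 0.5;        // GB that must exist on both
    double afr    = std::min(v1, v2);     // mirrored AFR today
    double pooled = v1 + v2 - duplicated; // explicit-heap ideal
    std::printf("Mirrored AFR effective VRAM: %.1f GB\n", afr);
    std::printf("Pooled (explicit) estimate:  %.1f GB\n", pooled);
}
```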

#6 9erRed

    Member

  • Overlord
  • 1,566 posts
  • Location: Canada

Posted 03 March 2015 - 01:59 PM

Greetings all,

So here's the 'stupid' question that everyone asks at some point.
- If there are cards out there with 2 GPUs embedded,
- And with 4 or 6 RAM mounting points available,
Q: What's the reason that cards don't just install the largest RAM chips available?
(getting rid of the 1, 2, or 3GB on most, and just mounting 16GB or more.)

Are the GPUs the limiting factor? The BIOS? With thousands of cores available on GPUs, what's the reason we are not seeing the jump in capabilities? Nvidia has the Tesla series just for this reason: no outputs for any displays, just raw computational power.
- Is it a design issue?
- Is it a cost issue?

Just asking,
9erRed

#7 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 03 March 2015 - 02:01 PM

9erRed, on 03 March 2015 - 01:59 PM, said:

Greetings all,

So here's the 'stupid' question that everyone asks at some point.
- If there are cards out there with 2 GPUs embedded,
- And with 4 or 6 RAM mounting points available,
Q: What's the reason that cards don't just install the largest RAM chips available?
(getting rid of the 1, 2, or 3GB on most, and just mounting 16GB or more.)

Are the GPUs the limiting factor? The BIOS? With thousands of cores available on GPUs, what's the reason we are not seeing the jump in capabilities? Nvidia has the Tesla series just for this reason: no outputs for any displays, just raw computational power.
- Is it a design issue?
- Is it a cost issue?

Just asking,
9erRed


I think I get what you're asking. If it's 'why don't we see a GTX 960 with 8GB-plus of VRAM?', it's quite simple.

The cards simply don't have the raw power to utilise that much VRAM. As above, the 960 wouldn't even be able to use 4GB, as it simply does not have the grunt to do the work; see the rough sketch below.

Dual-GPU cards are another beast, where heat and power are their own restrictions, as well as needing Xfire to work.
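One way to see the 'grunt' point, using only the 960's published ~112GB/sec memory bandwidth: merely streaming through a big VRAM pool once per frame blows the frame budget long before 8GB becomes useful. A crude upper-bound sketch (it ignores caching and the fact that real frames touch only part of VRAM):

```cpp
// Time to read a VRAM pool once, bounded by memory bandwidth alone.
// 112 GB/s is the GTX 960's published spec; treat results as a rough
// intuition, not a benchmark.
#include <cstdio>

int main() {
    const double bandwidth_gbs = 112.0;      // GTX 960 memory bandwidth
    const double sizes_gb[] = {2.0, 4.0, 8.0};
    for (double vram_gb : sizes_gb) {
        double ms_per_pass = vram_gb / bandwidth_gbs * 1000.0;
        std::printf("Touch %.0f GB once: %.1f ms (max ~%.0f fps if every"
                    " frame read it all)\n",
                    vram_gb, ms_per_pass, 1000.0 / ms_per_pass);
    }
}
```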

Edited by DV McKenna, 03 March 2015 - 02:02 PM.


#8 9erRed

    Member

  • Overlord
  • 1,566 posts
  • Location: Canada

Posted 03 March 2015 - 02:36 PM

Greetings all,

That somewhat answers the question, but it still doesn't explain why these single Tesla cards (a simple x16-slot plug-in) are just blowing away normal cards, with such outrageous architecture installed on them.

TESLA GPUS FOR WORKSTATIONS (Tesla K40 / Tesla K20)
- Peak double-precision floating point performance: 1.43 Tflops / 1.17 Tflops
- Peak single-precision floating point performance: 4.29 Tflops / 3.52 Tflops
- Memory bandwidth (ECC off): 288 GB/sec / 208 GB/sec
- Memory size (GDDR5): 12 GB / 5 GB
- CUDA cores: 2880 / 2496

If these cards are so powerful, why are we not seeing this tech installed? If I pay $400-$1,000 or more for what the industry calls a high-end or top card, and these Tesla units 'laugh' at them, what am I missing here? Yes, I know these cards are in the $3,000 range.
http://www.nvidia.ca...rkstations.html

9erRed

#9 Oderint dum Metuant

    Member

  • Ace Of Spades
  • 4,758 posts
  • Location: United Kingdom

Posted 03 March 2015 - 02:48 PM

9erRed, on 03 March 2015 - 02:36 PM, said:

Greetings all,

That somewhat answers the question, but it still doesn't explain why these single Tesla cards (a simple x16-slot plug-in) are just blowing away normal cards, with such outrageous architecture installed on them.

TESLA GPUS FOR WORKSTATIONS (Tesla K40 / Tesla K20)
- Peak double-precision floating point performance: 1.43 Tflops / 1.17 Tflops
- Peak single-precision floating point performance: 4.29 Tflops / 3.52 Tflops
- Memory bandwidth (ECC off): 288 GB/sec / 208 GB/sec
- Memory size (GDDR5): 12 GB / 5 GB
- CUDA cores: 2880 / 2496

If these cards are so powerful, why are we not seeing this tech installed? If I pay $400-$1,000 or more for what the industry calls a high-end or top card, and these Tesla units 'laugh' at them, what am I missing here? Yes, I know these cards are in the $3,000 range.
http://www.nvidia.ca...rkstations.html

9erRed


GeForce is aimed at consumers and has a large chunk of its floating-point units disabled compared to, say, a Quadro card, which will have all the FP units enabled. This is done to keep power requirements and heat in check, and to let the CPU deal with FP calculations (see MWO using FP a lot!).

Quadro is aimed at 3D professionals: video editing, CAD, and other 3D software. The hardware used is exactly the same; the difference is the BIOS/driver inside it. You can game on these, but it's behind the consumer GeForce versions.

Tesla is a different beast entirely. The difference is the unlocked double-precision floating-point performance, giving around 1/3 of peak single-precision performance on Tesla cards (per the table above), compared to a lot less for GeForce cards. Tesla cards have ECC-protected memory and are available in models with higher on-board memory. Also, I don't believe Tesla cards have any output ports, as display is not their function; they're there to do compute work.

You could put 12GB of VRAM on a 980 or 290X, but the card simply does not have the processing power to use that amount.
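A quick sanity check of that ratio against the spec table quoted above; the 1/24 figure for consumer Kepler GeForce parts is the commonly cited number, added from memory rather than from this thread:

```cpp
// DP:SP ratios computed straight from the K40/K20 numbers above.
#include <cstdio>

int main() {
    std::printf("K40 DP/SP: %.2f\n", 1.43 / 4.29);  // ~0.33, i.e. 1/3
    std::printf("K20 DP/SP: %.2f\n", 1.17 / 3.52);  // ~0.33, i.e. 1/3
    // Consumer Kepler GeForce parts ship with most DP units disabled,
    // commonly quoted at 1/24 of SP throughput: the gap described above.
}
```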

Edited by DV McKenna, 03 March 2015 - 02:55 PM.


#10 Flapdrol

    Member

  • The 1 Percent
  • 1,986 posts

Posted 04 March 2015 - 02:34 AM

While it's possible, I don't think most developers are even going to bother with anything other than alternate frame rendering, which pretty much requires identical cards. You could maybe run compute tasks on one brand of card and render on the other.

Civ: BE in Crossfire with Mantle does something fancy that lets multiple GPUs work on a single frame; there's reduced latency compared to AFR, but the overall performance is barely above a single GPU.

Anyway, I'm not a fan of multi-GPU. I'll stick to a single card.




