
Future Proofing


8 replies to this topic

#1 Iron Riding Cowboy

    Member

  • 293 posts

Posted 21 September 2014 - 11:20 PM

I know that for the longest time future proofing was not really possible in the PC world, until now. I think with Intel Skylake coming out next year it's very possible to future proof for the next 5-10 years on everything but the GPU.

1. CPUs have not made much progress since Sandy Bridge. So the only ways we are going to see faster CPUs are going smaller (where we are hitting a brick wall), going with more cores (but games will have to start being made with 8+ cores in mind), or going with new tech like quantum computing, and we will be lucky to see that in the next 30 years or so.
There are graphene CPUs, but we are a good ways off from that yet.
http://www.technolog...ing-any-faster/

2. PCI-E. We just now maxed out PCI-E 2.0 with the GTX 700 series and are just now starting to break into PCI-E 3.0, and Skylake is supposedly going to have PCI-E 4.0... yeah.

3. RAM. We have not maxed out the speed of DDR3 RAM for gaming yet, and now we're getting DDR4.

PCs in the next 10 years are probably going to focus more on energy efficiency and other things over performance.

If I'm wrong then feel free to point it out :)

Edited by Iron Riding Cowboy, 21 September 2014 - 11:58 PM.


#2 Stormwolf

    Member

  • Elite Founder
  • 3,951 posts
  • Location: CW Dire Wolf

Posted 22 September 2014 - 12:06 AM

Iron Riding Cowboy, on 21 September 2014 - 11:20 PM, said:

I know that for the longest time future proofing was not really possible in the PC world, until now. I think with Intel Skylake coming out next year it's very possible to future proof for the next 5-10 years on everything but the GPU.

1. CPUs have not made much progress since Sandy Bridge. So the only ways we are going to see faster CPUs are going smaller (where we are hitting a brick wall), going with more cores (but games will have to start being made with 8+ cores in mind), or going with new tech like quantum computing, and we will be lucky to see that in the next 30 years or so.
There are graphene CPUs, but we are a good ways off from that yet.
http://www.technolog...ing-any-faster/

2. PCI-E. We just now maxed out PCI-E 2.0 with the GTX 700 series and are just now starting to break into PCI-E 3.0, and Skylake is supposedly going to have PCI-E 4.0... yeah.

3. RAM. We have not maxed out the speed of DDR3 RAM for gaming yet, and now we're getting DDR4.

PCs in the next 10 years are probably going to focus more on energy efficiency and other things over performance.

If I'm wrong then feel free to point it out :)


I work in IT; 10 years is too long a time to make accurate predictions. Hell, I remember when the first CD-ROM players came on the market, burning discs at home seemed impossible, yet 4 years later you could even do that. There are far too many unexpected developments to make any predictions.

Though you are right that CPUs have hit a brick wall and that we had to move to multicore systems because of it. Even now various ways around this are being explored, but there isn't a definite solution yet.

#3 Iron Riding Cowboy

    Member

  • 293 posts

Posted 22 September 2014 - 12:16 AM

Stormwolf, on 22 September 2014 - 12:06 AM, said:


I work in IT; 10 years is too long a time to make accurate predictions. Hell, I remember when the first CD-ROM players came on the market, burning discs at home seemed impossible, yet 4 years later you could even do that. There are far too many unexpected developments to make any predictions.

Though you are right that CPUs have hit a brick wall and that we had to move to multicore systems because of it. Even now various ways around this are being explored, but there isn't a definite solution yet.

True, but I'm not seeing it anytime soon unless graphene CPUs make a breakthrough, and even then it will take a long time for games to start being made for it.

#4 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 22 September 2014 - 07:30 AM

Stormwolf, on 22 September 2014 - 12:06 AM, said:


I work in IT; 10 years is too long a time to make accurate predictions. Hell, I remember when the first CD-ROM players came on the market, burning discs at home seemed impossible, yet 4 years later you could even do that. There are far too many unexpected developments to make any predictions.

Though you are right that CPUs have hit a brick wall and that we had to move to multicore systems because of it. Even now various ways around this are being explored, but there isn't a definite solution yet.


The problem is that multicore systems run straight into Amdahl's Law: the benefit of adding more threads quickly hits diminishing returns, because the serial portion of a program never gets faster.
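
Amdahl's Law is easy to sketch in a few lines. This is just an illustration; the 90%-parallel workload is an assumed example, not a measurement of any real game:

```python
# Amdahl's Law: if a fraction p of the work can be parallelized,
# the speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a workload that is 90% parallel can never exceed 10x,
# no matter how many cores you throw at it.
for cores in (2, 4, 8, 16, 1000):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
# 2 cores -> 1.82x, 8 cores -> 4.71x, 1000 cores -> only 9.91x
```

The takeaway matches the post: past a handful of cores, each additional core buys less and less unless the software is almost perfectly parallel.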

The decades-long explosion of computing technology revolved entirely around component miniaturization. Now that's no longer really getting us anything. Even though we can push a little further than scientists were predicting a handful of years ago before hitting quantum tunneling problems, doing so is not serving much of a purpose. That's largely why things haven't budged since Sandy Bridge.

Yes, sure, new steps could be taken. We could move to graphene chips, or light-based computing as HP is working on, or quantum computing for those tasks where it's actually faster (which isn't everything), or numerous other directions, but miniaturization easily netted us doubling performance every couple of years or so, sometimes faster, for decades. That's exponential growth, continuously. Now we've picked all that low-hanging fruit, and even a revolutionary step might net us, like, one doubling of performance, at enormous cost, instead of the half dozen or more doublings that came in a typical decade just as a matter of normal, relatively cheap fabrication process development.
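
The arithmetic behind that claim is simple but worth spelling out, since it shows why losing the miniaturization treadmill hurts so much:

```python
# Performance doubling every 2 years, sustained for a decade:
doublings = 10 / 2              # five doublings in ten years
gain = 2 ** doublings
print(gain)                     # 32.0 -> roughly 32x per decade
# A single one-off doubling from some new technology is tiny by comparison.
```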

I think it's safe to say things are going to last a lot longer than we've been used to. My standard for hardware replacement is usually the existence of something twice as powerful, without massive price hikes over what I own (so no $1000 CPUs). By that standard, I expect my 3570k to remain viable until close to 2020, if not all the way to 2020, before something comes out in the $200-$350 range that's twice as fast. This is especially true because new Intel chips on smaller processes are increasingly poor overclockers. Doubling my 3570k at 3.6 GHz (max stock 4-core turbo) is going to be hard enough, but doubling it at 4.2 GHz? I won't hold my breath. For GPUs, my 7970 might take until 2016/2017 to become replaceable, and that's assuming I'm willing to spend more. If I purchased a GTX 970 right now, I'd expect it to last longer still; I wholly expect one purchased today could stay quite viable until decade's end.

Like the OP says, RAM is even more glacial; only modest increases in speed, plus price reductions to bring faster modules into the mainstream, have been necessary. Sure, we've hit a point where a few programs can saturate DDR3-1600, but DDR3 goes much higher at reasonable prices: 2133 is cheap, and 2400 and up is getting there. It's going to be a long time before we need more RAM performance than DDR3-2400 can provide, so DDR4 is mostly useless to the consumer.
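
For rough context, peak theoretical bandwidth of a single 64-bit DDR channel scales linearly with transfer rate. This is a back-of-the-envelope sketch that ignores real-world efficiency, channel count, and latency:

```python
# Peak theoretical bandwidth of one 64-bit DDR channel:
# (mega-transfers per second) * 8 bytes per transfer.
def ddr_bandwidth_gbs(mt_per_s):
    return mt_per_s * 8 / 1000.0  # GB/s

print(ddr_bandwidth_gbs(1600))  # 12.8 GB/s
print(ddr_bandwidth_gbs(2400))  # 19.2 GB/s -> a 50% jump within DDR3 itself
```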

Edited by Catamount, 22 September 2014 - 07:35 AM.


#5 ninjitsu

    Member

  • FP Veteran - Beta 2
  • 402 posts

Posted 22 September 2014 - 09:37 AM

Catamount, the i7 5820k is already twice as fast as your 3570k and it's under $400

#6 Iron Riding Cowboy

    Member

  • 293 posts

Posted 22 September 2014 - 12:22 PM

ninjitsu, on 22 September 2014 - 09:37 AM, said:

Catamount, the i7 5820k is already twice as fast as your 3570k and it's under $400

You may wish to look this over before you make such bold claims. And Haswell-E has more cores, but it's not much faster per core.

http://www.tomshardw...cpu,3918-5.html

Edited by Iron Riding Cowboy, 22 September 2014 - 11:44 PM.


#7 Worm Seraphin

    Member

  • Legendary Founder
  • 92 posts
  • Location: Toronto, Canada

Posted 22 September 2014 - 12:33 PM

Ever hear of planned obsolescence?
A large part of the industry depends on people like us buying stuff, so they will find a way to keep us doing as we have been for years.
It's pretty much accepted now that multiple cores are where things are going.
For me it's usually about $200 a year for a component, then a platform upgrade (mobo/CPU/RAM) every 3-4 years. I don't see that changing.

Edited by Worm Seraphin, 22 September 2014 - 12:40 PM.


#8 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 22 September 2014 - 01:22 PM

ninjitsu, on 22 September 2014 - 09:37 AM, said:

Catamount, the i7 5820k is already twice as fast as your 3570k and it's under $400


It can be up to twice as fast, but even in apps that use all six cores well, it's more often closer to 50% faster, and it costs just shy of twice what I paid for my 3570k. That means that dollar for dollar it actually gives less performance, and only in good multithreaded software at that. It's only very moderately faster than the 3770k, which was almost $100 cheaper back when I built this system than the 5820k is now.

Anyone can get faster components if they're willing to spend vastly more :P

#9 ninjitsu

    Member

  • FP Veteran - Beta 2
  • 402 posts

Posted 22 September 2014 - 03:06 PM

Iron Riding Cowboy, on 22 September 2014 - 12:22 PM, said:

You may wish to look this over before you make such bold claims. And Haswell-E has more cores, but it's not much faster per core.

http://www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-5.html


What is this supposed to be showing me? It's not testing any i5s. Read my post before making bold claims?




