
AMD Unleashes First-Ever 5 GHz Processor


78 replies to this topic

#61 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 22 July 2013 - 01:16 PM

IqfishLP, on 22 July 2013 - 12:25 PM, said:

My room is hot enough. Tell me, why should I use AMD?


Do you cry at the IHRA nationals when the nitromethane exhaust burns your eyes? Sissies, sissies everywhere. Maybe open a window, crack a door, or, even better, crack a cold beer.

#62 Narcissistic Martyr

    Member

  • Veteran Founder
  • 4,242 posts
  • Location: Louisville, KY

Posted 22 July 2013 - 03:52 PM

Smokeyjedi, on 22 July 2013 - 01:16 PM, said:


Do you cry at the IHRA nationals when the nitromethane exhaust burns your eyes? Sissies, sissies everywhere. Maybe open a window, crack a door, or, even better, crack a cold beer.


Hear, hear!

#63 Aim64C

    Member

  • 967 posts

Posted 22 July 2013 - 04:10 PM

This is pretty interesting - though I'm not sure pushing the clock speeds higher is really going to be that big of a draw for AMD cores.

AMD's strength is in the near-linear scaling of its CPU architecture compared to Intel's. The current gaming and programming environment is still focused on very limited, sequential threading, but that is really going to start changing in a few years.

OpenCL and PhysX are getting more common, and the hardware able to run them (and run them efficiently) is getting very cheap. A bargain DX11 graphics card can kick an x86-64 CPU out the door in terms of floating-point performance, particularly with massively parallel tasks (such as collision detection and real-time deformation).

Both of those languages have data and processing parallelism native to their structure - but they still run more efficiently with multiple x86-64 cores to help balance against linear tasks and centralized organization of data. Which is where AMD's linear scaling of their cores comes in handy.

Eight AMD cores can hum away with plenty of data throughput where an 8-core Intel CPU clogs up and sees considerable performance drop-off as core count and task parallelism increase.
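The throughput claim above is really a claim about workload shape. A minimal, hypothetical sketch (Python purely for illustration; the names and numbers are invented): each worker sums its own slice of the data and writes only its own result, so no worker ever waits on another. This is the shape of work that scales across eight cores, and it's the same slice-per-worker structure an OpenCL kernel expresses per work-item.

```python
# Hypothetical sketch: data-parallel work where each worker owns a slice
# and a private result slot, so nothing serializes the workers.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, nworkers=8):
    chunk = len(data) // nworkers
    def sum_slice(t):
        lo = t * chunk
        hi = lo + chunk if t < nworkers - 1 else len(data)
        return sum(data[lo:hi])          # independent: no shared state
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        partials = list(pool.map(sum_slice, range(nworkers)))
    return sum(partials)                 # one cheap sequential combine

if __name__ == "__main__":
    print(parallel_sum([1.0] * 1_000_000))  # 1000000.0
```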

As games start to go 64-bit and physics environments get more complex and more reliant on standards like OpenCL and PhysX, we're going to see AMD in much better standing.

Though it won't be because they've got 5 GHz processors. That seems a little unnecessary.

I have a 3850, myself, with 36 gigs of RAM. I can run MWO and The Sims 3 concurrently with no problems and without even touching my RAM or CPU overhead.

It runs a bit hot for a server environment, as well. Maybe if you wanted to run a couple virtual machines each running dedicated servers for ARMA 3, or something...

Though I'm curious if it's just an up-clocked 8350 or if it has actually undergone some revisions that would make it operate better at higher clock speeds. I think a team got the 8350 up to 8 GHz or something stupid like that - so it would be amusing to see this thing get to 9.

#64 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 22 July 2013 - 04:51 PM

Aim64C, on 22 July 2013 - 04:10 PM, said:

This is pretty interesting - though I'm not sure pushing the clock speeds higher is really going to be that big of a draw for AMD cores.

AMD's strength is in the near-linear scaling of its CPU architecture compared to Intel's. The current gaming and programming environment is still focused on very limited, sequential threading, but that is really going to start changing in a few years.

OpenCL and PhysX are getting more common, and the hardware able to run them (and run them efficiently) is getting very cheap. A bargain DX11 graphics card can kick an x86-64 CPU out the door in terms of floating-point performance, particularly with massively parallel tasks (such as collision detection and real-time deformation).

Both of those languages have data and processing parallelism native to their structure - but they still run more efficiently with multiple x86-64 cores to help balance against linear tasks and centralized organization of data. Which is where AMD's linear scaling of their cores comes in handy.

Eight AMD cores can hum away with plenty of data throughput where an 8-core Intel CPU clogs up and sees considerable performance drop-off as core count and task parallelism increase.

As games start to go 64-bit and physics environments get more complex and more reliant on standards like OpenCL and PhysX, we're going to see AMD in much better standing.

Though it won't be because they've got 5 GHz processors. That seems a little unnecessary.

I have a 3850, myself, with 36 gigs of RAM. I can run MWO and The Sims 3 concurrently with no problems and without even touching my RAM or CPU overhead.

It runs a bit hot for a server environment, as well. Maybe if you wanted to run a couple virtual machines each running dedicated servers for ARMA 3, or something...

Though I'm curious if it's just an up-clocked 8350 or if it has actually undergone some revisions that would make it operate better at higher clock speeds. I think a team got the 8350 up to 8 GHz or something stupid like that - so it would be amusing to see this thing get to 9.


Ah, so elegantly put together. My frustration inside these forums keeps me from getting this entire idea out so cleanly. Thank you, sir. Makes my head hurt less already :P

#65 Bloodshed Romance

    Member

  • Legendary Founder
  • 726 posts
  • Location: Florence, South Carolina

Posted 22 July 2013 - 05:04 PM

Intel users will keep backing Intel, AMD users will keep backing AMD... I've used both (none of the new Intel stuff), but I went AMD 8350 because it's 8 true cores and the price point was what sold me... 8 cores... for like $180 at the time... sold... lol
Everyone with their 3770K or whatever is spending $350 just for the CPU; I could almost get two 8350s for that... sorry... not made of money... and it hasn't caused me problems yet, so... that's what sold me on AMD.

The true 8 cores and no Hyper-Threading was a nice plus too.

#66 Dragoon20005

    Member

  • 512 posts
  • Facebook: Link
  • Location: Singapore

Posted 22 July 2013 - 07:30 PM

AMD haters will always be haters

but look here

http://bf4central.co...-amd-optimized/

http://www.pcgamer.c...ostbite-3-game/

http://battlefieldlo...r-amd-hardware/

http://bf4central.co...ttlefield-4-pc/

EA chose to support AMD instead of the Intel/nVidia combo

so yea we will see Intel fans raging

Although, regarding the PC used for the BF4 demo: it was actually an Intel P67 system, but running dual Radeon GPUs.

Pic ref of the mobo here - the PWM heatsink fins carry Fatal1ty's signature.

http://www.asrock.co...20Professional/


Edited by Dragoon20005, 22 July 2013 - 07:40 PM.


#67 zinetwin

    Member

  • Elite Founder
  • 84 posts

Posted 22 July 2013 - 07:38 PM

Benchmarks!
http://www.hardwarec...river-5ghz.html

Saw these the other day, figured I'd pass them along.
Long story short, it's an impressive chip, but not worth the money. You could overclock the cheaper parts and get close enough, or spend a little more on the Intel and get better overall performance. However, it is a fast chip and puts AMD back near the top. It at least shows that AMD has the ability to make fast chips, TDP be damned!

#68 Dragoon20005

    Member

  • 512 posts
  • Facebook: Link
  • Location: Singapore

Posted 22 July 2013 - 08:01 PM

that price...

I wouldn't mind getting the FX-9590 if the price were closer to the Core i7-4770K's.

But, as mentioned before, IMO the FX-9590 is a super-turbocharged version of the FX-8350.

With the price difference between the FX-8350 and the FX-9590, one could invest in good water cooling and still get similar results, or put it toward a more powerful GPU.

#69 Erasus Magnus

    Member

  • Veteran Founder
  • 383 posts
  • Location: United States Of Mind

Posted 22 July 2013 - 08:28 PM

Smokeyjedi, on 17 July 2013 - 03:41 PM, said:


You missed the entire point: AMD has now cornered the market in everything besides CPUs for desktops and laptops. With APUs, PCs and desktops aren't the priority so much anymore... or so it seems.


I read somewhere that AMD is backing out of the PC CPU market and trying to conquer the console and tablet CPU market.
Perhaps they realized that they won't get a foothold in the desktop sector again and are trying a new niche.
The Xbox One and PS4 are already powered by AMD CPUs.
It's a bit sad, really. I was a day-one fan of AMD. I loved how they entered the scene and pretty much kicked Intel's arse from day one until the end of the Pentium 4 era.

#70 Aim64C

    Member

  • 967 posts

Posted 22 July 2013 - 09:17 PM

Erasus Magnus, on 22 July 2013 - 08:28 PM, said:



I read somewhere that AMD is backing out of the PC CPU market and trying to conquer the console and tablet CPU market.
Perhaps they realized that they won't get a foothold in the desktop sector again and are trying a new niche.
The Xbox One and PS4 are already powered by AMD CPUs.
It's a bit sad, really. I was a day-one fan of AMD. I loved how they entered the scene and pretty much kicked Intel's arse from day one until the end of the Pentium 4 era.


Actually, I'd say that AMD has it right on the money.

A lot of game engines these days are developed to run across both consoles and PCs: CryEngine, Unreal, and many of EA's engines (such as Frostbite).

By being the hardware supplier for both major consoles, AMD gets a lot more engine development optimized for its hardware - which translates to better performance in the desktop and laptop segment.

Further - where AMD's architecture shines is massively parallel execution, in both CPU and GPU (exceptionally so in the GPU). Nvidia and Intel have both taken somewhat the opposite approach to hardware architecture, preferring to balance for fewer, more linear operations.

Those programming for OpenCL have already noticed the difference, particularly on graphics cards - where you'll slam into a wall on an Nvidia card with certain programming styles that evoke many parallel operations, and you'll notice performance decay somewhat on AMD's if you structure your program to be more friendly to the CUDA specifications.
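To make the "programming style" point concrete, here is a minimal, hypothetical sketch (plain Python with invented function names; the structural point carries over to OpenCL kernels): the same per-element clamp written with divergent branches, then rewritten as straight-line arithmetic. On a GPU, the branchy form splits a wavefront when neighbouring work items take different paths; the arithmetic form keeps every lane on an identical instruction stream.

```python
# Sketch: the same per-element clamp, branchy vs. branch-free.
def clamp_branchy(x):
    if x < 0:          # neighbouring work items may take
        return 0       # different paths here: divergence
    if x > 255:
        return 255
    return x

def clamp_branch_free(x):
    # pure min/max selects: every lane executes the same operations
    return min(max(x, 0), 255)

# Both forms agree on every input; only the control flow differs.
assert all(clamp_branchy(x) == clamp_branch_free(x)
           for x in range(-300, 600))
```

The rewrite changes nothing about the result, only about how uniformly the work maps onto parallel hardware - which is exactly the kind of restructuring the post is describing.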

It's more than simple compiler presets and instructions. It's in the logical flow and design of your program, how you declare and handle pointers, arrays, etc. It's not just "optimize for" in your compiler.

Which means AMD is now the standard for software programming, and Intel/Nvidia will see many games not scale well onto their architectures (Intel won't have as much of a problem - but Nvidia's GPU architecture differs substantially from AMD's).

#71 Milocinia

    Member

  • Ace Of Spades
  • 1,470 posts
  • Location: Avalon City, New Avalon

Posted 23 July 2013 - 07:24 AM

Aim64C, on 22 July 2013 - 04:10 PM, said:

This is pretty interesting - though I'm not sure pushing the clock speeds higher is really going to be that big of a draw for AMD cores.

AMD's strength is in the near-linear scaling of its CPU architecture compared to Intel's. The current gaming and programming environment is still focused on very limited, sequential threading, but that is really going to start changing in a few years.

OpenCL and PhysX are getting more common, and the hardware able to run them (and run them efficiently) is getting very cheap. A bargain DX11 graphics card can kick an x86-64 CPU out the door in terms of floating-point performance, particularly with massively parallel tasks (such as collision detection and real-time deformation).

Both of those languages have data and processing parallelism native to their structure - but they still run more efficiently with multiple x86-64 cores to help balance against linear tasks and centralized organization of data. Which is where AMD's linear scaling of their cores comes in handy.

Eight AMD cores can hum away with plenty of data throughput where an 8-core Intel CPU clogs up and sees considerable performance drop-off as core count and task parallelism increase.

As games start to go 64-bit and physics environments get more complex and more reliant on standards like OpenCL and PhysX, we're going to see AMD in much better standing.

Though it won't be because they've got 5 GHz processors. That seems a little unnecessary.

I have a 3850, myself, with 36 gigs of RAM. I can run MWO and The Sims 3 concurrently with no problems and without even touching my RAM or CPU overhead.

It runs a bit hot for a server environment, as well. Maybe if you wanted to run a couple virtual machines each running dedicated servers for ARMA 3, or something...

Though I'm curious if it's just an up-clocked 8350 or if it has actually undergone some revisions that would make it operate better at higher clock speeds. I think a team got the 8350 up to 8 GHz or something stupid like that - so it would be amusing to see this thing get to 9.

There's a lot of truth in this, in that it's not necessarily that Intel processors are the best; it's that the software and benchmarks are better suited to them.

Once multi-core support is better implemented, we might see the gap closing due to AMD's better scaling.

At least the graphics market is a lot less clear cut and the healthy competition there has made it a win-win for us gamers.

#72 Aim64C

    Member

  • 967 posts

Posted 24 July 2013 - 11:02 PM

Kyocera, on 23 July 2013 - 07:24 AM, said:


There's a lot of truth in this, in that it's not necessarily that Intel processors are the best, it's the software and benchmarks which are better suited to them.

Once multi-core support is better implemented, we might see the gap closing due to AMD's better scaling.

At least the graphics market is a lot less clear cut and the healthy competition there has made it a win-win for us gamers.


This is true - though, honestly, I see AMD shooting ahead of Nvidia in the graphics arena.

I'm still swimming in the literature from the people who developed the graphics card industry - but from what I'm coming across, AMD ultimately has the superior architecture model for the processing that GPUs are good at.

Instructions that require little in the way of synchronous execution (where the result of one segment of code is necessary to begin executing another) run exceptionally well on GPUs - which REQUIRE there be little in the way of synchronous execution requirements to 'hide' memory latency. AMD's architecture is exceptionally reductionist and can execute massive amounts of parallel code very easily.

Nvidia seems to have taken a bit of a compromise between the two, with fewer, more complex execution units that better handle synchronous execution.

But it seems like a horrible compromise, considering the way x86 cores are developing. The core design would make sense when paired with system-on-a-chip designs like Tegra in mobile devices (where space and power limit how task-specific your hardware can become).

http://www.dailytech...article9005.htm

Those projects are old, and were overly optimistic at the time of their writing - but the per-pixel parallelism of raytracing lends itself exceptionally well to massively parallel designs. There are some sequential portions of execution - but those can be handled well in the program structure (and environments like OpenCL allow certain portions of code to be assigned, by preference and availability, to certain hardware).

And real-time ray tracing is where the graphics industry will eventually start to head. Scene complexity (which can go off the wall in 64-bit environments) does not appreciably affect render times at a given resolution. High-poly models with high-resolution textures don't affect render times in the slightest - and though reflections and effects potentially add a bit more render time, their detail is exquisite in any case.
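The per-pixel parallelism being described can be shown with a toy sketch (hypothetical Python; invented names, orthographic rays against a single unit sphere): nothing inside the loop depends on a neighbouring pixel's result, so every iteration could run on its own GPU lane.

```python
# Toy sketch of per-pixel parallelism in ray tracing. Orthographic rays
# fired along +z against a unit sphere centred on the view axis: a ray
# from (px, py) hits iff it starts inside the sphere's projected disc.
def pixel_hits_sphere(px, py):
    return px * px + py * py <= 1.0

def render_hit_count(w, h):
    hits = 0
    for y in range(h):                 # every iteration independent:
        for x in range(w):             # trivially parallel, per pixel
            px = (x - w // 2) / (w // 2)   # map pixel to [-1, 1)
            py = (y - h // 2) / (h // 2)
            hits += pixel_hits_sphere(px, py)
    return hits
```

Note what is absent: no pixel reads another pixel's value, which is why adding more spheres (scene complexity) grows per-ray cost but never introduces cross-pixel dependencies.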

The technology is also rather immature - just as with raster rendering, a number of redundant operations can be detected, and shorthand solutions implemented, to lower the cost of rendering certain things.

http://www.evga.com/...ge=1&print=true

Nvidia seems to have the advantage for the time being, because CUDA has been around a little longer and has better developer support...

But with both major console gaming systems running off of AMD GPUs and an increasing trend towards OpenCL (which can run on a lot of non-CUDA hardware, including FPGAs that could form the basis for exceptionally task-specific hardware...) - as OpenCL matures, I think we will see far better scalability and performance on AMD architectures in real-time ray-tracing environments.

Just on the graphics end of things.

From the heterogeneous computing side of things - x86 cores can handle sequential instructions far quicker than most graphics cards. In many cases you can process on the CPU and bus the results over PCI Express before the GPU would finish the same operations locally. Which means it's usually best to reserve the GPU for what it's intended to do: parallel execution.

So, that's why I'm a little vexed by Nvidia's approach to desktop GPUs. Though, perhaps they are going for something that will appeal to the mobile markets more as the concept of heterogeneous computing takes off and more programs start to utilize assets like the GPU for computation.

Ultimately, I think CUDA is in a bad position. While it's certainly more mature than OpenCL, OpenCL is much less picky and runs on just about any digital circuit with processing capacity. Nvidia doesn't seem to have marketed CUDA too aggressively, relying mostly on its ties with the gaming industry to secure a market edge for APIs like PhysX (whose GPU elements are horribly under-utilized by most developers, because its proprietary nature would exclude customers with AMD cards).

Which is why I see Nvidia having either to drop the CUDA standard or to make it an API on top of the OpenCL kernel (OpenCL was based loosely on the CUDA standards) and stop restricting it to Nvidia cards only.

Though I think they will try to stick to their guns. Having not bid strongly on the console market, about the only draw developers have to their architecture is the PhysX and CUDA APIs. There's been serious speculation that they've intentionally not optimized PhysX to run efficiently on CPUs, in order to push the GPU-computation side more strongly. Which is going to backfire when the logos that play at the launch of a game stop showing "Nvidia" and show AMD instead.

We'll just have to see how strong Nvidia's industry lobby is, or how rapidly they can shift their hardware over to address new paradigms.

Which is kind of sad - their API support is very good, and standards like OpenCL would see serious advancements if they were to refocus efforts on it.

Of course... the biggest drawback to OpenCL is that, since it can run on so many different platforms, it gives programmers a lot of rope to hang themselves with when using the term "cross-platform." You can program for a lot of different environments, and run the same code on a lot of different environments... but that doesn't mean it's going to run well in any of them (save the one you had in mind when programming it).

But I'm kind of rambling at this point.

#73 LordDante

    Member

  • IS Exemplar
  • 782 posts
  • Location: my Wang is aiming at ur rear... torso

Posted 26 July 2013 - 07:06 AM

Narcissistic Martyr, on 13 June 2013 - 10:37 AM, said:


A better comparison is a turbocharged, high-revving 4-banger against a big-block V8. The 4-banger has to rev to 9,000 RPM to put out the same amount of power the V8 does at 3,000 RPM.


My Saab 9-5 Aero with full Hirsch tuning just told me that u don't know jack!

#74 DarkBazerker

    Member

  • The God
  • 258 posts
  • Twitch: Link
  • Location: Waffle House

Posted 27 July 2013 - 05:26 PM

This CPU needs more work; it's not ready for show yet.

#75 Aim64C

    Member

  • 967 posts

Posted 27 July 2013 - 10:18 PM

DarkBazerker, on 27 July 2013 - 05:26 PM, said:

This CPU needs more work; it's not ready for show yet.


Not exactly.

Intel dominates the CPU market right now, and most programs are compiled to take advantage of Intel's CPU architecture.

Which is going to shred performance on an AMD CPU (just as compiling for AMD would see performance hits on Intel).

While I'm still learning all of the little details - processors use a combination of different techniques to schedule and execute instructions. CPUs are not purely serial execution engines, and haven't been for quite some time. Memory latency is hundreds or thousands of CPU clock cycles. To manage this, multiple instructions are loaded and dispatched across the available arithmetic logic units (ALUs) as efficiently as possible, so the CPU can keep doing useful work while it awaits data for a given set of instructions.

There are something like six general strategies for accomplishing this - and processors use different amounts of each toward the same goal. Some are very dependent upon the compiler to schedule instructions; others use complex on-board circuitry to decode parallel instructions; and others load several different instruction streams onto the same ALUs, so the system works like an inverted set of RAID-0 drives, with several threads' instructions executed across the same ALU (this is known as SMT, simultaneous multithreading - or "Hyper-Threading," as Intel brands its implementation).

Because of the differences and because of the reality of hardware patents - it's almost impossible for two companies to design hardware that really performs 'the same.'

Intel's Nehalem architecture is -heavy- on decoders and branch prediction and makes extremely heavy use of SMT. It's a front-end-heavy processor that performs extensive analysis on many of its segments of code.

At the processing end, Intel has inserted very long vector pipelines that can run loops and other repetitive operations very quickly.

AMD has gone a somewhat different direction, with a "more ALUs are better" philosophy and simpler decoding. This makes AMD's solution more compiler-dependent (ironically, the two companies have introduced architectures opposite to what their market shares would suggest: Intel could afford a compiler-dependent architecture, while AMD would benefit from a more compiler-independent one with more advanced decoding). AMD also takes a bit of a penalty on vector operations.
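The compiler-dependence point is easiest to see in loop shape. A hypothetical sketch (Python standing in for what a compiler sees; invented names): the first loop has no loop-carried dependency, so a compiler can vectorize it or spread it across ALUs; the second is an inherently serial chain that no scheduler can parallelize, because each step needs the previous result.

```python
# Sketch: two loops a compiler (or CPU scheduler) treats very differently.

def scale(xs, k):
    # Independent per element: iterations can run in any order,
    # so this vectorizes / parallelizes freely.
    return [x * k for x in xs]

def running_product(xs):
    # Loop-carried dependency: p at step i needs p from step i-1,
    # so this is inherently serial no matter how many ALUs exist.
    p = 1.0
    for x in xs:
        p *= x
    return p
```

Code dominated by the first shape rewards a wide, many-ALU design; code dominated by the second rewards fast single-thread execution - which is the trade-off the post is describing.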

That's not to say that the CPU is perfect. There are a lot of things that can be done to improve the architecture, and a lot of lessons to apply to architectures in the future.

The reality is, though, that the performance difference between AMD and Intel CPUs is as much to do with how something is programmed as it is 'how good' the CPU is. It also kind of depends upon what you're wanting to do with the CPU.

Take the NetBurst architecture, for example. It rocked at multimedia applications because that code responded very well to SIMD and SMT: you could make very efficient use of the core, with the overwhelming majority of cycles going toward productive calculations. When you got into code where branch prediction fell behind the curve and SMT wasn't as effective, you ran into huge problems - an AMD processor clocked at half the speed could perform better.

It wasn't necessarily a "bad" processor. It just wasn't as great of an idea if you were wanting to appeal to the crowds where game benchmarks mattered (it was a damned good CPU for putting into render farms and other such applications).

The same goes for AMD's current architecture. It's a sort of return to the Athlon 64 versus NetBurst era, really - except this rehash of NetBurst has a much better front end that avoids wasted cycles and is better at keeping the ALUs busy.

AMD is more dependent upon the compiler to aid in proper scheduling, and its architecture is inherently geared toward physical instruction parallelism... which doesn't translate as well to an environment with an Intel-dominated market and where most programs are still using very serially structured code (where parallelism has to be pulled out by the compiler and by the CPU scheduler).

The programming environment and the market environment are not really in favor of AMD's architecture... but I think the apparent performance gap between AMD and Intel (in this generation) will appear to shrink as parallelism becomes more integral to programming and compiling as a whole.

#76 F lan Ker

    Member

  • 827 posts
  • Location: Arctic Circle

Posted 30 July 2013 - 04:41 AM

S!

Just wanted to thank Aim64C for his informative and well-put replies. Even I could understand the stuff because no techno-jargon was used. I run a full AMD setup at the moment and I am happy with it. No problems, and it does what I want it to do: runs games in excess of 60 fps at 1920x1080. So I'm using an FX-8350, an HD 7970 GHz Edition, and a 990FX chipset with PCIe 3.0 support -_-

#77 Byzan

    Member

  • 111 posts

Posted 30 July 2013 - 07:07 PM

I think people haven't done their homework, or read the fine print, with regard to AMD CPUs.

They are not "true" or "pure" 8-core CPUs.

They consist of 4 "modules", and each module has 2 integer cores which share a single FP unit. That does mean the CPU has 8 threads, but it's also the reason they lag so far behind Intel on per-core (or per-thread) performance.

It's a more hardware-based threading setup than Intel's Hyper-Threading, but at the same time, saying they have "pure 8 cores" is not really true either.
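The module layout being described can be caricatured in code. A purely illustrative Python sketch (invented names; a lock stands in for the module's single shared floating-point unit): the two integer cores run freely, but their FP work has to queue behind one shared resource.

```python
# Illustrative only: two "integer cores" in one module queuing for a
# single shared FP unit, loosely mirroring the Bulldozer module layout.
import threading

fp_unit = threading.Lock()   # stands in for the shared FP hardware
fp_accum = 0.0

def core_work(iterations=1000):
    global fp_accum
    for _ in range(iterations):
        # integer-side work would proceed freely on each core ...
        with fp_unit:        # ... but FP work contends for the one unit
            fp_accum += 0.5

def module_run():
    global fp_accum
    fp_accum = 0.0
    cores = [threading.Thread(target=core_work) for _ in range(2)]
    for c in cores:
        c.start()
    for c in cores:
        c.join()
    return fp_accum          # 2 cores x 1000 iterations x 0.5
```

The results stay correct under contention; what the sharing costs is FP throughput, which is exactly the per-thread penalty described above.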

I'm not an Intel fanboi. I was really happy with the AMD 550 Black Edition I got a number of years ago, and at that time I hoped I had seen my last Intel CPU. But when it came time to upgrade I really had no option but to switch to Intel, and I honestly fear AMD may really struggle to make up the ground it's lost to Intel since Bulldozer came out.

I almost rushed to get a Bulldozer when they came out; glad I checked myself and looked around for benchmarks...

In the same way I ditched Intel for AMD when the P4 came out, I've had to ditch AMD for Intel with Bulldozer and now Piledriver.

It's like AMD did not take note of how Intel screwed up and lost ground with the P4; they have done something very similar in many ways with their current CPUs.

AMD now only has a small edge in applications that are heavily optimised for threading, but there is not enough of that about, and the edge is not big enough to matter much. And by the time it is, I reckon Intel will be well placed to bring out a CPU that beats AMD in that space anyway.

The only reason to buy AMD is the cheap-as-chips CPUs at the bottom end. Even then you need to consider power usage: depending on the CPU, what you pay for power, and how much you use it, an AMD CPU can add as much as $15-20 to a power bill compared to a comparable Intel one.

#78 Narcissistic Martyr

    Member

  • Veteran Founder
  • 4,242 posts
  • Location: Louisville, KY

Posted 31 July 2013 - 03:08 AM

LordDante, on 26 July 2013 - 07:06 AM, said:


My Saab 9-5 Aero with full Hirsch tuning just told me that u don't know jack!


My DSM (when it isn't breaking down) runs at 9k RPM with a smallish turbo to put out about 350 WHP. The 1969 AMX I built as a teen (which is ironically enough way more reliable than my DSM) puts out about the same power (with an extra 100 ft-lb of torque). All in all, I figure it's an adequately accurate metaphor.

Edited by Narcissistic Martyr, 31 July 2013 - 03:11 AM.


#79 Smokeyjedi

    Member

  • Liquid Metal
  • 1,040 posts
  • Location: Canada

Posted 31 July 2013 - 05:42 AM

Narcissistic Martyr, on 31 July 2013 - 03:08 AM, said:


My DSM (when it isn't breaking down) runs at 9k RPM with a smallish turbo to put out about 350 WHP. The 1969 AMX I built as a teen (which is ironically enough way more reliable than my DSM) puts out about the same power (with an extra 100 ft-lb of torque). All in all, I figure it's an adequately accurate metaphor.


Mmmmm, DSM. And yes, the upkeep isn't as sweet as that powerband is...




