I figure this should also be a decent guide for anyone looking for advice for a system build, and a general FAQ guide in the future. (Perhaps it could get stickied?)
Onward with the myths then?
Myth 1: AMD / ATI only, Intel / Nvidia only-
This is one of the myths I've heard most often: that Radeon GPUs are best with AMD motherboards, Nvidia GPUs are best with Intel boards, and that there are issues if you mix them. This is simply false. Both brands of GPU work just as well on either platform, regardless. End of story. Ask any professional system builder, manufacturer rep, or reviewer. You might be steered toward Intel for an Nvidia card since Intel currently has higher performance CPUs and PCI-E 3.0, but otherwise you won't see a difference.
Myth 2: PCI-E lanes-
I don't hear this much, but I still see people recommending AM3+ or 2011 motherboards based on PCI-E lane counts. Unless you're running RAID and / or SSDs off of your PCI-E lanes, you really don't need the extra bandwidth for your multi-GPU setup. Even for top of the line GPUs, PCI-E 2.0 X8 lanes are enough for Xfire/SLI with no real (read: less than 5%) drop in performance. That means PCI-E 1.0 X16, 2.0 X8, 2.0 X16, 3.0 X4, 3.0 X8, and 3.0 X16 lanes are all fine for a multi-GPU setup.
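If you want to see why those lane configurations are all roughly equivalent, here's a quick back-of-the-envelope sketch (the function name and table are my own; it assumes an ideal link with no overhead beyond the line encoding - Gen 1/2 use 8b/10b, Gen 3 uses 128b/130b):

```python
# Per-direction PCI-E bandwidth, ignoring protocol overhead beyond line encoding.
GEN = {
    "1.0": (2.5, 8 / 10),     # transfer rate in GT/s per lane, encoding efficiency
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def bandwidth_gbs(gen, lanes):
    """Usable bandwidth in GB/s: GT/s x efficiency gives Gbit/s per lane; /8 for bytes."""
    rate, eff = GEN[gen]
    return rate * eff * lanes / 8

for gen, lanes in [("1.0", 16), ("2.0", 8), ("2.0", 16), ("3.0", 4), ("3.0", 8)]:
    print(f"PCI-E {gen} x{lanes}: {bandwidth_gbs(gen, lanes):.1f} GB/s")
```

Notice that 1.0 X16, 2.0 X8, and 3.0 X4 all land around 4 GB/s per direction, which is why they behave the same for a GPU that can't saturate even that.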
Myth 3: Intel is better than AMD-
Well, if you're looking at pure performance ($1000 CPU: http://www.newegg.co...N82E16819116491 ), performance/watt (comparable AMD CPUs generally have ~30w higher TDP, and you're looking at at least 50w more after overclocking for equivalent per-core performance on the desktop; laptop-wise, with Trinity coming out they're pretty much equal here now, with Trinity having just slightly lower CPU performance versus an Ivy dual core + SMT), or a company profit standpoint, then yes.
If you are looking at all at price / performance, then no, not really. Many would argue that with Sandy Bridge (and more recently, Ivy Bridge) Intel is now better at price / performance; this is simply not true. For budget setups, AMD's Llano processors (and the upcoming Trinity) are better than what you can get from Intel at the same price, in that you're getting a much better integrated GPU. In mainstream gaming (sub-$250), AMD has a lead at the lower end: the FX-4100 at $109 has an unlocked multiplier, so you can easily overclock it. You do need a decent PSU for that, but if you're running budget and building your own this isn't an issue, given a good 550w PSU isn't hard to get for around $50, and the 4100 overclocks to around 3.8-4.0ghz safely on its stock cooler. At $140, the FX-4170 is basically a 4100 with a better cooler, and comes pre-overclocked. Then at $160, the FX-8120 overclocks well and has 8 threads. While most games don't gain any benefit from more cores on their own, if you don't close every background program when you game, you will lose some performance on just a quad core; more cores make multitasking faster and mean you won't take a hit while gaming. An i5-2500k overclocks rather well on a decent cooler (with a $30 Coolermaster Hyper 212 Plus you can safely reach about 4ghz), and you can get to about the same clocks with the FX-8120. The i5 will be about 20% faster, but you're also paying $50 more (30% more), and if you ever multitask while gaming you're going to be brought down to about the same speed, or less under heavy multitasking. Motherboard-wise, for an Intel equivalent to an AMD board you can generally expect to pay $10-20 more as well. The FX CPU will also be faster overall for more or less anything not gaming oriented. Price / performance still goes to AMD.
FX-4170 vs i3 (price - price) http://compare-proce...e-i3-2100/3247/ (AMD 25% faster, AMD CPU 15% more expensive [$20] vs Intel, same system cost)
FX-4170 vs i5 (quad - quad) http://compare-proce...-i5-2500k/3542/ (25% faster Intel, 50% more expensive Intel CPU [$70] vs AMD, $80-90 more expensive system Intel vs AMD CPU)
FX-8120 vs i5 (value - value) http://compare-proce...-i5-2500k/4373/ (AMD 5% faster performance vs Intel, Intel 15% faster gaming vs AMD, Intel 33% more expensive [$50] vs AMD, $60-70 more expensive system Intel vs AMD CPU.)
Price point guide here;
http://mwomercs.com/...asic-cpu-guide/
Myth 4: Aftermarket cooling is only for overclocking-
Even without overclocking, some stock heatsinks are simply inadequate for their processor. Other times, if you just want a quieter system, you may want to put down $20 for a much quieter cooler than anything you'll get stock. Your system runs cooler and quieter even if you don't overclock, which is always a good thing.
A good example I'd recommend if you're in the USA: http://www.newegg.co...N82E16835226050
Myth 5: Futureproofing-
Let me simply say, it can't be done. While you can build a system that will still play games in, say, 6 years, you really won't be able to push it past that without a GPU upgrade, given DirectX changes. Eight years is generally the maximum you can stretch a system even with a GPU upgrade, and even then you're going to hit a huge wall on framerates past 4-6 years, and some newer games you may not be able to run at all. The general recommended system replacement time is every 2-4 years. Past that, unless you spent thousands of dollars on your PC, budget retail (sub-$700) desktops are going to start equaling if not beating your PC (though your GPU may still be faster than what's in them, depending on how much you paid; it's still not a great place to be). At that point, for $700 you can upgrade your motherboard, CPU, RAM, and GPU and get a faster system, and as long as ATX remains the standard you should be able to reuse your case in most situations.
Myth 6: Arctic Silver 5 is the best thermal paste-
Once upon a time this was true. Back then, the two best thermal interface materials were Arctic Silver and liquid metal, and although both are electrically conductive, liquid metal is very expensive (often five times the price of AS5) and corrosive (it would eventually eat your CPU cooler), so AS5 became the standard. Now, however, there are new ceramic based thermal pastes which run cooler, cost much the same, lack the "set time" of AS5 (the time it takes for temperatures to hit their minimum after application), and best of all are not electrically conductive. The four current "best" thermal compounds (per most people's opinions; they perform within 1 degree Celsius of each other in most cases) are Arctic Cooling MX-4, Tuniq TX-2, Noctua NT-H1, and GELID GC-Extreme. Most of these can be had for the same price as AS5, which mostly sells on its name these days (and to people unwilling to try new things).
MX-4: http://www.newegg.co...N82E16835186038
TX-2: http://www.newegg.co...N82E16835154003
NT-H1: http://www.newegg.co...N82E16835608008
GC-Extreme: http://www.newegg.co...N82E16835426020
Myth 7: Whatever RAM will work just fine-
By this I mean people who think any random assortment of RAM will work just fine in their PC. First of all, RAM has been dual channel for years (Intel extreme and other server based setups may be tri/quad channel), so you should get two sticks of RAM (called DIMMs, but I'll say sticks for less technical readers) to take advantage of it; otherwise you're taking a rather large hit on bandwidth. Four sticks is somewhat harder on your RAM controller, and three sticks causes an imbalance, which is worse than four. Second, don't mix and match RAM amounts per stick; this causes balance issues for your RAM controller, which is even worse than three sticks. Mixing RAM speeds will also cause issues. (This is why I recommend AMD RAM, which is guaranteed to be the same in every single batch, plus the lifetime warranty and quality control. You pay a bit more, but it's guaranteed to always work and work well without issue, unless you're unlucky and get something DOA or damaged in shipping, which happens with any company; the former is less likely with AMD RAM, given it goes through both Patriot and AMD QC. http://www.newegg.co...deId=1&name=AMD )
Myth 8: Single rail vs Multi-rail PSU-
JonnyGuru did a great bit on this;
Quote
What is "multiple +12V rails", really?
In most cases, multiple +12V rails are actually just a single +12V source just split up into multiple +12V outputs each with a limited output capability.
There are a few units that actually have two +12V sources, but these are typically very high output power supplies. And in most cases these multiple +12V outputs are split up again to form a total of four, five or six +12V rails for even better safety. To be clear: These REAL multiple +12V rail units are very rare and are all 1000W+ units (Enermax Galaxy, Topower/Tagan "Dual Engine", Thermaltake Tough Power 1000W & 1200W, for example.)
In some cases, the two +12V rail outputs are actually combined to create one large +12V output (Ultra X3 1000W, PC Power & Cooling Turbo Cool 1000W, for example.)
So why do they split up +12V rails?
Safety. It's done for the same reason that there's more than one circuit breaker in your house's distribution panel. The goal is to limit the current through each wire to what that wire can carry without getting dangerously hot.
Short circuit protection only works if there's minimal to no resistance in the short (like two wires touching or a hot lead touching a ground like the chassis wall, etc.) If the short occurs on a PCB, in a motor, etc. the resistance in this circuit will typically NOT trip short circuit protection. What does happen is the short essentially creates a load. Without an OCP the load just increases and increases until the wire heats up and the insulation melts off and there's a molten pile of flaming plastic at the bottom of the chassis. This is why rails are split up and "capped off" in most power supplies; there is a safety concern.
Is it true that some PSU's that claim to be multiple +12V rails don't have the +12V rail split at all?
Yes, this is true. But it's the exception and not the norm. It's typically seen in Seasonic built units (like the Corsair HX and Antec True Power Trio.) It's actually cheaper to make a single +12V rail PSU because you forego all of the components used in splitting up and limiting each rail, and this may be one reason some OEM's will not split the rails, but say they are split. Some system builders adhere very closely to ATX12V specification for liability reasons, so a company that wants to get that business but also save money and reduce R&D costs will often "fib" and say the PSU has its +12V split when it does not.
Why don't those PSU companies get in trouble? Because Intel actually lifted the split +12V rail requirement from spec, but they didn't actually "announce" it. They just changed the verbiage from "required" to "recommended" leaving system builders a bit confused as to what the specification really is.
So does splitting the +12V rails provide "cleaner and more stable voltages" like I've been told in the past?
It is true that marketing folks have told us that multiple +12V rails provide "cleaner and more stable voltages", but this is usually a falsehood. Quite frankly, they use this explanation because "offers stability and cleaner power" sounds much more palatable than "won't necessarily catch fire". Like I said before, typically there is only one +12V source, and there is typically no additional filtering stage added when the rails are split off that makes the rails any more stable or cleaner than if they weren't split at all.
Why do some people FUD that single is better?
Because there are a few examples of companies that have produced power supplies with four +12V rails, something that in theory should provide MORE than ample power to a high end gaming rig, and screwed up. These PSU companies followed EPS12V specifications, which is for servers, not "gamers": they put ALL of the PCIe connectors on one of the +12V rails instead of a separate +12V rail. That +12V rail was easily overloaded and caused the PSU to shut down. Instead of correcting the problem, they just did away with the splitting of +12V rails altogether. Multiple +12V rail "enthusiast" PSU's today have a +12V rail just for PCIe connectors, or may even split four or six PCIe connectors up across two different +12V rails. The rails themselves are capable of far more power output than any PCIe graphics card would ever need. In fact, Nvidia SLI certification these days REQUIRES that the PCIe connectors be on their own +12V rail to avoid any problems from running high end graphics cards on split +12V rail PSU's.
There are fewer components and less engineering in a PSU that DOES NOT have the +12V rail split up, so it's cheaper to manufacture (about $1.50 less on the BOM, $2 to $3 at retail), and typically this cost savings is NOT handed down to the consumer, so it actually behooves marketing to convince you that you only need a single +12V rail.
But some people claim they can overclock better, etc. with a single +12V rail PSU
B.S. It's a placebo effect. The reality is that their previous PSU was defective or just wasn't as good as their current unit. If the old PSU was a cheap-o unit with four +12V rails and the new one is a PCP&C with one +12V rail, the new one isn't overclocking better because it's a single +12V rail unit. It's overclocking better because the old PSU was crap. It's only coincidental if the old PSU had multiple +12V rails and the current one has just one.
The only "problem" that occurs with multiple +12V rails is that when a +12V rail is overloaded (for example: more than 20A is being demanded from a rail set to only deliver up to 20A), the PSU shuts down. Since there are no "limits" on single +12V rail PSU's, you cannot overload the rails and cause them to shut down..... unless you're using a "too-small" PSU in the first place. Single +12V rails do not have better voltage regulation, do not have better ripple filtering, etc., unless the PSU is better to begin with.
So there are no disadvantages to using a PSU with multiple +12V rails?
No! I wouldn't say that at all. To illustrate potential problems, I'll use these two examples:
Example 1:
An FSP Epsilon 700W has ample power for any SLI rig out there, right? But the unit only comes with two PCIe connectors. The two PCIe connectors on the unit are each on their own +12V rail. Each of these rails provides up to 18A which is almost three times more than what a 6-pin PCIe power connector is designed to deliver! What if I want to run a pair of GTX cards? It would have been ideal if they could put two PCIe connectors on each of those rails instead of just one, but instead those with GTX SLI are forced to use Molex to PCIe adapters. Here comes the problem: When you use the Molex to PCIe adapters, you have now added the load from graphics cards onto the rail that's also supplying power to all of your hard drives, optical drives, fans, CCFL's, water pump.. you name it. Suddenly, during a game, the PC shuts down completely.
Solution: To my knowledge, there aren't one-to-two PCIe adapters. Ideally, you'd want to open that PSU up and solder down another pair of PCIe connectors to the rails the existing PCIe connectors are on, but alas... that is not practical. So even if your PSU has MORE than ample power for your next graphics cards upgrade, if it doesn't come with all of the appropriate connectors, it's time to buy another power supply.
Example 2:
Thermo-Electric Coolers (TEC's, aka "Peltiers") take a lot of power and are typically powered by Molex power connectors. I, for one, prefer to run TEC's on their own power supply. But that's not always an option. If you had a power supply with split +12V rails and powered your TEC's with Molexes, you would be putting your TEC's on the same +12V rail as the hard drives, optical drives, fans, CCFL's, water pump.. you name it, just as you did with the Molex to PCIe adapters. The power supply could, essentially, shut down on you in the middle of using it. A power supply with a single, non-split, +12V rail would not have any kind of limit as to how much power is delivered to any particular group of connectors, so one could essentially run several TEC's off of Molex power connectors and not experience any problems if one had a single +12V rail PSU.
Typical multiple +12V rail configurations:
- 2 x 12V rails
- Original ATX12V specification's division of +12V rails.
- One rail to the CPU, one rail to everything else.
- VERY old school since it's very likely that "everything else" may include a graphics card that requires a PCIe connector.
- Typically only seen on PSU's < 600W.
- 3 x 12V rails
- A "modified" ATX12V specification that takes into consideration PCIe power connectors.
- One rail to the CPU, one rail to everything else but the PCIe connectors and a third rail just for PCIe connectors.
- Works perfectly for SLI, but not good for PC's requiring four PCIe connectors.
- 4 x 12V rails (EPS12V style)
- Originally implemented in EPS12V specification
- Because typical application meant deployment in dual processor machine, two +12V rails went to CPU cores via the 8-pin CPU power connector.
- "Everything else" is typically split up between two other +12V rails. Sometimes 24-pin's two +12V would share with SATA and Molex would go on fourth rail.
- Not really good for high end SLI because a graphics card always has to share with something.
- Currently Nvidia will NOT SLI certify PSU's using this layout because they now require PCIe connectors to get their own rail.
- In the non-server, enthusiast/gaming market we don't see this anymore. The "mistake" of implementing this layout was only done initially by two or three PSU companies in PSU's between 600W and 850W and only for about a year's time.
- 4 x 12V rails (Most common arrangement for "enthusiast" PC)
- A "modified" ATX12V, very much like 3 x 12V rails except the two, four or even six PCIe power connectors are split up across the additional two +12V rails.
- If the PSU supports 8-pin PCIe or has three PCIe power connectors on each of the +12V rails, it's not uncommon for their +12V rail to support a good deal more than just 20A.
- This is most common in 700W to 1000W power supplies, although for 800W and up power supplies it's not unusual to see +12V ratings greater than 20A per rail.
- 5 x 12V rails
- This is very much what one could call an EPS12V/ATX12V hybrid.
- Dual processors still each get their own rail, but so do the PCIe power connectors.
- This can typically be found in 850W to 1000W power supplies.
- 6 x 12V rails
- This is the mack daddy because it satisfies EPS12V specifications AND four or six PCIe power connectors without having to exceed 20A on any +12V rail
- Two +12V rails are dedicated to CPU cores just like an EPS12V power supply.
- 24-pin's +12V, SATA, Molex, etc. are split up across two more +12V rails.
- PCIe power connectors are split up across the last two +12V rails.
- This is typically only seen in 1000W and up power supplies.
The bottom line is, for 99% of the folks out there single vs. multiple +12V rails is a NON ISSUE. It's something that has been hyped up by marketing folks on BOTH SIDES of the fence. Too often we see mis-prioritized requests for PSU advice: asking "what single +12V rail PSU should I get" when the person isn't even running SLI! Unless you're running a plethora of Peltiers in your machine, it should be a non-issue, assuming that the PSU has all of the connectors your machine requires and there is no need for "splitters" (see Example 1 in the previous bullet point).
The criteria for buying a PSU should be:
- Does the PSU provide enough power for my machine?
- Does the PSU have all of the connectors I require (6-pin for high end PCIe, two 6-pin, four 6-pin or even the newer 8-pin PCIe connector)?
- If using SLI or Crossfire, is the unit SLI or Crossfire certified (doesn't matter if a PSU is certified for one or the other as long as it has the correct connectors. If it passed certification for EITHER that means it's been real world tested with dual graphics cards in a worst case scenario).
- Is the PSU rated at continuous or peak?
- What temperature is the PSU rated at? Room (25° to 30°C) or actual operating temperature (40°C to 50°C)
- If room temperature, what's the derating curve? As a PSU runs hotter, its capability to put out power is diminished. If no de-rate can be found, assume that a PSU rated at room temperature may only be able to put out around 75% of its rated capability once installed in a PC.
- Does the unit have power factor correction?
- Is the unit efficient?
- Is the unit quiet?
- Is the unit modular?
- Am I paying extra for bling?
- Do I want bling?
In the end, the main advantage of a single-rail 12v PSU is that you can generally get a good PSU of the wattage you need/want (or better) for less cost than a multi-rail PSU of the same wattage. You also need to take more care when setting up a multi-GPU system with a multi-rail PSU.
Guide at price points here:
http://mwomercs.com/...r-supply-guide/
Myth 9: All SSDs are created equal-
This comes from people recommending more or less "whatever" SSD, so long as someone gets an SSD for a boot drive. While all SSDs are faster than a HDD, the controller, the NAND type, and the capacity generally determine how good an SSD is.
1: Controllers: There are four controllers making up the majority of the market at this time: Sandforce, Marvell, Indilinx, and Samsung's 830 controller. Each has its ups and downs.
A.) First, the Sandforce line. The fastest SSDs on the market, used by many companies. Their main weakness lies in reliability: they lose speed over time, and more commonly have stability issues (including blue screens of death) than other controllers, though some OEMs (such as Intel) include custom tweaks which improve reliability.
B.) The Marvell line. The second most common controller. Not as fast as the Sandforce SSDs, and they still lose speed over time. However, they are more stable and therefore have fewer issues.
C.) The Indilinx. Less common than Marvell; they don't lose speed over time, are stable, and have the fastest overall read times. Their main weakness is actually writing data: they are the slowest modern SSDs at writes.
D.) The Samsung 830. Used only by the Samsung 830 line of SSDs; they are the second fastest overall, don't lose speed over time, are stable, and all are thin enough to fit in an ultrabook/ultrathin/sleekbook. Their main catch is that they tend to be slightly more expensive than other SSDs of the same capacity.
2. MLC (multi-level cell) or SLC (single-level cell) NAND.
A.) MLC: cheap and common, generally found in a consumer level SSD.
B.) SLC: More expensive, but lasts ten times as long as MLC memory, and is much faster. However, it is prohibitively expensive in large volumes as would be needed for a storage drive for most consumers, as such you mostly find these on enterprise-level SSDs.
3: Asynchronous vs synchronous NAND: While asynchronous memory is cheaper, synchronous memory is 30-50% faster, lasts longer, and is more stable. Whenever you purchase an SSD, you should try to get synchronous memory.
4: Capacity and speed: they go hand-in-hand when it comes to SSDs. Larger capacity generally translates into faster write speeds.
5: TRIM support: while mostly standard nowadays, if you want your SSD to last, you're going to want this.
6: Manufacturer "extras": some manufacturers incorporate proprietary technology into their SSDs to improve reliability or speed. Kingston HyperX drives, the Patriot Pyro Wildfire, and Intel drives are good examples. Some brands have "tiers" based on which of their technologies are in an SSD, with asynchronous NAND at the bottom, synchronous NAND in higher end SSDs, and a "premium" level above that with higher reliability.
Myth 10: Nvidia always has better drivers than AMD/ATI-
While once upon a time this was true, the drivers have gotten considerably better since AMD purchased the Radeon brand. Nvidia still tends to have better release drivers for its graphics cards, but within a couple months of release AMD's drivers are perfectly fine if there were bugs (which doesn't happen very often anymore either), and in some cases there are issues on either side: the AMD Radeon HD 7xxx series image quality problems with their release drivers, for example, or the Nvidia GTX 590 release drivers, which caused some cards to overheat to death and some to even explode during operation (though the explosions are small and limited to transistors for the most part; no balls of flame, just pops and a dead card).
So things can have issues either way.
Myth 11: GPGPU is for enthusiasts only-
This is sadly a misconception based on the fact that many GPGPU apps are unknown to the general public. Folding, for example, can be done by anyone whose computer isn't in use all the time: just click a button and you're volunteering your computer to aid in research. It's simple, charitable, and generally easy; the thing is, people need to know about it, and because of the lack of public awareness, people see it as an elitist thing. Hardware acceleration is next in line, which is becoming more common, and with it everything can be sped up, from games to web browsing. Also, with modern architectures like GCN and CUDA, code in languages such as C++ can run directly on the GPU, enabling processing to be much faster than going through the CPU with x86.
Myth 12: It's hard to build a computer-
It's really not, all you have to do is have a Phillips screwdriver, a bit of time, and the following skills: Plugging a wire into a power socket, screwing a screw, picking up objects, and spreading butter. Because the majority of a computer build is just screwing in components, plugging in wires, and spreading the thermal paste on your CPU.
Beyond that it's pressing a button, inserting your operating system disk, and you're good to go.
Myth 13: A company ships a bad part, and therefore all parts from that manufacturer must be bad-
This doesn't just go for computer hardware: in general, every manufacturer is going to ship some bad parts. Whether they slip through QC, break during shipping, or get mishandled in retail, things happen. If you're able to have something replaced under warranty, do it. I suggest a three strike rule on any company before you decide they're sub-par, though some individual models do have issues, so it's best to read reviews beforehand on any parts you purchase. It's also important to remember that companies change over time, and so do their products. ASRock, for example, was once a budget brand for Asus, and now they're their own company with their own high end, quite reliable products.
Myth 14: A discrete sound card will make your PC faster-
This, while once true long ago, is no longer the case. Between how much faster modern CPUs are compared to back then, and the dedicated sound processors on the vast majority of modern motherboards, the only thing a discrete sound card is going to do is maybe get you better sound quality. And unless you're buying an audiophile or high end gaming discrete audio processing unit, you most likely won't be getting anything out of it.
Myth 15: My PC will need a huge power supply-
Let's be honest here, the first thing is you need to pay attention to the quality of your PSU. Secondly, what are you going to be putting in for your graphics card, and do you overclock your CPU?
1. CPU and overclocking. The most power hungry modern CPU you can put in your system, should you overclock, is the AMD FX-8150. Push it up to nearly 5ghz and you may be pulling over 300w at load. If you don't overclock, an Intel Extreme Edition i7 comes in at the top, with a TDP of 130w... technically. The AMD FX-8150 has been known to pull up to 150w at stock, and moderately overclocked to 4ghz will pull about 200w.
2. GPU. Well, the best thing to do is to account that the most power hungry single GPU modern card is probably the Nvidia GTX 580, at about 250w. The newer Radeon HD 7970 and Nvidia GTX 680 pull about 200w. So you really won't be eating a ton here.
3. The remainder of the system? Hard drives are the next most power hungry component in most systems, drawing 20 or so watts. Fans generally pull 15 or so, unless you grab a Delta, in which case you're pulling up to 24 watts at maximum (the limit for fans at 12v / 2a).
Your motherboard may also pull up to 50 watts.
So what does this mean? A good 550w PSU in general is able to power an average overclocked CPU, the highest end GPU, and the rest of the system without issue.
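To make that concrete, here's a rough budget using the worst-case numbers from the points above (the figures are the estimates from this post, not measurements, and the line items are just illustrative):

```python
# Back-of-the-envelope power budget for a high end single-GPU system.
budget_watts = {
    "CPU (FX-8150 overclocked to 4ghz)": 200,
    "GPU (GTX 580, worst case)":         250,
    "Motherboard":                        50,
    "Hard drive":                         20,
    "Fans":                               15,
}
total = sum(budget_watts.values())
print(f"Estimated load: {total}w")            # 535w
print(f"Headroom on a 550w PSU: {550 - total}w")
```

Even stacking worst cases, the total stays under 550w, which is why a quality 550w unit covers an overclocked CPU plus a top end single GPU.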
This PSU for example; http://www.newegg.co...N82E16817207013
The thing to look at is the 12v rail. In this case 44 amps are available, which works out to about 530 watts. That means for a system without SLI/Xfire, you don't need anything above a 650w PSU.
Just make sure you get a good PSU. There are some bad ones out there. The main thing is ask yourself, does this seem like the price is too good to be true? (and it's not a sale price but the normal price.)
For one, here is an example of a bad PSU: http://www.newegg.co...N82E16817170010
Note the 12v rail: it only has 25 amps available, or 300 watts. That doesn't come close to the XFX PSU, despite the two being listed at the same wattage. Most of its amperage is on the 5v rail, which is not used very heavily in modern PCs. Also note the warranty: one year versus five. That should say something in itself.
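The comparison between the two listings is just amps times volts (the function names are mine; the amperages come from the two Newegg pages linked above):

```python
# Judge a PSU by its +12v capability, not the number on the box.
def rail_12v_watts(amps):
    """Watts available on the +12v rail(s): volts x amps."""
    return 12 * amps

def honest_rating(label_watts, amps_12v, threshold=0.90):
    """Rule of thumb: the 12v rail(s) should cover ~90% of the advertised wattage."""
    return rail_12v_watts(amps_12v) >= threshold * label_watts

print(rail_12v_watts(44))          # XFX "550w":     528 watts on the 12v rail
print(rail_12v_watts(25))          # Logisys "550w": 300 watts on the 12v rail
print(honest_rating(550, 44))      # True
print(honest_rating(550, 25))      # False
```

Same label wattage, but one unit can actually deliver it where modern systems need it and the other can't.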
Now, Logisys isn't a bad company - they make fine fans, heatsinks, and cases (or rather redistribute the Deepcool brand in the USA).
However, don't get this power supply.
The general rule of thumb I hold is to get one of the following power supplies (or one made by a partner company, such as Mushkin [Topower], XFX [Seasonic], Kingwin [Superflower], etc.), listed in order of how good they are. I really would not recommend getting one with internals not from this list at this time, at least unless you find astonishingly good reviews of a particular PSU's internals:
1. Seasonic
2. Enermax
3. Superflower
4. FSP
5. Topower
You should also get one with at least an 80+ rating and active PFC; these are baseline requirements for a good PSU nowadays. The 12v rail(s) should also cover at least 90% of the advertised wattage.
Myth 16: RAM misconceptions-
1. I hear a lot of people asking if they need a ton of RAM. Simple answer: unless you're running CAD or a server, no. At the moment, the most RAM that can be recommended for a gaming PC is 8GB; any more is simply overkill. Servers use more RAM, and CAD will eat whatever you throw at it, but games will use at most 2GB unless you remove the buffer limit, and even then it's quite hard to eat 4GB as things stand at this time.
2. RAM speeds: unless you're running an AMD APU, there are no real gains above 1600mhz unless you're an extreme overclocker trying to top a benchmark score. In gaming, there's less than a 1% gain between 1600mhz and 1866mhz RAM on a normal CPU.
3. Number of DIMMs: ideally, you should have only as many DIMMs as your controller has RAM channels; in most systems this means two sticks of RAM, unless you're running LGA 1366 or 2011 or another server based board (which are triple and quad channel respectively, so three or four DIMMs).
Myth 17: Nvidia vs AMD-
Simply put, it depends on your price point. I've done a guide here; http://mwomercs.com/...eral-gpu-guide/
Edited by Vulpesveritas, 16 June 2012 - 10:24 PM.