
Nvidia or AMD


137 replies to this topic

Poll: Nvidia or AMD (174 member(s) have cast votes)

So what do you prefer, based on price and performance?

  1. Nvidia (96 votes [53.33%])

  2. AMD (ATI) (84 votes [46.67%])


#21 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 10 January 2012 - 02:32 PM

Which company offers more performance for a given price is something quantifiable, but it differs greatly by the application being used (gaming vs GPGPU), the price range, and which series we're comparing (GeForce 8 series vs Radeon HD 2000 series is a very different matchup than Radeon HD 4000 series vs GeForce 200 series).


At the immediate moment, most of Nvidia's lineup is beaten by AMD's in gaming, and the three cards in Nvidia's lineup that are still viable are merely competitive; so insofar as gaming is concerned, Nvidia ranges from about as good to vastly worse, depending on the card we're talking about.


In the sub-$200 range, Nvidia has nothing worth purchasing. The GeForce 550 Ti is nearly identical to the Radeon HD 5770/6770 in performance and costs more money, and the 560 (non-Ti) does no better against the 6870, being just about identical in performance for $20-$25 more.


On the really high end, the 580 didn't make sense even before the 7970 beat it handily on both performance and value: even then, it was only 10%-15% faster than the 570 and 6970 for a 40% higher price, which made it a significantly worse performer for the price. You'll rarely have a difference that small make or break a game, or even be noticeable; we're talking about getting 44-46fps instead of 40 with the cheaper two. If the 580 made little sense before, it makes no sense today, with the 7970's performance advantage over the 580 being much larger than its price premium.
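To put numbers on the value argument, here's a quick back-of-the-envelope Python sketch (the $350 base price is a hypothetical placeholder; only the ~40% price gap and the rough framerates come from the paragraph above):

    # Rough fps-per-dollar comparison. The $350 base price is an assumed
    # placeholder; only the ~40% price gap is taken from the post.
    gtx570 = {"fps": 40, "price": 350.0}
    gtx580 = {"fps": 45, "price": 350.0 * 1.4}  # ~12% faster, 40% pricier

    for name, card in (("GTX 570", gtx570), ("GTX 580", gtx580)):
        print(name, round(card["fps"] / card["price"], 3), "fps per dollar")

    # GTX 570 0.114 fps per dollar
    # GTX 580 0.092 fps per dollar  -> roughly 20% worse value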


So that leaves the 560 Ti, 560 Ti Core 448, and 570, and the 560 Ti is usually only faster than the 6950 in cases where it doesn't matter (where both are overkill), while it tends to lose in the really demanding games where the difference is meaningful, so it's questionable whether the 560 Ti is even worthwhile. That makes for a total Nvidia lineup of two viable cards, or very questionably three, while AMD has equal cards to those and better cards for the rest of the market.


In the mobile sector, it gets even worse for Nvidia. There are rare exceptions like the mobile 6970, where AMD doesn't beat Nvidia, but in general, nearly every one of AMD's discrete GPUs doesn't just have a lower TDP than Nvidia's counterpart, it has a much lower TDP. That matters somewhat less on desktops, but it's a crucial factor in laptops, because higher TDP means lower battery life, more heat (which hurts both component longevity, already a weak point in laptops, and ergonomics), and a heavier machine, since it requires beefier cooling.

AMD cards also handle their TDP better because of PowerTune, something Nvidia has absolutely no equivalent to. PowerTune differs from other methods of power regulation because, rather than running the card at full bore until it gets too hot and then grossly underclocking it, PowerTune makes many small, scalable adjustments based on actual power draw. If your power draw goes 10% over what's healthy on an Nvidia card, the card will wait until the GPU hits some unreasonable temperature, then slash your clocks by some absurd amount in a panic to cool the card down (and let's not even get into Nvidia's power circuitry, which was FRYING on the first batch of 590s in reviews). If the same thing happens on an AMD card, PowerTune will dynamically underclock it by exactly 10%, for only as long as is needed, and you likely won't even notice. Again, this matters most in the mobile market... though it might have saved reviewers a few busted 590s had Nvidia implemented something similar :rolleyes: It doesn't help that Nvidia doesn't bat an eye at violating their own stated TDP, as well as the PCIe spec, with their beefiest cards. AMD's cards are incapable of that, because PowerTune locks the card to a given TDP.
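If it helps to picture the difference, here's a crude Python sketch of the two throttling philosophies (purely illustrative, not either vendor's actual algorithm, and every number in it is made up):

    TDP_LIMIT_W = 250.0  # assumed board power budget

    def panic_throttle(clock_mhz, temp_c, temp_trip_c=97):
        """Run flat out, then slash clocks hard once a temperature trips."""
        return clock_mhz * 0.5 if temp_c >= temp_trip_c else clock_mhz

    def powertune_style(clock_mhz, power_draw_w):
        """Trim clocks by exactly the overshoot, only while it persists."""
        if power_draw_w <= TDP_LIMIT_W:
            return clock_mhz
        return clock_mhz * (TDP_LIMIT_W / power_draw_w)

    print(panic_throttle(772, 98))      # 386.0 MHz - a huge, visible drop
    print(powertune_style(880, 275.0))  # 800.0 MHz - just enough to fit the budget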



As if Nvidia weren't losing badly enough in the mobile sector to AMD's discrete cards, AMD also has Llano to plaster them with. Llano has all the advantages of a discrete card and none of the disadvantages. It's actually faster than most low-to-mid discrete GPUs, but it doesn't consume absurd amounts of power to operate, so it keeps battery life high (without the need for a convoluted, half-functional switchable-GPU system).



So right now, insofar as gaming is concerned, Nvidia has only a couple of cards that are even competitive, none of them mobile, has no cards that are better than their AMD equivalents, and the vast majority of their lineup is inferior in performance per price.

I'm sure at some point, Nvidia will have AMD in a similar situation, as it goes back and forth, but that day is not today :lol:


Also note that none of this covers GPGPU, where Nvidia handily beats AMD, or at least did until the 7970 was released. I won't comment on that, however, until it plays out more, since the 7000 series is very new and we haven't seen what Kepler will do when it's released... someday.

#22 T0RC4ED

    Member

  • Liquid Metal
  • 312 posts

Posted 11 January 2012 - 07:57 AM

DV^McKenna, on 10 January 2012 - 12:06 PM, said:

Do you do rendering or programming to make use of that 16GB of RAM? Seems an obscene amount if it's just for gaming.

It's not really that far out there... winblows (depending on what version you run) can take up to 2 gigs of RAM right out of the box... on its own. I strip mine down to look like Win 2K, but I like to push a machine to the very edge and then some.

#23 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 11 January 2012 - 08:04 AM

T0RC4ED, on 11 January 2012 - 07:57 AM, said:

It's not really that far out there... winblows (depending on what version you run) can take up to 2 gigs of RAM right out of the box... on its own. I strip mine down to look like Win 2K, but I like to push a machine to the very edge and then some.


Windows Vista sometimes cached memory for applications, and Windows 7 does it too (though it seems to do so less aggressively), but that's memory that's still usable by any process that needs it, and the OS itself won't pass 1GB. 32-bit applications can't use more than 2GB of RAM in Windows by default (and it would take a horribly coded game to need more anyways), and most steer quite shy of that 2GB, so barring an absurd number of background programs, 4GB is still more than anyone needs, at least for gaming.
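The address-space math behind that, as a trivial Python aside (the 2GB/2GB split and the LAA behavior are the well-known Windows defaults, not something I've measured):

    # A 32-bit pointer can address 4GB; Windows reserves the upper half
    # for the kernel by default, leaving a 32-bit process 2GB for itself.
    GB = 2**30
    default_user = 2**31 / GB   # 2.0 GB without the Large Address Aware flag
    laa_on_x64 = 2**32 / GB     # 4.0 GB for an LAA process on a 64-bit OS

    print(default_user, "GB by default,", laa_on_x64, "GB with LAA on x64")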

Edited by Catamount, 11 January 2012 - 08:05 AM.


#24 Thorqemada

    Member

  • 6,396 posts

Posted 11 January 2012 - 08:26 AM

Afaik you can have 32-bit software use 4GB of RAM if the "Large Address Aware" flag is set (Skyrim was recently patched for it) and the OS is either Vista x64 or Win7 x64.
Also, the caching in Windows works quite well; for example, going from 2GB to 8GB sped up my overall AoC gameplay experience significantly (not FPS, but smoother and faster loading (ok, some FPS when data was already cached in the abundant RAM instead of loaded from the HD)), even though AoC doesn't use the LAA flag.
RAM is dirt cheap, and if you have a 64-bit OS you should go straight to a kit of 2x4GB sticks (in the case of a dual-channel layout).
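Since the LAA flag is literally just one bit in the executable's header, you can check any game yourself. A minimal Python sketch (assuming a well-formed PE file; the Skyrim filename is just an example):

    import struct

    IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # documented PE characteristics bit

    def is_large_address_aware(exe_path):
        """Read the LAA bit from a PE executable's COFF file header."""
        with open(exe_path, "rb") as f:
            header = f.read(4096)
        # Offset 0x3C of the DOS header holds the offset of the "PE\0\0" signature.
        pe_offset = struct.unpack_from("<I", header, 0x3C)[0]
        if header[pe_offset:pe_offset + 4] != b"PE\x00\x00":
            raise ValueError("not a PE executable")
        # Characteristics is the last 2-byte field of the 20-byte COFF header,
        # which starts right after the 4-byte signature.
        flags = struct.unpack_from("<H", header, pe_offset + 22)[0]
        return bool(flags & IMAGE_FILE_LARGE_ADDRESS_AWARE)

    print(is_large_address_aware("TESV.exe"))  # should print True post-patch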

#25 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 11 January 2012 - 08:36 AM

Thorqemada, on 11 January 2012 - 08:26 AM, said:

Afaik you can have 32-bit software use 4GB of RAM if the "Large Address Aware" flag is set (Skyrim was recently patched for it) and the OS is either Vista x64 or Win7 x64.
Also, the caching in Windows works quite well; for example, going from 2GB to 8GB sped up my overall AoC gameplay experience significantly (not FPS, but smoother and faster loading (ok, some FPS when data was already cached in the abundant RAM instead of loaded from the HD)), even though AoC doesn't use the LAA flag.
RAM is dirt cheap, and if you have a 64-bit OS you should go straight to a kit of 2x4GB sticks (in the case of a dual-channel layout).


2GB to anything is an improvement :P

I actually noticed a substantial improvement in AoC myself upgrading from 2GB (to 4GB in my case), so the game can use a fair bit of memory.

Would 4GB to 8GB be? Probably not for the immediate moment, but as you point out, RAM is dirt cheap, at least up to a point. 8GB kits cost barely anything. Even if just for future-proofing, 8GB is probably what I'd recommend for anyone building a PC today.


I also wasn't aware that games were bothering with that at this point. I know a couple of games (RTSs) that violated the 2GB limit, necessitating that the user override it, but I didn't know any games were shipping with it set themselves. I'm not surprised Skyrim is capable of breaking that limit, to be honest. On rare occasions BF3 gets a hair too close for my comfort (1.7-1.8GB), though it's never crashed.

Edited by Catamount, 11 January 2012 - 08:37 AM.


#26 Thorqemada

    Member

  • 6,396 posts

Posted 11 January 2012 - 08:46 AM

Right, too few games make use of it, though if I remember correctly, Crysis also had the LAA flag set (aside from the 64-bit version).
Does that mean that MWO, based on CryEngine 3, will make use of it?

Edited by Thorqemada, 11 January 2012 - 08:53 AM.


#27 Prince Ian Davion

    Member

  • 58 posts
  • Location: Tasmania, Australia

Posted 11 January 2012 - 10:57 AM

DV^McKenna, on 10 January 2012 - 12:06 PM, said:

Do you do rendering or programming to make use of that 16GB of RAM? Seems an obscene amount if it's just for gaming.



Actually... neither. When I went into the local computer shop for the RAM, I originally planned for 8GB of 2000? MHz (can't remember the exact speed now). However, someone had ordered the 16GB, then cancelled the order when it was in the shop, so I got it VERY cheap.

#28 Nicook5

    Rookie

  • 1 post

Posted 11 January 2012 - 12:28 PM

I had to go with AMD, even though I have a GTX 460 at the moment :P

#29 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 11 January 2012 - 02:11 PM

Thorqemada, on 11 January 2012 - 08:46 AM, said:

Right, too few games make use of it, though if I remember correctly, Crysis also had the LAA flag set (aside from the 64-bit version).
Does that mean that MWO, based on CryEngine 3, will make use of it?


At this point, I don't see why they don't just use 64-bit executables. There's no reason to use a 32-bit OS anymore, and there hasn't really been since the early post-XP days, when there was a year or two of lackluster driver support for some hardware. In short, anyone not running a 64-bit OS probably can't run MWO anyways.

As for whether MWO will require more than 2GB of RAM in any case, I'm guessing the answer is probably no. At this point, PC hardware is still bound in many ways to console hardware, because most big releases are cross-platform, which puts limits on the average PC gamer's hardware. That's why requirements haven't climbed very fast, and of course, that means PC hardware capabilities haven't either.

I'm still using the same GPUs (two 5770s) that I did two years ago, something I never would have done previously (two years was my GPU replacement interval through the prior three video cards), and not only do these GPUs still work, they still work well :P CPU requirements have grown even more slowly.


In short, MWO may be PC-only, but there are still ways in which it is indirectly bound to console hardware, since they still have to make a game runnable on the majority of passable gaming PCs, and the hardware in those is linked very closely to console power, because cross-platform releases get their requirements set by the slowest platform. That's a big part of why, even to this day, Crysis 1 is probably among the half dozen most intensive games, and it's five years old! (the other reason was just terribad coding :P)

RAM usage is not left untouched by this simple reality, because even if game requirements can be scaled up on PC with adjustable options, that only affects RAM usage so much, and the basic engines for most games still have to be coded with the realities of 256MB/512MB systems in mind, so until we see a quantum leap in console tech in a year or two (or three?), things will remain relatively stagnant on PCs.

Edited by Catamount, 11 January 2012 - 02:14 PM.


#30 Ceefood

    Member

  • Legendary Founder
  • 118 posts
  • Location: Bathurst, NSW, Australia

Posted 11 January 2012 - 03:27 PM

Why still have 32-bit? Because a lot of PCs still use it - my partner's is only 2 years old & runs a 32-bit system.
I do agree that 64-bit needs to be included.

#31 Thorqemada

    Member

  • 6,396 posts

Posted 11 January 2012 - 03:39 PM

From what I've read about 32-bit vs 64-bit, I still have in mind that 64-bit should be a tad faster because it can fully utilize the 64-bit CPU architecture, which has more and bigger registers that speed up code execution if done right. Correct me if I'm wrong.

Well, MWO must still be a 32-bit compatible game because of the still-large number of XP and 32-bit Vista/Win7 machines, right?

But I've used 64-bit since Vista was released (which made me aware that I like a blue background more than a green one), and I don't understand why a 32-bit OS is still sold to people - they (MS) should really make the final transition!

Edited by Thorqemada, 11 January 2012 - 03:45 PM.


#32 Vulpesveritas

    Member

  • 3,003 posts
  • Location: Wisconsin, USA

Posted 11 January 2012 - 05:03 PM

Well, Windows 8 will have to include 32-bit support, as it is supporting ARM, which until the next generation is still a 32-bit architecture. Hopefully with 9 they'll be 64-bit only.

#33 Rabbit Blacksun

    Member

  • The Privateer
  • 664 posts
  • Location: Around the world ...

Posted 12 January 2012 - 02:33 AM

I run an ATI Radeon HD 5870, so I don't run very hard, but then again I stripped Windows to the bare minimum, and the only real issue I have run into is that some games seem to want only Nvidia... which wasn't a problem either, since you can run PhysX on ATI as well (it just crashes the game sometimes), but in the end I am still an NVIDIA fan, since that's typically what ASUS runs.
Though I am definitely turning into a fan of ATI.

#34 Mezzanine

    Member

  • 106 posts
  • Location: Minneapolis, MN

Posted 12 January 2012 - 06:08 AM

I just built a new PC last month, and for me it just came down to dollars. This was my first build, so I didn't have a preference in terms of brand loyalty, but honestly the GPU competition is way closer than the CPU competition (that's another thread, though).

The Nvidia 560ti and AMD 6950 were both comparable cards at the ~$200-$250 price point, but I went with a 6950 because I got a better deal on it at my hardware store. There are some brand new cards coming out over the next month or two, so if you're looking to upgrade just for MWO I'd strongly recommend waiting until late Spring or early Summer before you go shopping. GPU prices ought to drop, and odds are you'll get more performance/$$ in a few months.

It's important to remember that whatever GPU you go with, it absolutely doesn't have to be "top of the line" to run all of the latest games. You could probably get by just fine with a ~$100-150 GPU that would deliver decent performance for another year or two. If you have the extra dollars and you want something that could (theoretically) last you into the next generation of consoles, then it might be worth looking into a higher-end card.

#35 Drachenwolf

    Member

  • 23 posts
  • Location: Knoxville, TN

Posted 14 January 2012 - 09:40 PM

While AMD and NVIDIA are consistently revising their GPU architectures, for the most part the changes they make are just that: revisions. It's only once in a great while that a GPU architecture is thrown out entirely, which makes the arrival of a new architecture a monumental occasion in the GPU industry. The last time we saw this happen was in 2006/2007, when unified shaders and DirectX 10 led to AMD and NVIDIA developing brand new architectures for their GPUs. Since then there have been some important revisions such as AMD's VLIW4 architecture and NVIDIA's Fermi architecture, but so far nothing has quite compared to 2006/2007, until now.

At AMD’s Fusion Developer Summit 2011 AMD announced Graphics Core Next, their next-generation GPU architecture. GCN would be AMD’s Fermi moment, where AMD got serious about GPU computing and finally built an architecture that would serve as both a graphics workhorse and a computing workhorse. With the ever increasing costs of high-end GPU development it’s not enough to merely develop graphics GPUs, GPU developers must expand into GPU computing in order to capture the market share they need to live well into the future.

At the same time, by canceling their 32nm process, TSMC directed a lot of hype about future GPU development onto the 28nm process, where the next generation of GPUs would be developed. In an industry accustomed to rapid change and even more rapid improvement, never before have GPU developers and their buyers had to wait a full 2 years for a new fabrication process to come online.

All of this has led to a perfect storm of anticipation for what has become the Radeon HD 7970: not only is it the first video card based on a 28nm GPU, but it's the first member of the Southern Islands family and, by extension, the first video card to implement GCN. As a result the Radeon HD 7970 has a tough job to fill: as a gaming card it not only needs to deliver the next-generation performance gamers expect, but as the first GCN part it needs to prove that AMD's GCN architecture is going to make them a competitor in the GPU computing space. Can the 7970 do all of these things and live up to the anticipation? Let's find out.

http://www.anandtech...lery/Album/1599

The Radeon HD 7970 is a card of many firsts. It's the first video card using a 28nm GPU. It's the first card supporting Direct3D 11.1. It's the first member of AMD's new Southern Islands family. And it's the first video card implementing AMD's Graphics Core Next architecture. All of these attributes combine to make the 7970 quite a different video card from any AMD video card before it.

Cutting right to the chase, the 7970 will serve as AMD's flagship video card for the Southern Islands family. Based on a complete AMD Tahiti GPU, it has 2048 stream processors organized according to AMD's new SIMD-based GCN architecture. With so many stream processors coupled with a 384-bit GDDR5 memory bus, it's no surprise that Tahiti has the highest transistor count of any GPU yet: 4.31B transistors. Fabricated on TSMC's new 28nm High-K process, this gives it a die size of 365mm2, making it only slightly smaller than AMD's 40nm Cayman GPU at 389mm2.

Looking at specifications specific to the 7970, AMD will be clocking it at 925MHz, giving it 3.79 TFLOPs of theoretical computing performance, compared to 2.7 TFLOPs under the much different VLIW4 architecture of the 6970. Meanwhile the wider 384-bit GDDR5 memory bus of the 7970 will be clocked at 1.375GHz (5.5GHz effective data rate), giving it 264GB/sec of memory bandwidth, a significant jump over the 176GB/sec of the 6970.
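Those two headline figures follow directly from the specs, incidentally; here's the arithmetic as a small Python snippet, purely as a sanity check:

    # Sanity-checking the quoted 7970 numbers from the specs above.
    stream_processors = 2048
    core_clock_ghz = 0.925
    bus_width_bits = 384
    data_rate_gtps = 5.5  # 1.375GHz GDDR5, quad-pumped

    # Each stream processor can issue a fused multiply-add (2 FLOPs) per clock.
    tflops = stream_processors * 2 * core_clock_ghz / 1000
    bandwidth_gb_s = bus_width_bits / 8 * data_rate_gtps

    print(round(tflops, 2), "TFLOPs")       # 3.79 TFLOPs
    print(round(bandwidth_gb_s), "GB/sec")  # 264 GB/sec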

These functional units are joined by a number of other elements, including 8 ROP partitions that can process 32 ROPs per clock, 128 texture units divided up among 32 Compute Units (CUs), and a fixed-function pipeline that contains a pair of AMD's 9th-generation geometry engines. Of course all of this hardware would normally take quite a bit of power to run, but thankfully power usage is kept in check by the advancements offered by TSMC's 28nm process. AMD hasn't provided us with an official typical board power, but we estimate it's around 220W, with an absolute 250W PowerTune limit. Meanwhile idle power usage is looking particularly good, as thanks to AMD's further work on power savings their typical power consumption at idle is only 15W. And with AMD's new ZeroCore Power technology (more on that in a bit), idle power usage drops to an absolutely minuscule 3W.

Overall for those of you looking for a quick summary of performance, the 7970 is quite powerful, but it may not be as powerful as you were expecting. Depending on the game being tested it’s anywhere between 5% and 35% faster than NVIDIA’s GeForce GTX 580, averaging 15% to 25% depending on the specific resolution in use. Furthermore thanks to TSMC’s 28nm process power usage is upwards of 50W lower than the GTX 580, but it’s still higher than the 6970 it replaces. As far as performance jumps go from new fabrication processes, this isn’t as big a leap as we’ve seen in the past.

In a significant departure from the launches of the Radeon HD 5870 and 4870, AMD will not be pricing the 7970 nearly as aggressively as those cards. The MSRP for the 7970 will be $550, a premium price befitting a premium card, but a price based almost exclusively on the competition (e.g. the GTX 580) rather than one that takes advantage of cheaper manufacturing costs to aggressively undercut the competition. In time AMD will need to bring down the price of the card, but for the time being they will be charging a price premium reflecting the card's status as the single-GPU king.

For those of you trying to decide whether to get a 7970, you will have some time to decide. This is a soft launch; AMD will not make the 7970 available until January 9th (the day before the Consumer Electronics Show), nearly 3 weeks from now. We don't have any idea what the launch quantities will be like, but from what we hear TSMC's 28nm process has finally reached reasonable yields, so AMD should be in a better position than at the 5870 launch. The price premium on the card will also help taper demand somewhat, though even at $550 this won't rule out the first batch of cards selling out.

Beyond January 9th, AMD has an entire family of Southern Islands video cards still to launch. AMD will reveal more about those in due time, but as with the Evergreen and Northern Islands families AMD has a plan to introduce a number of video cards over the next year. So the 7970 is just the beginning.

NVIDIA lol eat your heart out!!!

#36 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 14 January 2012 - 10:02 PM

Yeah, Nvidia's consumer GPU line is not doing well right now.


They've still got the market almost to themselves in GPGPU, though the 7970 is AMD's first crack at seriously competing with them there, and it's a pretty darned good first attempt.


With Nvidia barely hanging on for the moment with their consumer GPUs, having only two, maybe three cards that are actually worth it for gamers and HTPC users to buy, and AMD threatening their once-safe niche in GPGPU, Nvidia really has to get their act together.

#37 Fyrwulf

    Member

  • The Sureshot
  • 262 posts

Posted 16 January 2012 - 12:31 PM

The graphics card in my first computer was a Rage TNT2. After that was no longer adequate, I bought a Radeon (don't remember which model, although I think it was a 9100) and haven't looked back since. Presently I have an X800 PRO, but I'm going to upgrade to an HD3450 for this game.

#38 Catamount

    Member

  • LIEUTENANT, JUNIOR GRADE
  • 3,305 posts
  • Location: Boone, NC

Posted 16 January 2012 - 12:43 PM

Fyrwulf, on 16 January 2012 - 12:31 PM, said:

The graphics card in my first computer was a Rage TNT2. After that was no longer adequate, I bought a Radeon (don't remember which model, although I think it was a 9100) and haven't looked back since. Presently I have an X800 PRO, but I'm going to upgrade to an HD3450 for this game.


That's kind of a curious upgrade choice (and one I wouldn't bet on being able to play MWO with).

I'm curious, are you limited to the AGP bus or something?


If you gave your total system specs I bet we could come up with something more appropriate for a video card that would still fit into whatever budget you have.

#39 Thorqemada

    Member

  • 6,396 posts

Posted 16 January 2012 - 12:51 PM

I guess it's the less expensive 64-bit, 256MB DDR2, passively cooled version?

Hard to say; I would expect the frames per minute to be in the double-digit range...

:)

#40 Gremlich Johns

    Member

  • The 1 Percent
  • 3,855 posts
  • Location: Maryland, USA

Posted 16 January 2012 - 02:31 PM

I use a single overclocked EVGA GTX 460 (Nvidia) with the Arctic Cooling Accelero Xtreme Plus cooler. The thing idles at 25 deg C and hits 42 under load. I am very happy with the card for now.

I currently have 8GB of system RAM, but I do graphics work (photography) (and perhaps some video in the future). When I bump up to an AMD FX-4100 3.6 Black (AM3+) with an AM3+ mobo, I'll install the 16GB of RAM I purchased from Newegg on sale for $80.

Edited by Gremlich Johns, 16 January 2012 - 02:31 PM.





