AMD Confirms Radeon HD 8000 Delay
#1
Posted 09 February 2013 - 11:02 AM
http://www.tomshardw...8000,20979.html
"the December 2012 reveal of desktop and mobile HD 8000 series GPUs is simply an OEM rebrand of its existing lineup and a true successor to the HD 7000 will likely only see release during Q4 2013."
#2
Posted 09 February 2013 - 02:35 PM
Edited by Catamount, 09 February 2013 - 02:39 PM.
#3
Posted 09 February 2013 - 02:48 PM
A couple of years ago I should have spent that extra buck, and I'd still have top-quality stuff to this day, which is unusual for me. I suppose I'll wait until graphical requirements outpace this rig's capabilities, as usual. By then computers will be steadily declining in price as pads and phones become the popular choice for most.
Or I'll make enough money to not give a hoot.
Edited by M4NTiC0R3X, 09 February 2013 - 02:50 PM.
#4
Posted 09 February 2013 - 03:32 PM
M4NTiC0R3X, on 09 February 2013 - 02:48 PM, said:
A couple of years ago I should have spent that extra buck, and I'd still have top-quality stuff to this day, which is unusual for me. I suppose I'll wait until graphical requirements outpace this rig's capabilities, as usual. By then computers will be steadily declining in price as pads and phones become the popular choice for most.
Or I'll make enough money to not give a hoot.
It's called "mature technology"
Back in World War I, internal combustion engines were advancing as fast as computers were in the 90s. Aircraft engines were literally doubling their output just about every year. Then it just stopped, because the technology reached a point of maturity, which basically means all the low-hanging fruit for improvement had been picked.
We always knew the same thing would happen with computers. We'd hit barriers in the easy means of improvement, Moore's Law would die, and that would be that, and in about 2009 that's exactly what happened (mind you, things tapered off; we didn't just hit a brick wall). We hit the temperature wall long before then, but compensated until far more fundamental problems hit us in the face.
Part of the reason this happened so drastically with GPUs is that a very expensive 32nm fabrication process was invested in, then abandoned halfway through, so 40nm stuck around for a while, but really, what we've hit is a TDP barrier. Basically, miniaturization is still continuing at almost the same rate it has since, say, 1995, but chips used two methods to improve performance: get smaller and increase TDP. Now we've hit a barrier there, especially for GPUs.
Just look at AMD's TDPs for the last few generations of performance doubling:
3870: 106W
4870: 150W
5870: 228W
See? Each generation of card, after AMD's takeover of Ati, went up about 50%. This is how they doubled performance every generation. They would get some of that improvement out of better tech: a better architecture, a smaller fabrication process, better instructions, and so on, and then they'd just make a bigger, more power-hungry GPU for the rest, and so it would be double the speed of the last. Then the 6970 came along, and its TDP was 250W, only a 10% increase. The 7970 came along and also had a 250W TDP, so no increase at all. AMD stopped brute-forcing more performance by just making a more power-hungry card, and the performance increases suddenly nosedived.
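To spell that arithmetic out, here's a quick back-of-the-envelope sketch in Python using the TDP figures quoted above (exact board-power numbers vary a little from source to source, so treat this as illustration):

# Rough generation-over-generation TDP growth for AMD's single-GPU flagships,
# using the board-power figures quoted above.
tdps = {"HD 3870": 106, "HD 4870": 150, "HD 5870": 228, "HD 6970": 250, "HD 7970": 250}

cards = list(tdps.items())
for (prev_name, prev_w), (name, w) in zip(cards, cards[1:]):
    growth = (w / prev_w - 1) * 100
    print(f"{prev_name} -> {name}: {prev_w}W -> {w}W ({growth:+.0f}% TDP)")

# Output:
# HD 3870 -> HD 4870: 106W -> 150W (+42% TDP)
# HD 4870 -> HD 5870: 150W -> 228W (+52% TDP)
# HD 5870 -> HD 6970: 228W -> 250W (+10% TDP)
# HD 6970 -> HD 7970: 250W -> 250W (+0% TDP)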
The reason for this is because it was found that 250W-300W represents the rough mechanical limit of dissipation for a dual-slot card cooler (and 300W is pushing it; things get louder and engineering headroom gets smaller). Even looking back at the Ati monstrosity that signaled the end of the company, the 2900XT, you get the same thing: 215W, right in the higher part of that range of what a dual-slot cooler can do. Even dual-GPU cards like the 4870X2 and 5970 aren't much above that 250 in their real-world dissipation, and their absolute max TDP is below the 300 mark, because much above that, and either the card gets hot, the cooler gets loud, and/or the cooler design gets very expensive. Sapphire Vapor-X coolers seem to be able to exceed this limit somewhat, as they can sustain absurdly high overclocks and overvolts while keeping cards cool compared to reference coolers, but look at the cost! A Vapor-X 7970 is almost $100 more expensive than the cheapest reference 7970.
Even if everyone adopted such a design, and reference cards were suddenly using the best cooler mankind has engineered to date for a GPU, that would net us what, maybe one more doubling of performance?
So now performance has become far more incremental, because we can only decrease fabrication size, and that's going to hit a limit too. Intel thinks they can push it below the 10nm mark, but frankly, I think it's a pipe dream, because we're already seeing problems, even if they're problems the companies are a little reticent to discuss.
For instance, Nvidia just imposed rules that effectively voltage-lock all cards. Board partners can overvolt, but Nvidia will no longer warrant cards that have been overvolted, or even cards with unlocked voltages that users can overvolt. I suspect AMD will not be far behind. The reason is that the smaller the process, the quicker you hit quantum tunneling and electromigration, and these problems get worse and worse with higher voltage (so basically, the smaller the process, the more voltage-sensitive the cards become). I expect we'll very quickly hit a barrier where even the smaller voltages we consider "stock" these days will produce such big problems in this area that we'll need to reduce voltage on every successive chip just for it to run reliably, and then you're reducing performance, so you're defeating the point of a smaller process in the first place. That's speculation of course, but it's what I think will happen, and if it does, what little bit is left of Moore's Law will finally gasp for its last breath and die, and the transformation of computing into a mature technology will be complete (at least until quantum computers come along).
Edited by Catamount, 09 February 2013 - 03:37 PM.
#5
Posted 09 February 2013 - 03:40 PM
But in the end, we are limited by what physics allows.
#6
Posted 09 February 2013 - 03:46 PM
#7
Posted 09 February 2013 - 06:03 PM
#8
Posted 09 February 2013 - 06:57 PM
Sennin, on 09 February 2013 - 06:03 PM, said:
Two 6950s are a little more powerful, in terms of the framerate you get, than any single GPU out today (but not by much). That doesn't mean it's a bad tradeoff, though: dropping down to a slightly slower single-GPU setup will often feel more playable than a faster dual-GPU setup, because inter-card latency between two GPUs (what we call "microstutter") makes games less responsive on two cards than on one.
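To put toy numbers on that (the frame times below are invented purely to show the arithmetic, not measured from any real card):

# Toy illustration of why a higher average FPS can feel worse with two GPUs.
# Alternate-frame rendering tends to deliver frames in uneven pairs; these
# frame times are made up just to show the math.
single_gpu = [20.0] * 8        # 20 ms every frame, evenly paced
dual_gpu = [10.0, 30.0] * 4    # alternating 10 ms / 30 ms, same 20 ms average

def avg_fps(frame_times_ms):
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def effective_fps(frame_times_ms):
    # The long frames are what you actually perceive as smoothness.
    return 1000.0 / max(frame_times_ms)

for name, times in (("single GPU", single_gpu), ("dual GPU (AFR)", dual_gpu)):
    print(f"{name}: avg {avg_fps(times):.0f} FPS, effectively {effective_fps(times):.0f} FPS")

# single GPU: avg 50 FPS, effectively 50 FPS
# dual GPU (AFR): avg 50 FPS, effectively 33 FPS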
I went through this myself a bit ago, switching from two 5770s to one overclocked 5850 (this was before I got my 7970). The 5850 was vastly superior.
I'd still hesitate to replace two 6950s, because it would be only a slight upgrade (it's more like a sidegrade losing a few FPS, but not having to deal with microstutter), but since one is beginning to fail anyways...
The 670 and 7970 are two great options. The 680 is too expensive to be worth considering (as much as or more than a 7970 GHz Edition, but slower).
The 7970 and 7970 GHz Edition are both faster than a 670, but only modestly (~13%), and the 670 can be a good deal. A 7970 (non-GHz Edition) is a bit faster than a 670, for about $15 more, so call it a wash between them for value. The cheapest 670 I can find is here, but this is a good overclocked GTX 670, for only about $15 more than the cheapest normal 670 (it's 10% faster, for ~5% more money). I got one of these myself; I picked the 7970 for the slightly better performance and so I could get Crysis 3 and Bioshock Infinite for free. You can get cheaper 7970s, but the Sapphire Dual-X OC models overclock absurdly well, up to 1300MHz core, so that's my kind of thing, plus it's faster out of the box.
You'll just have to decide what kind of setup is right for you. Another option is the 660 Ti. It's 30% cheaper, but about 20% slower, so it's a better value, but it isn't as powerful, and you want something that's not going to be a downgrade from your present system.
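Roughly, the value math works out like this; the ~$400 GTX 670 price is just an assumed placeholder to turn those percentages and the "$15 more" into ratios, so treat it as illustration rather than a price quote:

# Rough performance-per-dollar comparison using the approximate figures above.
# Performance is normalized to a plain GTX 670 = 1.00. The $400 base price is
# an assumption purely for illustration; real street prices vary by retailer.
BASE_PRICE = 400.0  # assumed GTX 670 street price

cards = {
    "GTX 670": {"perf": 1.00, "price": BASE_PRICE},
    "HD 7970 (non-GHz)": {"perf": 1.07, "price": BASE_PRICE + 15},  # "a bit faster" (assumed ~7%), ~$15 more
    "GTX 660 Ti": {"perf": 0.80, "price": BASE_PRICE * 0.70},       # ~20% slower, ~30% cheaper
}

for name, c in cards.items():
    value = c["perf"] / (c["price"] / 100.0)  # relative performance per $100
    print(f"{name}: perf x{c['perf']:.2f} at ${c['price']:.0f} -> {value:.2f} perf per $100")

# The 670 and 7970 come out within a few percent of each other (the "wash"),
# while the 660 Ti comes out ahead on value but behind on raw power.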
The key point is that anything you buy should be good for a long time. Even a 5870, a >3 year old card, is a fine GPU these days. My spare 5850 is even pretty decent (runs Skyrim on Ultra, runs MWO on medium/high, etc.), and advancement is moving slower than ever, so 2-3 years from now, a 7970 or 670 should still be pretty good. We're not even going to get the next measurable upgrades until late next year.
Also note that if you have the budget, the GeForce Titan will be worth something like two 680s in power, in a single GPU, but it's also slated to be $1200 so far, so it's outrageously expensive and an inferior value ($900 would make it a much better value, but that price is just rumored). Still, if you want the best performance and don't mind a big investment, it'll be insanely powerful.
Edited by Catamount, 09 February 2013 - 07:03 PM.
#9
Posted 09 February 2013 - 07:05 PM
Sennin, on 09 February 2013 - 06:03 PM, said:
If I may, before you chuck the card, there is one piece of equipment that I always recommend people have for times like this.
http://www.newegg.co...h=1&srchInDesc=
It is a power supply tester, and they are the most inexpensive piece of insurance you can own. I'm not saying that your PS is bad; you actually have a pretty good one. While yours is a single-rail design, individual components and connectors can become problems just as easily. Even a little bit of corrosion on a connection can drive your ohms (resistance) way out of tolerance. A bad solder or crimp joint can as well. Both can seriously damage your components.
I've chased enough "ghosts" in the PC over the years that now I always start with the PS whenever an issue creeps up, then the RAM. Most PC components fail due to out-of-tolerance voltages being supplied, or heat issues. So let's say you replace that GPU: if the PS were a bit quirky, the new one could run well for a while but then also fail once the power-handling circuits eat themselves up.
Either way, the tester can save you big bucks now or in the future.
Edited by Bad Karma 308, 09 February 2013 - 07:12 PM.
#10
Posted 09 February 2013 - 07:37 PM
#11
Posted 09 February 2013 - 09:54 PM
Catamount, on 09 February 2013 - 07:37 PM, said:
I do, my #2 GPU. I run MSI Afterburner to monitor temps, GPU usage, etc. After playing MWO or MWLL for an hour or two, I recently started getting major artifacts, flashing items, and Atari-looking colors. I checked MSI Afterburner and my #2 card was maxed out on load and heat was at 72+ Celsius, but the #1 card was just fine. I thought I might just need to give the #2 card a rest, but the next time I loaded the game it did the same thing almost immediately. So I turned off Crossfire and just ran the #1 card, and I have not had the same issue yet. I don't know what I can do to actually test either card. I'm not a computer tech; I'm just the average joe that knows the basics of how to assemble a PC and band-aid small problems. Any suggestions on where to start troubleshooting the GPU itself would be a great help.
@ Bad Karma, thanks for the suggestion, I will have to look into purchasing a tester before I go buying a new GPU. Which one do you recommend? For my level of troubleshooting I think one with simple Pass/Fail style of readout with some basic info would be better.
[Edit: My awful spelling.]
Edited by Sennin, 09 February 2013 - 09:54 PM.
#12
Posted 10 February 2013 - 12:30 AM
Sennin, on 09 February 2013 - 09:54 PM, said:
Which one do you recommend? For my level of troubleshooting I think one with simple Pass/Fail style of readout with some basic info would be better.
From that list I'd look at any from Rexus; I've also been very happy with Apevia. However, right now I have pro versions of the "Thermaltake" unit (similar to what Newegg offers) that I keep on my workbenches. Rosewill (in general) is a Newegg re-branding of items from many different vendors, so since I don't know who the real manufacturer is, I tend to steer away from them, but I've also never had any issue with them either, so take that advice with a grain of salt.
However, all of them will let you know what "is" or "isn't" in a pass/fail manner. But what you really want to look for during your testing is fluctuations. Remember that the power coming off of a PS is proportional to what is coming from your outlet, just converted to direct current (DC) by the PS. So if the power your local power company is providing oscillates, you'll likely see that in the converted DC during testing as your PS attempts to compensate. So your test numbers will move around a bit, and that is OK; moving wildly is not. But as I mentioned, the tester will let you know what is acceptable. I just don't want you to freak out when or if you see it happening on the test.
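If you want a rough feel for what "acceptable" usually means, the commonly cited ATX tolerances are about ±5% on the positive rails and ±10% on -12V; here's a little sketch of that check (the sample readings are made up):

# Sanity-check PSU rail readings against the commonly cited ATX tolerances
# (roughly +/-5% on the positive rails, +/-10% on -12V).
# The sample readings below are hypothetical, for illustration only.
ATX_RAILS = {"+3.3V": (3.3, 0.05), "+5V": (5.0, 0.05), "+12V": (12.0, 0.05), "-12V": (-12.0, 0.10)}

readings = {"+3.3V": 3.28, "+5V": 5.11, "+12V": 11.38, "-12V": -12.20}  # hypothetical tester readings

for rail, measured in readings.items():
    nominal, tol = ATX_RAILS[rail]
    low, high = nominal * (1 - tol), nominal * (1 + tol)
    if nominal < 0:
        low, high = high, low  # bounds flip for a negative rail
    status = "OK" if low <= measured <= high else "OUT OF TOLERANCE"
    print(f"{rail}: {measured:.2f}V (allowed {low:.2f} to {high:.2f}) -> {status}")

# In this made-up example the +12V rail reads 11.38V, outside the ~11.40-12.60V window,
# which is exactly the kind of result that points at the PS rather than the GPU.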
If you need help just let me know, I'll be here.
Also, if I may step in on your reply/question to Catamount, the easiest way to test the card is to throw it in a different machine, by itself, and run it hard for about 30-45 minutes to see how it responds.
#13
Posted 10 February 2013 - 12:58 PM
Catamount, on 09 February 2013 - 03:32 PM, said:
It's called "mature technology"
Back in World War I, internal combustion engines were advancing as fast as computers were in the 90s. Aircraft engines were literally doubling their output just about every year. Then it just stopped, because the technology reached a point of maturity, which basically means all the low-hanging fruit for improvement had been picked.
Not sure I agree. I get 197 hp out of a 2.0 litre 4 cylinder. It does not consume more gas.
Modern GPUs may require more power because the motherboard can provide that power now (where it couldn't before).
That's an improvement.
There isn't a huge gap between each new generation, but when you compare across the whole spectrum, there's a huge improvement going forward. Quad core is mandatory. People aren't willing to pay the $$ for the top-end, power-sucking technology, and that's the direction the market is going.
The tech improvements on PC are not great because the market is stale. But look at the advancements on cell phones and tablets. The iPhone 5 has a triple-core graphics card or something similar, yet PC graphics are still running 1 core.
Motherboard tech needs to improve as tablets start to catch up.
The original P4 wasn't thought to be a significant improvement on the Pentium 3.
5 years later, the first dual cores weren't thought to be a significant improvement on the Pentium 4 with Hyper-Threading.
My first quad-core Phenom 2.2 gave me worse frame rates than many of the Core 2 Duos on the market.
Now I am running an 8-core FX 8350. Technology is always moving forward. Software has yet to catch up.
AMD is waiting for the next-gen consoles, because that's where the gaming industry is going to head. There's no point in pleasing the last generation of Xbox 360-compatible games; everything will be making a shift to DX11 in the next couple of years.
The quickest sports car on the market may not change tech as far as the engine mechanics go, but it's the low-end cars that have made vast improvements.
Edited by Badconduct, 10 February 2013 - 01:03 PM.
#14
Posted 10 February 2013 - 04:22 PM
Badconduct, on 10 February 2013 - 12:58 PM, said:
Not sure I agree. I get 197 hp out of a 2.0 litre 4 cylinder. It does not consume more gas.
Is that twice as good as engines were doing only a year or two before your car was manufactured? If not, then you're still talking about very mature technology compared to when these engines were introduced. Being mature doesn't mean a technology never advances; it means that the low-hanging fruit for improvement has long-since been picked, so advancement slows down. Sure, engines have made some strides in the last 20 years, but those same strides were happening every couple of years during WWI, in aircraft.
The same is true of computing. Hardware isn't stagnating because the market is stagnant; we have ample applications for vastly more powerful GPUs than we have now, thanks to high resolutions and multi-monitor setups. If there wasn't a need for more powerful GPUs than were on the market, AMD and Nvidia never would have pursued a technology so fraught with problems as Crossfire and SLI.
Yet, despite this fact, we've seen cards hit barriers and slow down. If it were just a matter of there being a market for them, why aren't cards passing the 300W mark? We already know people want more powerful GPUs, because Nvidia is making one (Titan), so why didn't we make more powerful GPUs a year or two ago, by just making 400W cards? The answer is that you can't; it's not practical. CPUs haven't stagnated quite as much, but they're still not exactly rocketing along either, and we've had to change how we improve them because brick walls have sprung up in the way of previous methods (namely, never-ending clock speed increases).
Mobile technology is the one sector advancing quickly, but that has nothing to do with the market. Every computing market needs more power. It's just that small mobile ARM chips and extremely low-powered x86 chips are both relatively new technologies that are poorly optimized and simply have more obvious room for improvement.
The very fact that chips moved to multicore designs in the first place shows that we're hitting brick walls. Multicore processors are a very non-ideal way to increase performance, because even with a lot of software being reasonably threaded, there are still limits to improvement, as per Amdahl's Law. It's a means of advancement that will quickly run into barriers, but we went that way because single-core chips had already hit the aforementioned barrier, which Intel demonstrated amply with the whole NetBurst debacle.
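If anyone wants to see how quickly the parallel route runs out of steam, here's a minimal sketch of Amdahl's Law in Python, assuming a workload that's 90% parallelizable (which is already generous for most desktop software):

# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the number of cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a workload that is 90% parallel tops out at a 10x speedup,
# no matter how many cores you throw at it.
for n in (2, 4, 8, 16, 64, 1_000_000):
    print(f"{n:>9} cores: {amdahl_speedup(0.9, n):5.2f}x speedup")

# Output:
#         2 cores:  1.82x speedup
#         4 cores:  3.08x speedup
#         8 cores:  4.71x speedup
#        16 cores:  6.40x speedup
#        64 cores:  8.77x speedup
#   1000000 cores: 10.00x speedup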
Pretty soon, the same thing is going to happen on the mobile market.
We're hitting limits to miniaturization, we're hitting TDP limits, we're hitting parallelization limits, and no matter how much anyone wants to pretend that computer hardware is just going to advance at a rapid pace forever and ever and ever, the simple fact is that the death of Moore's Law is already behind us. Now, for better or worse, we'll have a slow tapering in the rate of advancement to look forward to over the next decade, or rather I should say a continuation of the slow tapering that began four years ago.
Edited by Catamount, 10 February 2013 - 04:26 PM.
#15
Posted 10 February 2013 - 04:29 PM
AMD Plans To Stick With The HD 7000 Series For The Bulk of 2013
"The Radeon 7000 series will be sticking around a bit longer than we expected.
The GPU nuclear arms race between AMD and Nvidia over the last several years has been amazing for consumers, however the R&D costs associated with this competition must have been astronomical. Both companies have been trading blows at different price points for the last few generations, and AMD is finally throwing up the white flag. According to AMD Product Manager Devon Nekechuk, the company will be sticking with its HD 7000 series for the bulk of 2013, and will use promos and software bundles to remain competitive against the green team."
Whole Article:
http://www.maximumpc...lk_2013#slide-0
Edited by Bad Karma 308, 10 February 2013 - 04:31 PM.
#16
Posted 10 February 2013 - 09:28 PM
Catamount, on 10 February 2013 - 04:22 PM, said:
Is that twice as good as engines were doing only a year or two before your car was manufactured? If not, then you're still talking about very mature technology compared to when these engines were introduced. Being mature doesn't mean a technology never advances; it means that the low-hanging fruit for improvement has long-since been picked, so advancement slows down. Sure, engines have made some strides in the last 20 years, but those same strides were happening every couple of years during WWI, in aircraft.
I still disagree completely with you. You have an old-man perspective.
"Back in my day, computers advanced twice as fast!"
Even if the engine itself does not advance much, the automobile continues to advance at a very rapid pace. Consider, for a moment, the number of hybrid cars on the road. Or the (horrible, horrible) Volt.
The Porsche 918 would not have been possible in the 1940s. The computer technology was nonexistent. It probably wasn't possible in the early 2000s either.
Hardware is stagnant because the economy is stagnant, the US economy specifically. Americans are out buying guns and calling their government Nazis, camping out in the city streets and generally acting irrational. They bring this "doomsday" world view with them everywhere, about how time is ending, economies are crashing and technology is slowing.
AMD (for example) isn't willing to take a risk on advancing technology when people are searching for budget computer systems. They can't afford to buy a $300, or even a $200, graphics card, so there isn't much point in building a better one.
Technology always moves forward at exactly the same pace. It never stops. Someone is always doing something to advance technology.
You have a very pessimistic world view. This is a corporate redevelopment strategy that has more to do with not wasting money on a dying market. Consumers do not upgrade their video card fast enough to justify a new card every 8 months, especially with new consoles around the corner.
Quote
Enter cloud computing. If they can't make a faster processor, they'll just put the workload offsite. Do you think the Earth is just going to come to a standstill and we'll start to decline?
Just because the GPU in its current form isn't progressing quickly does not mean that a giant technology leap is not on the way. The GPU is stuck because of the motherboard design.
The only company that can take the risk and integrate the GPU onto the motherboard is AMD. Nvidia doesn't make processors, and Intel can't make a real GPU yet. The A10 processors are getting there, and that's where things are heading. At $2.59 a share, I am going to invest in AMD this month. If AMD changes their game plan away from PCI-E cards, Nvidia is going to be in trouble.
This has nothing to do with the advancement of technology. Technology is moving along just fine.
This has everything to do with the slow demise of the home PC. As soon as someone comes up with a better design, this style of PC tower will go in the CRT monitor piles.
Edited by Badconduct, 10 February 2013 - 09:43 PM.
#17
Posted 11 February 2013 - 03:58 AM
Sennin, on 09 February 2013 - 09:54 PM, said:
Try running each card by itself. If one card works great but the other gives problems, you have a bad card.
I upgraded from 5870 Crossfire to a single 7970. A single 5870 is about equal to a single 6950 (less than 5% performance difference). 6950 Crossfire is a little better than 5870 Crossfire because of better Crossfire scaling (6950 Crossfire is around 15% better than 5870 Crossfire). In games that support Crossfire, my single 7970 basically matches the performance I was getting with the 5870 Crossfire setup.
Edited by Barbaric Soul, 11 February 2013 - 04:02 AM.
#18
Posted 11 February 2013 - 06:45 AM
Badconduct, on 10 February 2013 - 09:28 PM, said:
I still disagree completely with you. You have an old-man perspective.
"Back in my day, computers advanced twice as fast!"
in more ways than you could possibly know
I think the last time I was having a discussion with someone your age, I was trying to explain who MacGyver was and convey the concept of a 2D scrolling video game. The poor kid just didn't get either one.
What it has to do with this conversation is absolutely, positively nothing.
Quote
So let me get this straight: Humans are on the cusp of cracking long-standing environmental and economic problems with practical nuclear fusion, revolutionizing medicine with stem cell and genetic research, unlocking practical interplanetary travel and cheap LEO access, discovering orders of magnitude more about our own origins than ever thought possible with molecular genetics, and beginning the first experiments with space-time warping in labs (yes, that might mean FTL travel), to name an infinitesimal fraction of what's going on in science and technology at the moment, but the world is a sad, pessimistic place because I think your Playstation isn't going to continue getting much faster in a few years?
Humanity has a boundless future ahead of it...
... but suddenly it's all for naught, because your cell phone might hit a limitation on the size of textures it can render at 30fps in a game?
One of us definitely has a pessimistic world view here, that's for sure, but it's not me.
Everyone who's taken up the study of computers at some point in the last 30 years has known that a hard barrier in the speed of microchips was going to hit sooner or later, and probably sooner. Intel has been saying for years that their chips will hit a hard cap on improvement around 2020 (more recently, they've suggested they might be able to push that a little further... a little). Even Gordon Moore himself has long said that his law wouldn't hold forever.
That doesn't mean technology stops; I've never said technology would stop. I've said that certain technologies would mature. Do humans still sit around preoccupied with making mass advancements to the wheel, or finding fundamental ways to redesign the mousetrap every year? Do we make constant quantum leaps in the technology of firearms (for the record, autopistols have changed little since the Colt 1911 of 102 years ago), or obsess over constant improvements to the railroad?
Of course not. Every once in a while, a small change will creep in here or there (which, cumulatively, can even amount to significant changes, sometimes), but for the most part, we've moved on to other technologies, to more interesting pursuits. If people thought like you propose, we wouldn't have microchips at all today in their present form, because most of the scientists who went into that new frontier decades ago would have sat around, instead, working on never-ending and increasingly difficult improvements to existing technology, rather than branching out to new technology.
Again, it seems to me that it's you who has the pessimistic world view.
Quote
You see? Congratulations, you've just made my point for me. Old areas of technology stagnate to a creep of incremental improvement, so we go off and find new technologies to explore.
It would seem you get this concept after all.
Quote
And this just sounds like nonsense altogether. Motherboards have more bandwidth than any GPU in existence, or even projected for the foreseeable future, can ever use, with the majority of the power requirement now circumventing the board altogether, and being easily satisfied directly by the PSU.
There is demand for more powerful GPUs (to the point that monstrosities like the 690 and Titan are perfectly sensible business propositions), and ample backbone in systems for them. The only thing holding back GPUs are GPUs.
Quote
And this is absolute nonsense. AMD just tweaked the 7970 into the more expensive, faster 7970 GE, and Nvidia is preparing to unleash a $1200 GPU on the market. Why would these two long-standing, profitable companies be doing that if there wasn't a market? If that's all that's holding GPUs back, then why isn't Nvidia releasing a GPU as powerful as the Titan for $400, instead of $1200? Why, if nothing is holding back the tech, is the tech going nowhere in a market that's clearly willing to put down big bucks for it?
The market absolutely craves more power; not all of it does, but a big enough portion of it to be worth massive technological investment by both companies, yet we're not seeing the advancement the way we used to.
Even looking further ahead, as we speak, both companies are hard at work on a massive new release, late next year, with new architectures and smaller processes. I wouldn't be surprised if we saw better than 50% improvement at the same price point with these new cards, maybe even considerably better. It's just taking vastly longer than the past progressions to achieve. It'll be almost three years from the HD 7000 and GeForce 600 series to their real replacements (small, incremental interim cards are planned in the meantime, but they aren't slated to do much), whereas the Radeon HD 4000 -> 5000 progression took only a year, a year to double performance.
The progression after the Q4 2013 series will take longer still, because that's the trend we're seeing. Well, the trend most of us are seeing, anyway.
It doesn't mean technology will cease advancing, as I suspect a total maturity of technology should be some thousands of years away from now. It just means the microchip has matured, and instead of obsessing over trying to combat that, our scientists and engineers will start looking to other, more interesting, more useful frontiers.
Bring them on, I say.
If you want to continue obsessing over uninteresting things like Moore's Law, then go ahead, but you're welcome to join the rest of us any time.
Edited by Catamount, 11 February 2013 - 07:52 AM.
#19
Posted 11 February 2013 - 07:37 AM
Badconduct, on 10 February 2013 - 09:28 PM, said:
What, pray tell, could this possibly have to do with anything?
People buy guns for practical reasons, and because they're fun to shoot, and because they enjoy hunting, and because they like to collect and tinker with them, not because they think the state of computing is going to usher in the zombie apocalypse
and they take to the streets because they're angry about the environment, or because they oppose abortion, or because wages are too low, or because we're engaged in a war somewhere, not because they're angry that Moore's Law is coming to an end.
Your opinions of firearm ownership and political activism have absolutely nothing, as far as I can tell, to do with the state of technology.
#20
Posted 11 February 2013 - 05:12 PM
While I'm sad we'll have to wait for the next round of cards, I'm also pleased I'm going to get at least another 12 months out of my 7970.
Edited by Az0r, 11 February 2013 - 05:14 PM.