How about bam bam bam? Let me know if you wanna go to war, I am waiting.
It can be carried to 'war'. Will serve a good purpose. (Swing swing swing -- hit enemy on head -- swing swing swing)
Sure, right after the Pentagon borrows it for some time; they wanna run their coffee machine.
I like that SMPS. Please sell it to me for Rs. 50 after you use it for a day or two. Let me run my rig on it too.
Yeah, ever thought about Microfusion Cells?? (The only power source in Fallout 3.)
Wow! That was like one big green monster, or should we just call it the Incredible Hulk?
And the PSU and cabby pics you posted were also nice.
BTW, why use atomic power plants? Just use water to generate the required power that those power-drawers need... go green... save the world.
Legion Hardware has reported that the ATI Radeon HD 5970 experienced certain overclocking "issues" during extended FurMark benchmarking.
According to Steve Walton, the tests were executed at 1680x1050 with an "increased run time," as both GPUs were clocked at 875MHz. GDDR5 memory remained at default specification levels, while the core voltage was raised to 1.1625v.
"With the core at 875MHz, the average frame rate was sitting at a fairly constant 115fps. However after just 40 seconds temperatures exceeded 100 degrees and shortly after this the Radeon HD 5970 overclocking problem presented itself," wrote Walton.
"In order to avoid cooking itself, the Radeon HD 5970 throttles both cores down, negating any positive performance impact the overclock is going to have. However, rather than throttle the cores down just a little, or even to the default operating specification, it drops the frequency to just 550MHz, that's a 24 percent under clock on each core."
Walton explained that the clock decrease dropped the average frame rate from 115fps to "well under" 100fps.
"Eventually the average was reduced to 90fps, which is a 22 percent drop in performance."
However, AMD's Dave Baumann told TG Daily that the FurMark results indicated the 5970 was functioning as per its design specifications.
"The Legion FurMark results tell me that the 5970 is doing exactly what it was designed to do. There are protection measures in place that kick in when thermal or power levels exceed maximum permitted levels, so the card was taking the correct actions to protect both itself and the motherboard," said Baumann.
"Now, I would also like to note that this particular benchmark (FurMark) was coded primarily as a stress test. It is certainly not elective in how it utilizes the GPU, as the application basically lights up the entire card to create or emulate maximum power draws and scenarios. No real world app or game does that for an extended period of time. FurMark was lighting up everything, with average power draws exceeding the requirements of any standard app by 20-40 percent.
"As I explained earlier, the 5970 has critical mechanisms in place to protect a system's hardware. So, what would generally happen in an extreme extended overclocking scenario? Yes, the clock cycle decreases, but returns to overclocking speeds as soon as thermal temperatures stabilize.
"I wouldn't say that this happens in many cases in an extended overclocking session - with normal apps most will run fine, but if we do hit a case where a game is particularly stressful in some areas then usually it will not last long and then return to the OC speeds when the power or temperatures have got back in check.
"In my opinion, the Legion Hardware article illustrates that our card is working exceptionally well - exactly in the way it is supposed to."
Please note we are only using FurMark as a tool to show the overclocking problem that we encountered.
However, the problem was first noticed when benchmarking the overclocked Radeon HD 5970 in long, stressful benchmarks such as S.T.A.L.K.E.R.: Clear Sky. In such games the overclocked Radeon HD 5970 failed to provide strong performance gains, and if we looped the tests several times the results often ended up lower than before any overclocking took place, because the card would throttle down to 550MHz.
AMD did make it clear that the Radeon HD 5970 throttles down to avoid any damage when operating at high temperatures. However, they also portrayed the Radeon HD 5970 as a stellar overclocker that could and would hit Radeon HD 5870 speeds. While this is true to a certain extent, as the Radeon HD 5970 will reach Radeon HD 5870 frequencies, it will also throttle back after a few minutes in certain stressful games such as S.T.A.L.K.E.R.
So this is not really a practical overclock, since the card will inevitably throttle back. While it won't happen as fast as it did in FurMark, it will happen, and that is the point. The Radeon HD 5970 is still a great product, and it is the world's fastest single graphics card, but it is not the overclocking behemoth that AMD made it out to be. At least not with the standard cooler, which they say is rated up to 400W.
The overclocking issues were first uncovered using FurMark, which is an excellent program for placing full load on both GPUs while measuring temperatures and performance. For the purpose of this article we clocked both GPUs at 875MHz, while leaving the GDDR5 memory at the default specification. In order to achieve this overclock the core voltage must be increased, and therefore we were testing at 1.1625v.
With the overclock in effect we ran the FurMark benchmark at 1680x1050; please note that we did increase the run time for the purpose of this test. With the core at 875MHz, the average frame rate was sitting at a fairly constant 115fps. However, after just 40 seconds temperatures exceeded 100 degrees, and shortly after this the Radeon HD 5970 overclocking problem presented itself.
In order to avoid cooking itself, the Radeon HD 5970 throttles both cores down, negating any positive performance impact the overclock would have. However, rather than throttle the cores down just a little, or even to the default operating specification, it drops the frequency to just 550MHz; that's a 24% underclock on each core relative to the stock 725MHz.
This saw the average frame rate decrease from 115fps before the throttling to well under 100fps. The screenshot below shows an average of just 94fps after 260 seconds, with the temperature leveling off at 89 degrees. Eventually the average was reduced to 90fps, which is a 22% drop in performance.
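For anyone who wants to sanity-check those percentages, here is the arithmetic, using only the figures quoted above. Note that the 24% figure is measured against the stock 725MHz clock, not the 875MHz overclock:

```python
# Quick sanity check of the percentages quoted above (pure arithmetic,
# using only the figures reported in the article).

default_clock = 725   # MHz, stock 5970 core clock
throttle_clock = 550  # MHz, clock after throttling

underclock = (default_clock - throttle_clock) / default_clock
print(f"Underclock vs. stock: {underclock:.1%}")   # ~24.1%

fps_before = 115  # average fps at 875 MHz, before throttling
fps_after = 90    # average fps once throttling settled

fps_drop = (fps_before - fps_after) / fps_before
print(f"Performance drop: {fps_drop:.1%}")         # ~21.7%
```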
What we found interesting was that when testing with FurMark, the Radeon HD 5970 would throttle even when operating at the default specification of 725/1000MHz. In fact, the only way we were able to avoid the throttling was to reduce the core frequency to 650MHz; at that frequency the throttling never took place.
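If you want to check for this kind of throttling on your own card, a rough sketch of a logging loop is below. The two read_* helpers are hypothetical placeholders, not a real API; you would wrap whatever your monitoring tool exposes (a GPU-Z log, driver APIs, etc.):

```python
# Rough sketch for logging throttle events during a stress-test run.
# read_core_clock_mhz() and read_gpu_temp_c() are HYPOTHETICAL placeholders;
# no such functions ship with Python. Wrap your own monitoring tool here.

import time

def read_core_clock_mhz():
    # Placeholder: return the current core clock from your monitoring tool.
    raise NotImplementedError("wrap your monitoring tool here")

def read_gpu_temp_c():
    # Placeholder: return the current GPU temperature from your tool.
    raise NotImplementedError("wrap your monitoring tool here")

def watch_for_throttle(expected_clock_mhz, duration_s=300):
    """Poll once per second and report any dip below the expected clock."""
    for elapsed in range(duration_s):
        clock = read_core_clock_mhz()
        temp = read_gpu_temp_c()
        if clock < expected_clock_mhz:
            print(f"{elapsed}s: throttled to {clock} MHz at {temp} C")
        time.sleep(1)

# Usage, once the two placeholders are filled in:
# watch_for_throttle(875)
```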
Do you trust AnandTech?? Well: *www.anandtech.com/weblog/showpost.aspx?i=657
Unless and until these issues are mentioned by a few sites, I'm sceptical about their claims.
From this, we can conclude that the VRM banks are receiving wildly different amounts of cooling. The VRMs on the right side are not cooled nearly as well as those on the left and as a result the card is being held back by the VRMs on that right side. To that extent, we believe that if all the VRMs received the same level of cooling as the VRMs on the left side, then the card would have no problem maintaining 5870 speeds while running the Dnet client, and likely even FurMark. It’s also worth noting that all the 5800 series cards share the design of placing the VRMs under a metal bar under the fan, but the 5970 seems to suffer more for it compared to the 5800 series.
Meanwhile in games it was a similar story. Crysis and the STALKER benchmark are two of the most demanding games we’ve tested on the 5970, and in both cases the VRMs again peaked at near 100C. As games aren’t going to hammer the SIMDs like GPGPU applications do, the power load from games should be lower than for GPGPU applications.
*Courtesy Guru3D (*www.guru3d.com/article/radeon-hd-5970-review-test/2)
The Radeon 5970 gets a clock frequency of 725 MHz with its total of 3200 (!) stream/shader processors. The memory is clocked at 4000 MHz effective (GDDR5). The end result is a graphics card with an idle wattage of 42W (less than your average dedicated graphics card anno 2009), and the peak wattage (two GPUs 100% stressed) is then 294W. And sure, that's a whole lot alright, but considering what we are dealing with, not exactly surprising or concerning. So while we have the full Radeon HD 5870 spec available, including the 1600 shader processors, it's clocked (per GPU) at Radeon 5850 specification.
Here's the kicker though, the 5970 cards will come 'unlocked' as ATI likes to call it. That means full control on overclocking. So if you want to forfeit on power consumption and can move beyond 300W, you could take this card to 800 ~ 900 MHz perhaps even 1000 MHz yourself and gain even more performance out of it -- at the expense of higher power consumption though.
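The 300W figure isn't arbitrary: the 5970 ships with one 6-pin and one 8-pin PEG connector, and within the PCIe specification that layout caps board power at 300W. A quick back-of-the-envelope check against Guru3D's 294W peak figure:

```python
# Why "move beyond 300W" matters: the 5970's in-spec power budget with its
# shipping connector layout (PCIe slot + one 6-pin + one 8-pin connector).

PCIE_SLOT_W = 75    # PCIe x16 slot limit
SIX_PIN_W = 75      # 6-pin PEG connector limit
EIGHT_PIN_W = 150   # 8-pin PEG connector limit

budget = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
peak_draw = 294     # Guru3D's measured peak, both GPUs fully stressed

print(f"In-spec power budget: {budget} W")              # 300 W
print(f"Headroom at stock clocks: {budget - peak_draw} W")  # just 6 W
```

With only about 6W of in-spec headroom at stock clocks, any meaningful overclock necessarily pushes the card past what the connectors are rated for, which is exactly the trade-off Guru3D is pointing at.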
I doubt air coolers will cut it; it's already using the vapor chamber cooler from Sapphire, undoubtedly the best air cooler to date. I think, as AnandTech pointed out (look @ my last post a bit earlier), they need to redesign the PCB so that both sets of VRMs (voltage regulator modules, one set for each GPU) lie under the vapor chamber; here the set on the right barely makes contact with the vapor chamber and overheats. Let's see what the HD5970 can do with custom coolers.
ATI (AMD) posted some saucy graphs that showed the 5970 had at least 20% OC potential, but in reality it needs to be underclocked in GPU-hungry games and apps. Wait, I'll dig around and post those graphs.
What Tkin has brought to the table is true and makes sense. The HD5970 was touted as a dual (internal Xfire) card which would match the speeds of the flagship HD5870.
It was quoted as being:
*Courtesy Guru3D (*www.guru3d.com/article/radeon-hd-5970-review-test/2)
I read about this on Guru3D, HardOCP, and Tom's Hardware.
All three had difficulty making the card reach 850MHz (core) and 1200MHz (memory). As ATI had stated, these were cherry-picked parts, so they should be the best of the best. So it is clearly the case that this dual card will not perform at HD5870 levels. There could be many reasons for this... engineering samples, incorrect coolers, wrong marketing (please, no).
So for now we can assume:
HD5970 <> 2 x HD5870.
Yeah, much better, but I think HD5850 CF can handle anything thrown at it, and at 17x2 = 34k, HD5850 CF is way more VFM than 27x2 = 54k for HD5870 CF.
I'll go for the 2 x HD5870. Much safer than going for the LOOONG card.
Not for Crysis.
I think 1x5850 is good enough for almost anything today.