AMD Offers Petaflop Supercomputing Via GPUs


4T7

Journeyman
AMD had some interesting things to say at its CES keynote, one of which signaled an industry paradigm shift. When AMD's CEO, Dirk Meyer, says that more transistors are no longer the way to push innovation, you have to pay attention.

While he didn't say it outright, the point is simple. Spending more transistors on a single core yields diminishing returns, and more cores aren't really going to do much for the consumer at this point. There has to be another way to use those transistors, and to use what the fabs can pump out. That is where future innovation will come from.

One thing AMD thinks will drive demand is film-quality animation. The example it used was Spider-Man 2, one of the first films with a fully digital actor. At the end, when Doc Ock falls into the pit, he is entirely digital. The actor was captured in a light stage, and from that data he can be animated, slowly, but almost perfectly. Will Smith in Hancock was also shown as an example of the same technology.

But when frames take a day or more to render, that puts a big crimp in the usefulness of any technology. With GPGPU acceleration, companies can now manipulate that data in nearly real time instead of days. That is progress for you, but ever-increasing data rates will catch up with and overwhelm that performance soon enough.
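Just to put numbers on that jump, here is a quick back-of-envelope sketch; the exactly-one-day render time and the 24 fps playback target are illustrative assumptions, not figures from the keynote.

```python
# Back-of-envelope sketch with assumed, illustrative numbers: how large a
# speedup going from "a day per frame" to near real time actually implies.

SECONDS_PER_DAY = 24 * 60 * 60   # assumed offline render time for one frame
TARGET_FPS = 24                  # standard film frame rate

realtime_seconds_per_frame = 1 / TARGET_FPS
speedup_needed = SECONDS_PER_DAY / realtime_seconds_per_frame

print(f"Speedup needed for real-time playback: ~{speedup_needed:,.0f}x")
# Roughly 2,073,600x -- which is why ever-growing scene complexity can still
# swallow any fixed amount of GPU horsepower.
```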

Luckily, there is a solution. AMD announced a petaflop supercomputer for on-premises or cloud rendering.

Imagine a supercomputer like the current petaflop designs, but instead of filling a datacenter, it fits in a single room. AMD is building a system with more than 1,000 R4870-class GPUs, each pumping out just over a teraflop, and the resulting complex will consume about one tenth of the power of other petaflop-scale supercomputer installations.
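A quick sanity check of the petaflop math is sketched below; the per-GPU figure is an assumption based on the roughly 1.2 TFLOPS single-precision peak commonly quoted for an R4870-class card, since the keynote only said "just over a teraflop".

```python
# Rough sanity check of the petaflop claim. The per-GPU number is an
# assumption (~1.2 TFLOPS single-precision peak for an R4870-class card).

NUM_GPUS = 1_000           # "more than 1,000" GPUs
TFLOPS_PER_GPU = 1.2       # assumed single-precision peak per card

aggregate_pflops = NUM_GPUS * TFLOPS_PER_GPU / 1_000
print(f"Aggregate peak: ~{aggregate_pflops:.1f} PFLOPS")
# About 1.2 PFLOPS of single-precision GPU peak -- a different yardstick from
# the sustained double-precision Linpack runs used to rank traditional
# petaflop supercomputers.
```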

The software AMD runs on it comes from Otoy, and the system is called the AMD Fusion Render Cloud.

Because AMD controls both the CPU and GPU designs, it can operate them in much closer synchronization, building a cohesive fabric of nodes instead of just a collection of cards in boxes like some other 'solutions'. The system can also be upgraded continuously, so when the R5870 hits, the machines can be refreshed with the new cards.

With that, AMD is taking the petaflop machine out of large government-sponsored institutions and purpose-built facilities with customized power feeds and cooling, and putting it in a single room. Not only that, but it will be commercialized and generally available.

This technology will trickle down to the consumer in short order, and soon enough, people will be complaining that their petaflop phone is too damn slow.

Source
 

Adam_waugh

Right off the assembly line

Nice article. It's really appreciated and looks like it will be helpful to all members.
 