nVidia's GT 300 will have 512 SHADERS and DX11


comp@ddict

EXIT: DATA Junkyard
*www.techpowerup.com/images/news/nvidia.gif
GT300 to Pack 512 Shader Processors

A real monster seems to be taking shape at NVIDIA. The company's next big graphics processor looks like a leap ahead of anything current-generation, the way the G80 was when it was released. It has already been learned that the GPU will use a new MIMD (multiple instructions, multiple data) mechanism for its highly parallel computing, which will be physically handled by not 384, but 512 shader processors. The count is a roughly 113% increase over that of the existing GT200, which has 240.

NVIDIA has reportedly upped the SP count per cluster to 32, against 24 for the current architecture, with a cluster count of 16 (16 x 32 = 512). Also in place will be 8 texture mapping units (TMUs) per cluster, so 128 in all. What exactly makes the GT300 a leap is not only the serious increase in parallelism, but also a fundamental change in the way a shader processor handles data and instructions; in theory, MIMD is a more efficient way of doing it.
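A quick sanity check on those rumoured cluster numbers (treating every figure as speculation, with Python just to spell out the arithmetic):

clusters = 16                  # rumoured cluster count
sps_per_cluster = 32           # up from 24 per cluster on GT200
tmus_per_cluster = 8           # rumoured TMUs per cluster

gt300_sps = clusters * sps_per_cluster     # 16 * 32 = 512
gt300_tmus = clusters * tmus_per_cluster   # 16 * 8 = 128

gt200_sps = 240
increase_pct = (gt300_sps - gt200_sps) / gt200_sps * 100   # ~113.3% more SPs than GT200
print(gt300_sps, gt300_tmus, round(increase_pct, 1))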

The new GPU will be DirectX 11 compliant and built on the 40 nm manufacturing process. We have yet to learn more about its memory subsystem. The GPU is expected to be released in Q4 2009.

My addition - the memory connection is expected to be 512-bit with 2GB of GDDR5 @ 4/5/7/? GHz.
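If that guess is right and those GHz figures are effective GDDR5 data rates (which is how GDDR5 speeds are usually quoted), the peak bandwidth would work out roughly like this - again, pure speculation:

bus_width_bits = 512
for effective_rate in (4, 5, 7):               # speculated effective data rates in GT/s
    bandwidth_gb_s = bus_width_bits / 8 * effective_rate
    print(effective_rate, "GT/s ->", bandwidth_gb_s, "GB/s")   # 256, 320, 448 GB/s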

*www.techpowerup.com/92099/GT300_to_Pack_512_Shader_Processors.html
 
OP
comp@ddict

EXIT: DATA Junkyard
^^ Let's hope ATi doesn't go from 800 SPs to 960 and stop there.

For the HD5870, at least 1200 SPs would be recommended; even then, I don't think the HD5870 X2 with 2400 SPs, aka 480 shaders (= GTX 295), will even be able to beat the GTX 350.
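That "2400 SPs = 480 shaders" conversion assumes ATI's 5-wide shader units (5 ALUs per unit on the 4xxx series), roughly like this:

ati_alus = 2400                        # speculated HD5870 X2 SP (ALU) count
alus_per_unit = 5                      # ATI's 5-wide shader units on the 4xxx series
ati_units = ati_alus // alus_per_unit  # 480 shader units
gtx295_sps = 2 * 240                   # GTX 295: two GT200 GPUs with 240 SPs each
print(ati_units, gtx295_sps)           # 480 480 - a loose comparison, not like for like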
 

desiibond

Bond, Desi Bond!
Yes. The SP-based architecture worked really well for the 4xxx series cards and it's time to move on!!

The dual-core (one core on top of the other) design of the HD5870 X2 looks interesting. Less heat, better cooling and better performance.
 
OP
comp@ddict

EXIT: DATA Junkyard
^^ That would become like the Pentium D, which had one core on top of the other, and there was virtually no increase in performance.

No, we want:-

- Native Dual Core
- 2GB memory (in unganged mode, as in AMD processors)
- A single PCB structure to reduce costs and improve efficiency
 

desiibond

Bond, Desi Bond!

^^ On the other hand, in the size of current-gen X2 cards, they can put in two PCBs, with each PCB carrying two GPU cores.

That's more performance in the same size, and being 40nm, they will be much cooler and less power hungry.
 

dOm1naTOr

Wise Old Owl
Are they paying attention or doing anything to reduce electricity consumption?

Of course. In idle mode, most of today's GPUs turn much of the SPs off, so power consumption comes down by about 1/2~1/3 at idle or under normal Windows apps. Idle/2D clocks are also much lower than the full 3D clocks.
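A very rough way to see why lower 2D clocks and voltages cut power so much: dynamic power scales roughly with frequency times voltage squared. The clocks and voltages below are made-up example numbers, not any real card's specs:

def relative_dynamic_power(freq_mhz, voltage):
    # dynamic power ~ f * V^2 (ignoring leakage and constant factors)
    return freq_mhz * voltage ** 2

load = relative_dynamic_power(650, 1.15)   # hypothetical 3D clock and voltage
idle = relative_dynamic_power(300, 1.00)   # hypothetical 2D clock and voltage
print(round(idle / load, 2))               # ~0.35, i.e. idle draws roughly a third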
 

desiibond

Bond, Desi Bond!
Looks like nVidia is having problems with the 40nm fabrication process and has pushed the release to 2010.

Looks like AMD's acquisition of ATI has done wonders for ATI. They are not letting nVidia take a breath.
 
OP
comp@ddict

EXIT: DATA Junkyard
^^^ Both nVidia and AMD have pushed their DX11 40nm GPUs to Q4 2009, so the rest of the year is gonna be pretty BORING.
 
OP
comp@ddict

EXIT: DATA Junkyard
^^ I'm not too sure that high-end 40nm cards will come, as TSMC is having problems with the 40nm node.
 