GPU NEWS Channel

hellknight

BSD init pwns System V
What's up with the low memory interface width on all of the ATI cards? Why can't they expand it to something like the 448-bit bus on the GTX 260 and GTX 275? Please explain.
 

topgear

Super Moderator
Staff member
As of now, GDDR5 supports a 256-bit memory interface (max), so there is no question of 448 or 512 bit :p

Even with a 256-bit memory interface it's equivalent to 512-bit GDDR3 :p

So with just a 128-bit memory interface it's the same as 256-bit GDDR3 (which is used on many nVidia cards).
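
For anyone who wants to check the maths, here's a rough back-of-the-envelope sketch in Python. The per-pin data rates are typical generation figures I'm assuming (GDDR5 moves roughly twice the data per pin of GDDR3), not the specs of any particular card:

Code:
# Rough memory bandwidth estimate:
# GB/s = (bus width in bits) x (effective Gbps per pin) / 8 bits per byte
def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    return bus_width_bits * data_rate_gbps_per_pin / 8

GDDR3_RATE = 2.0   # ~2 Gbps per pin, typical GDDR3 (assumed figure)
GDDR5_RATE = 4.0   # ~4 Gbps per pin, roughly double GDDR3 (assumed figure)

for bits in (64, 128, 256):
    g5 = bandwidth_gb_s(bits, GDDR5_RATE)
    g3 = bandwidth_gb_s(bits * 2, GDDR3_RATE)
    print(f"{bits}-bit GDDR5 ~ {g5:.0f} GB/s vs {bits * 2}-bit GDDR3 ~ {g3:.0f} GB/s")

With those assumed rates, 128-bit GDDR5 and 256-bit GDDR3 both land around 64 GB/s, and 64-bit GDDR5 matches 128-bit GDDR3 at about 32 GB/s.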
 
Well yes, I agree on that. But TSMC's production yield on 40nm is now 60%, not 20% anymore.
60% still ain't the 99% it should be...
What's up with the low memory interface width on all of the ATI cards? Why can't they expand it to something like the 448-bit bus on the GTX 260 and GTX 275? Please explain.
It's cheaper to manufacture cards with a narrower memory interface. The performance drop can be gained back elsewhere, like by using GDDR5 instead of GDDR3, which makes the cards better VFM.
 
OP
comp@ddict

EXIT: DATA Junkyard
512-bit GDDR5 is for nVidia.

AMD plays it smart and goes for dual 256-bit GDDR5 (2 x 256-bit) instead in the RV870 X2, i.e. the R800 chip.
 

desiibond

Bond, Desi Bond!
Whatever the specs are, in the end FPS is what we take as the test result. Look at the HD4770: it rocks even with a 128-bit GDDR5 bus :)
 
Whatever the specs are, in the end FPS is what we take as the test result. Look at the HD4770: it rocks even with a 128-bit GDDR5 bus :)
That's for end users. BUT:

1. Like the old GHz myth that higher means better (see the Athlon vs Pentium wars), there is a bandwidth myth too. People tend to think that 256-bit GDDR3 is better than 128-bit GDDR5.

2. A narrower memory interface is indeed CHEAPER to manufacture. And that is the reason why the HD4870X2 launched at the price of a SINGLE GTX280 and pwned the world.
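
To put some numbers behind point 1, here's a quick comparison - the specs below are approximate launch figures quoted from memory, so treat them as illustrative only:

Code:
# Approximate launch memory specs (from memory - illustrative only):
# (bus width in bits, effective per-pin data rate in Gbps)
cards = {
    "HD 4770 (128-bit GDDR5)": (128, 3.2),
    "9800 GT (256-bit GDDR3)": (256, 1.8),
}
for name, (bits, rate) in cards.items():
    print(f"{name}: ~{bits * rate / 8:.1f} GB/s")

Roughly 51 GB/s vs 58 GB/s - about the same bandwidth, even though the GDDR5 card's bus is half as wide.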
 
OP
comp@ddict

EXIT: DATA Junkyard
Yup, GDDR5 has become cheap to manufacture thanks to AMD, and 128-bit is dead cheap for them.

I wonder if they will use 64-bit GDDR5 in the lower end HD5000 series.
 

desiibond

Bond, Desi Bond!
ATI Stream vs. NVIDIA CUDA - GPGPU computing battle royale

Parameter 1: Evaluate CPU usage and determine how much of the computing load is being handled by the CPU with ATI Stream/CUDA enabled and disabled

Winner: ATI Stream. During our evaluation, we noticed considerable differences in CPU usage between transcoding with ATI Stream and CUDA. CUDA's average CPU usage was in the 80s, while Stream was closer to the high 60s. The extra CPU usage didn't really help CUDA produce faster transcoding times either. So, the winner would have to be ATI Stream because it used fewer resources and produced faster transcoding times. It also left enough resources for users to do additional tasks during transcoding.

Parameter 2: What performance differences will consumers notice between using ATI Stream or CUDA?

Winner: ATI Stream. The performance differences between these two GPGPU technologies were a bit mixed, because Stream used less CPU power and had better transcoding times but seemed to produce lower-quality videos. If we strictly viewed just the "performance" portion of our review, ATI Stream would win because of its benchmark results during performance testing. We'll give a slight edge to ATI Stream in this portion of our ranking.

Parameter 3: Subjectively evaluate the image quality of outputted video that was transcoded with ATI Stream and CUDA

Winner: NVIDIA CUDA. CUDA seemed to produce a higher-quality image in two out of the three video clips we captured screenshots from. ATI Stream's outputted video was a little softer in a few parts of the test videos, and CUDA's screenshots were brighter, clearer, and showed a little more detail overall. So, we'll give CUDA the image quality crown.

Read On
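
If anyone wants to reproduce the Parameter 1 CPU-usage test at home, here's a rough sketch of the idea. The encoder command line is a hypothetical placeholder (substitute whatever Stream/CUDA-enabled transcoder you're testing), and it needs the psutil package:

Code:
# Sample total CPU usage once a second while a transcode runs,
# then report the average. Requires psutil (pip install psutil).
import subprocess
import psutil

cmd = ["my_transcoder", "input.mpg", "output.mp4"]   # hypothetical command

proc = subprocess.Popen(cmd)
samples = []
while proc.poll() is None:                 # encoder still running
    samples.append(psutil.cpu_percent(interval=1.0))

if samples:
    print(f"average CPU usage: {sum(samples) / len(samples):.1f}%")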
 
Wait a second, is OpenCL dead or what? I am waiting for both ATI and nVidia to embrace OpenCL and say tata to Stream and CUDA so that we can see some interoperability here, and these guys are still having Stream vs CUDA wars???
-----------------------------------------
Posted again:
-----------------------------------------
Yup, GDDR5 has become cheap to manufacture thanks to AMD, and 128-bit is dead cheap for them.

I wonder if they will use 64-bit GDDR5 in the lower end HD5000 series.
64-bit GDDR5 sounds possible, but I think there is a higher probability of them using "normal" DDR3 (not GDDR3) memory, since it's evolving fast, cheap to manufacture, widely available and has low power consumption.
 

topgear

Super Moderator
Staff member
@ desiibond - thanks for the review :p


@ MetalheadGautham - 64-bit GDDR5 may perform the same as 128-bit GDDR3. I don't think gfx card manufacturers will use DDR3 as a replacement for 64-bit GDDR3 or GDDR5, because DDR3 has higher latency compared to both GDDR3 & GDDR5. What you may see is manufacturers using DDR3 as a DDR2 replacement on dirt-cheap gfx cards.


@ comp@ddict - yup, ATI may use either 64-bit GDDR5 or 128-bit GDDR3, as both perform nearly the same & the production cost is also nearly the same.
 

desiibond

Bond, Desi Bond!
World's first Radeon HD 4750 pictured

This HD4750 adopts a non-reference PCB and features an AC cooler instead of the older cooler. Compared to the HD4770 it runs at lower frequencies, and thanks to the low heat dissipation of the 40nm RV740, the cooling demand can be met with an aluminum heatsink and a large, slow-spinning silent fan.

Read On
 
OP
comp@ddict

EXIT: DATA Junkyard
But still, it's only 20MHz slower than the HD4770, hardly any difference. And it doesn't require a power connector. It will become the new budget king (short-lived).
 

saqib_khan

I M A *STAR*
World's first Radeon HD 4750 pictured

This HD4750 adopts a non-reference PCB and features an AC cooler instead of the older cooler. Compared to the HD4770 it runs at lower frequencies, and thanks to the low heat dissipation of the 40nm RV740, the cooling demand can be met with an aluminum heatsink and a large, slow-spinning silent fan.

Read On

Sorry for asking, but what's so special about it? BTW, Windows 7 has DX11, so should I wait for DX11 or should I purchase the 4770?

Thnx in advance :)
 