AMD HD 6950 and 6970 released

vickybat

I am the night...I am...
Well buddy, I have answered as a neutral person. Just as you say the 5xx series is an optimised 4xx series, the same can be said for AMD, only it's done at the chip level. I said AMD realised their folly and are now experimenting with the VLIW4 design to achieve better scalar performance.

Maybe the next AMD series won't use VLIW4 at all if they can't reap its benefits. Their engineers must be toiling as we speak to make something new, something that can achieve REALISM. Both camps will do that, and we will continue to see some architectural breakthroughs.

But currently I don't see any breakthrough from AMD's side.
 

tkin

Back to school!!
Hello i want to get the more on the AMD hard disk and the latest version.
Are you for real?? An AMD hard disk, and that too in the first post?? HD6970 is the name of a graphics card, not a hard disk.

But wait, if you are a troll/spammer then you win the grand prize for
*images3.wikia.nocookie.net/__cb20100722074142/crysis/images/c/c2/MAXIMUM_Trolling.jpg

Fermi was an architectural improvement but was plagued by heat issues. These were sorted out in the 5xx series, and along with them, compute efficiency improved as I mentioned earlier.

In the case of AMD, they realised their folly with VLIW5, as the 5th SP was rarely used once workloads grew narrower. In a unified shader architecture, a VLIW design is arguably unnecessary; a more scalar architecture that exploits TLP (thread-level parallelism) is a better fit, whereas AMD's design relies more on ILP (instruction-level parallelism).

"AMD’s architecture is heavily invested in Instruction Level Parallelism, that is having instructions in a single thread that have no dependencies on each other that can be executed in parallel. With VLIW5 the best case scenario is that 5 instructions can be scheduled together on every SPU every clock, a scenario that rarely happens. We’ve already touched on how in games AMD is seeing an average of 3.4, which is actually pretty good but still is under 80% efficient. Ultimately extracting ILP from a workload is hard, leading to a wide delta between the best and worst case scenarios." (Source: AnandTech)
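
To make the quoted ILP point concrete, here is a toy Python sketch (my own illustration with made-up instruction dependencies, not AMD's actual compiler or scheduler) of greedy bundle packing: instructions that depend on each other cannot share a bundle, so a dependency chain fills only one slot per clock, which is exactly the gap between the 5-wide best case and the ~3.4 average.

```python
# Toy VLIW bundle packer (illustrative only, not AMD's real scheduler).
# deps[i] is the set of earlier instructions that instruction i depends on;
# an instruction can issue only after all of its inputs completed in an
# *earlier* bundle, so dependency chains leave issue slots empty.
def pack_bundles(deps, width=5):
    bundles, done = [], set()
    remaining = list(range(len(deps)))
    while remaining:
        # take up to `width` instructions whose inputs are all done
        bundle = [i for i in remaining if deps[i] <= done][:width]
        for i in bundle:
            remaining.remove(i)
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

# A chain a -> b -> c -> d plus one independent instruction:
# 5 instructions need 4 bundles, i.e. 5 of 20 issue slots used (25%).
chain = [set(), {0}, {1}, {2}, set()]
print(pack_bundles(chain))  # [[0, 4], [1], [2], [3]]
```

With fully independent instructions the same packer fills one bundle completely, which is the rarely-seen best case the quote describes.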


*i56.tinypic.com/1zf323l.png

So AMD discarded the 5th SP, the unit that handled the transcendental (special-function) operations, and instead has three of the remaining four SPs gang together to execute those operations when needed, while regular vertex and pixel shading runs one operation per SP.

Now I will not call it an architectural breakthrough but rather a fine-tune to utilise its resources properly, and the die space saved can be used to add more SIMDs. This is what makes Cayman faster than Cypress.
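
A rough back-of-envelope sketch of why the narrower design utilises its slots better, assuming (a simplification on my part) that the ~3.4 average instructions per clock from the AnandTech quote carries over unchanged to the 4-wide design:

```python
# Back-of-envelope slot utilisation for VLIW5 vs VLIW4, reusing the ~3.4
# average ILP figure quoted above; this is a simplification, since the real
# average shifts slightly with the architecture change.
avg_ilp = 3.4
vliw5_util = avg_ilp / 5           # Cypress: 5 slots per SPU
vliw4_util = min(avg_ilp, 4) / 4   # Cayman: 4 equal slots per SPU
print(f"VLIW5: {vliw5_util:.0%}, VLIW4: {vliw4_util:.0%}")
# VLIW5: 68%, VLIW4: 85%
```

Under that assumption, per-SPU utilisation rises even before counting the extra SIMDs that fit in the freed die area.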

Now in their next design, it remains to be seen whether AMD stays on the VLIW4 path, takes a more contemporary scalar path like nVidia, or has some new tricks up their sleeve.

Now about CUDA, I never cooked anything up; it was backed by facts. CUDA is still great, and transcoding is not the only thing it accelerates, unlike QuickSync. There are lots of other apps accelerated by CUDA, and Stream, though getting better, is nowhere near CUDA yet.

CUDA is good and everybody will agree, but in case you don't remember, I was backing PhysX, and I still consider it good. But as other members pointed out, it's not a make-or-break deal when purchasing a card. Performance matters most.
AMD is experimenting with VLIW4; if it's not successful they will abandon it in their next GPUs. They also need to redesign their drivers and compilers for this architecture, so buying a 69xx is a serious gamble at this point.

CUDA is used mostly for transcoding and in some processing software. In terms of quality, both CUDA and Stream suck: when I use MediaEspresso to convert video using Stream, the end result is disgustingly horrible and pixelated, and the video also hitches. In terms of quality, x86 rules.
 

vickybat

I am the night...I am...
@ Tkin

Yes buddy, I totally agree with you. Stream is a bit better than CUDA in terms of output image quality, but CUDA transcodes faster. Both are slower than QuickSync, and the latter also produces better image quality.
 

tkin

Back to school!!
@ Tkin

Yes buddy, I totally agree with you. Stream is a bit better than CUDA in terms of output image quality, but CUDA transcodes faster. Both are slower than QuickSync, and the latter also produces better image quality.
You can't use QuickSync unless you hook a monitor up to it. Intel sucks.
 

vickybat

I am the night...I am...
@ tkin

Yeah, you got that right again. Let's see what Z68 brings to the table.

Off-topic discussion though. We should continue this in the "Intel Sandy Bridge released" thread.
 

clear_lot

Journeyman
CUDA is used mostly for transcoding and in some processing software. In terms of quality, both CUDA and Stream suck: when I use MediaEspresso to convert video using Stream, the end result is disgustingly horrible and pixelated, and the video also hitches. In terms of quality, x86 rules.

+1
I've been using MediaEspresso for a few days, and using CUDA to transcode video makes the video JERKY. I don't know why.
Most of the review sites that do the testing never mentioned this. Even at AnandTech, where they compared the video quality of CUDA, Stream, x86 and QuickSync, they never said this.
 

rchi84

In the zone
Here is how I look at the arguments for and against the 6950.

Comparisons to the 570 are unfair because that card belongs in an upper price segment. The 6970 and 570 trade blows in a couple of games, and the 570 dominates in "TWIMTBP" games, not surprisingly. But for a lot of people, the money saved by opting for the 6950 instead of the 6970 can be better spent on stuff like cooling, for example.

But at the current 17K price segment, the 6950 2GB model is a phenomenal package, simply because I think the frame buffer advantage will come into play at 1080p as newer games are released (Skyrim, wink wink). Even if you don't game at 1600p, the added frame buffer helps with AA and AF. And even if the 6950-to-6970 mod isn't stable or doable with your card, it has great performance on its own.

Tessellation is something nVidia are much better at than AMD, and that's not going to change in the current generation. But with the 6xxx series, AMD aren't completely lost. Still, if tessellation is such a game changer for you, then there is no choice other than nVidia.

CUDA vs Stream has never been an issue for me, as I only look at gaming. Transcoding isn't an issue either, as I always leave my PC on all night for such purposes :D So a half-hour or two-minute session doesn't affect me when I'm asleep lol.

Now for PhysX: from my understanding, when you enable it, your fps take a massive hit on anything except the highest-grade cards like the 480 or 580. The recommended way to enjoy PhysX is to have a dedicated nVidia card in addition to your main GPU. That means additional investment in a card, a bigger PSU, and an SLI or CrossFire motherboard, besides the potential for greater heating inside the cabinet.

And besides, there is no game out there that requires PhysX to run, simply because the installed user base isn't large enough. Every current game can be run without PhysX, with the assurance that core gameplay elements won't be affected by its absence. The current implementation of PhysX is mostly cosmetic, because no developer will take the risk of introducing mechanics that can only be implemented with PhysX. So if you decide not to go the PhysX route, you haven't lost all that much really, besides watching debris, dust, cloth and papers fly, which, to be honest, I don't give more than a cursory glance to when I am playing.

At the moment, if you've got the money for it, the 6950, more than the 6970, is unmatched for the performance you get in its price segment.

Of course, I could be wrong on a lot of things..
 

tkin

Back to school!!
Here is how I look at the arguments for and against the 6950.

Comparisons to the 570 are unfair because that card belongs in an upper price segment. The 6970 and 570 trade blows in a couple of games, and the 570 dominates in "TWIMTBP" games, not surprisingly. But for a lot of people, the money saved by opting for the 6950 instead of the 6970 can be better spent on stuff like cooling, for example.

But at the current 17K price segment, the 6950 2GB model is a phenomenal package, simply because I think the frame buffer advantage will come into play at 1080p as newer games are released (Skyrim, wink wink). Even if you don't game at 1600p, the added frame buffer helps with AA and AF. And even if the 6950-to-6970 mod isn't stable or doable with your card, it has great performance on its own.

Tessellation is something nVidia are much better at than AMD, and that's not going to change in the current generation. But with the 6xxx series, AMD aren't completely lost. Still, if tessellation is such a game changer for you, then there is no choice other than nVidia.

CUDA vs Stream has never been an issue for me, as I only look at gaming. Transcoding isn't an issue either, as I always leave my PC on all night for such purposes :D So a half-hour or two-minute session doesn't affect me when I'm asleep lol.

Now for PhysX: from my understanding, when you enable it, your fps take a massive hit on anything except the highest-grade cards like the 480 or 580. The recommended way to enjoy PhysX is to have a dedicated nVidia card in addition to your main GPU. That means additional investment in a card, a bigger PSU, and an SLI or CrossFire motherboard, besides the potential for greater heating inside the cabinet.

And besides, there is no game out there that requires PhysX to run, simply because the installed user base isn't large enough. Every current game can be run without PhysX, with the assurance that core gameplay elements won't be affected by its absence. The current implementation of PhysX is mostly cosmetic, because no developer will take the risk of introducing mechanics that can only be implemented with PhysX. So if you decide not to go the PhysX route, you haven't lost all that much really, besides watching debris, dust, cloth and papers fly, which, to be honest, I don't give more than a cursory glance to when I am playing.

At the moment, if you've got the money for it, the 6950, more than the 6970, is unmatched for the performance you get in its price segment.

Of course, I could be wrong on a lot of things..
I have only one issue with AMD, and that is drivers. The latest 10.12 drivers (11.1a is a hotfix) gave BSODs when the monitor went to sleep, reported by both HD6xxx and HD5xxx users (and HD4xxx users too), so it seems to be a general bug. It corrupted my torrent downloads (uTorrent) and I had to go through a lot of hassle just to get things working again. The first 11.1a driver also broke OpenGL tessellation, so AMD's driver team is absolutely crap now. I have been badly burned by the HD5850 (just like I was burnt by the X1900XTX, which made me jump ship); my next card will be from nVidia.

PS: The 570 is mostly faster than the 6970, and you also get PhysX with it (and with the 570's massive processing power you can actually use it in many games, maybe except Metro). Also, currently the only H.264 decoder I know of that can use GPU acceleration is CoreAVC, and it does not support AMD. I like to play videos in WMP (due to its minimalistic looks), and CoreAVC lets me use the nVidia GPU for that; there is no AMD equivalent. AMD's own codecs (Avivo) get stuck on many formats (and any third-party tools like MediaEspresso also get stuck, as they use Avivo instead of their own codecs), whereas nVidia does not provide codecs but opens up their interface, and MediaEspresso runs flawlessly.
 

Cilus

laborare est orare
Tkin, CoreAVC is not the only H.264 decoder to use GPU acceleration; there are some others. Examples are the FFDShow DXVA decoder and Media Player Classic Home Cinema's internal video decoder filters. They also support ATI Stream along with CUDA.
I use MPC-HC with lots of post-processing shaders, and it plays everything very smoothly without any hassle. I've also checked the GPU and CPU usage while playing such content using Task Manager and GPU-Z. CPU usage never goes over 15-20% and GPU usage is around 30 to 40%.
 

tkin

Back to school!!
Tkin, CoreAVC is not the only H.264 decoder to use GPU acceleration; there are some others. Examples are the FFDShow DXVA decoder and Media Player Classic Home Cinema's internal video decoder filters. They also support ATI Stream along with CUDA.
I use MPC-HC with lots of post-processing shaders, and it plays everything very smoothly without any hassle. I've also checked the GPU and CPU usage while playing such content using Task Manager and GPU-Z. CPU usage never goes over 15-20% and GPU usage is around 30 to 40%.
FFDShow DXVA has issues with some video files (1080p); CoreAVC is by far the most stable, and it also doesn't get stuck for a few seconds when seeking in 1080p video the way FFDShow DXVA does.
 

vickybat

I am the night...I am...
@ tkin

I sort of agree with you. I'm currently using the DXVA decoder in Media Player Classic Home Cinema and having some issues playing 1080p content, like the video freezing up at times. Even seeking is a problem. My video card is an Asus Radeon 5750.
 

Cilus

laborare est orare
I think you are using the Haali Media Splitter for splitting MP4 and MKV files. If not, then use it along with the latest FFDShow build. Also, the latest Media Player Classic Home Cinema's built-in filters are very good. I have tried it with the 11 GB 1080p rip of Avatar. No problem.
 

tkin

Back to school!!
I think you are using the Haali Media Splitter for splitting MP4 and MKV files. If not, then use it along with the latest FFDShow build. Also, the latest Media Player Classic Home Cinema's built-in filters are very good. I have tried it with the 11 GB 1080p rip of Avatar. No problem.
Yes, I use the Haali splitter and decode the H.264 stream with FFDShow (not the latest; a few months old, since updating codecs is a total pain). The filters in MPC-HC are nice, but the issue is that if I change filters a lot (say, about 5-6 times), the graphics driver crashes (VPU recovery error). So I just stick to plain DXVA in MPC-HC and no filters; the hassle is way too much.

Again, I blame AMD's drivers for the crashes; using nVidia, I never experienced a VPU crash with MPC-HC and filters.
 

rchi84

In the zone
An interesting link reviewing the PowerColor PCS++ 6950.

PowerColor Radeon PCS++ 6950

The best thing is that PowerColor have included two BIOSes on this card. The first one uses reference clocks, and the second one :)D) increases the GPU clock to 880 MHz AND unlocks the additional shaders.

That's right: PowerColor have released a card that comes close to 6970 specs straight out of the factory. The only thing separating the two is that the 6950's memory comes clocked slower, but a little OCing takes care of that.

The card's MSRP is around $310, which is just slightly more than the regular 2GB model.

This has to be the best 6950 model to get!!
 

max_snyper

Maximum Effort!!!!!!
Now that both camps have launched their upper mid-segment cards, what's next in their lineups? Are they building GPUs from scratch, or following the same manufacturing process and just optimising the current architecture?
I've seen tons and tons of reviews of the GTX 560 Ti and HD6950. IMO the HD6950 (1GB version) is just a filler in the AMD lineup, nothing else, though it performs well if not best, whereas nVidia did the same with the 560, filling and optimising a gap in their lineup.

I read that AMD is working on Antilles (a dual-GPU solution), with a stated release after Chinese New Year, and that Southern Islands is the later one really coming into existence. If so, then by when?
And what about nVidia? Are they working on a new GPU or a new lineup of GPUs?

I read the heated debate on CUDA, Stream, PhysX etc. IMO, if a person wants to buy a graphics card for animation, rendering, image processing or whatever, he will go and buy a professional card from either camp; why would he look at the gaming series?
1. Professional series cards tend to provide good results in rendering (though they are costly).
2. Who will use a gaming card thinking it will give good rendering results against the professional series?
3. And why would a gamer make CUDA, Stream or PhysX the main criteria for selecting a graphics card? He will just read the reviews, see how it performs in games, and buy the card as per his needs.
All the technology the companies brag about is just a marketing gimmick; for the end user (the gamer) it's just an add-on feature to brag about.
And seriously, since my exposure to gaming I haven't read a single review that put forward the "add-ons" as the unique selling feature of any card; they just go on game performance, power consumption, and efficiency against the last generation of cards, that's it. Nor have I spotted any application that has proved beneficial or been boosted with the help of these add-ons.
Performance in games, reliability of the card in supporting upcoming games (at least for a year or two), and power consumption are what matter when choosing a graphics card.
Think about it.
Just bragging about the add-ons doesn't work; what works is how efficient these add-ons are at doing their job.
Well, that's just my opinion about the graphics card manufacturers.
 
OP
topgear

Super Moderator
Staff member
Gigabyte HD6950 1GB (GV-R695OC-1GD) clocked at 870 MHz! - probably the fastest and longest HD6950 1GB card.

First of all, the cooling system. The card is equipped with WINDFORCE™ 3X, the latest cooling technology that differentiates the brand's graphics cards from the rest. WINDFORCE™ 3X features three ultra-quiet PWM fans. The special inclined triple-fan design effectively minimizes turbulence between the three fans. With a unique vapor chamber, WINDFORCE™ 3X is able to transfer heat from the hot spot to the cool spot as thermal energy evaporates to the surrounding air. By capillary action, the condensed liquid droplets circulate back to the base of the chamber. The cycles of evaporation and condensation enhance heat dissipation for greater cooling efficiency. Moreover, WINDFORCE™ 3X is equipped with three copper heat pipes to speed up heat dissipation.

Next, the specification: the card has 1GB of GDDR5 on a 256-bit memory bus. The default core clock has been increased by 70 MHz to 870 MHz, while the memory is unchanged compared to the reference card's specs. Another important thing to note is that the card belongs to the Ultra Durable line-up. Ultra Durable VGA series cards provide a dramatic cooling effect, lowering both GPU and memory temperatures. By adopting a 2 oz copper PCB, Japanese solid capacitors, ferrite-core chokes, and low RDS(on) MOSFETs, Ultra Durable VGA makes the PCB itself act as a big heat sink. According to Gigabyte's testing results, Ultra Durable VGA graphics accelerators can lower GPU temperature by 5% to 10% and memory temperature by 10% to 40%. These cards also feature reduced voltage ripple in normal and transient states, which effectively lowers power noise and ensures higher overclocking capability. The remaining specs and capabilities do not differ from its 2GB brother.

*hw-lab.com/uploads/hardware/videocards/amd-cayman/gigabyte-hd6950-1gb/gigavyte-hd6950-1gb_1_x500.jpg

image courtesy of hw-lab.com

Gigabyte shows 1GB version of HD 6950 | hw-lab.com
 

Jaskanwar Singh

Aspiring Novelist
I read the heated debate on CUDA, Stream, PhysX etc. IMO, if a person wants to buy a graphics card for animation, rendering, image processing or whatever, he will go and buy a professional card from either camp; why would he look at the gaming series?
1. Professional series cards tend to provide good results in rendering (though they are costly).
2. Who will use a gaming card thinking it will give good rendering results against the professional series?
3. And why would a gamer make CUDA, Stream or PhysX the main criteria for selecting a graphics card? He will just read the reviews, see how it performs in games, and buy the card as per his needs.
All the technology the companies brag about is just a marketing gimmick; for the end user (the gamer) it's just an add-on feature to brag about.
And seriously, since my exposure to gaming I haven't read a single review that put forward the "add-ons" as the unique selling feature of any card; they just go on game performance, power consumption, and efficiency against the last generation of cards, that's it. Nor have I spotted any application that has proved beneficial or been boosted with the help of these add-ons.
Performance in games, reliability of the card in supporting upcoming games (at least for a year or two), and power consumption are what matter when choosing a graphics card.
Think about it.
Just bragging about the add-ons doesn't work; what works is how efficient these add-ons are at doing their job.
Well, that's just my opinion about the graphics card manufacturers.

+1. Very well said. And PhysX is only good in some games, like Mafia II and Metro. But just think what will happen to the 560 on enabling PhysX in Metro: fps are already 23 at 1920x1080; with PhysX on they'll drop to 12-13 :lol:
 