AMD Graphics Core Next: The architecture of future AMD GPUs!

comp@ddict

EXIT: DATA Junkyard
Well, Fermi was the same, and it bombed until it was fixed.

But AMD won't make that mistake. They always make a mainstream GPU to compete at the high end, i.e. the HD 4870, the HD 5850 and, in the future, most probably the HD 7950.
 

ico

Super Moderator
Staff member
well, I wouldn't really want it to be completely like Fermi.

See, the methodologies of both AMD and nVidia at the moment are completely different. AMD uses VLIW, which is Instruction Level Parallelism (shaders are grouped in sets of 5 or 4 and work together) and requires optimization through the compiler to get the best performance. nVidia uses Thread Level Parallelism, which doesn't really require much optimization at the compiler level. Now, the point is, VLIW if utilized correctly is better than nVidia's approach, but with VLIW under-utilization is a huge issue. nVidia's approach is easier, to be precise. That's why AMD cut it from VLIW5 to VLIW4, reducing the transistor count hugely while still managing a minor gain in performance.
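To picture the under-utilization issue, here's a minimal C sketch (C standing in for shader code; vec4 and both functions are made up for illustration, not real AMD ISA):

Code:
#include <stdio.h>

typedef struct { float x, y, z, w; } vec4;

/* Four independent multiplies: a VLIW4 compiler can issue all four
   in one bundle -> every slot busy. */
vec4 independent(vec4 a, vec4 b) {
    vec4 r;
    r.x = a.x * b.x;  /* slot 0 */
    r.y = a.y * b.y;  /* slot 1 */
    r.z = a.z * b.z;  /* slot 2 */
    r.w = a.w * b.w;  /* slot 3 */
    return r;
}

/* A serial chain: each op needs the previous result, so the compiler
   can fill only one slot per bundle -> ~25% utilization on VLIW4,
   while a scalar/TLP design (Fermi-style) loses nothing here. */
float chained(vec4 a, vec4 b) {
    float t = a.x * b.x;
    t = t + a.y;
    t = t * b.y;
    t = t + a.z;
    return t;
}

int main(void) {
    vec4 a = {1, 2, 3, 4}, b = {5, 6, 7, 8};
    vec4 r = independent(a, b);
    printf("%f %f %f %f\n", r.x, r.y, r.z, r.w);
    printf("%f\n", chained(a, b));
    return 0;
}

It also suggests why dropping from VLIW5 to VLIW4 helps: with one slot fewer to fill per bundle, the same code wastes less hardware.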

Now why does AMD use VLIW? VLIW shaders are very small (compared to nVidia's), so AMD can pack in as many as they want and clock them high while keeping thermals/power in check. nVidia had huge worries with Fermi on the GTX 465/470/480 and only got it right with the GTX 460 and the GTX 500 series.

Hoping AMD won't have the same problem.
 

tkin

Back to school!!
ico said:
Now why does AMD use VLIW? VLIW shaders are very small (compared to nVidia's), so AMD can pack in as many as they want and clock them high while keeping thermals/power in check. [...]
Only problem is compiler scheduling has its limits: performance gain is not linear with the increase in shader count, and since the tessellation engine is separate from the shaders there may be a bottleneck, things Fermi overcame. One thing I'd like to see is for them to keep producing high-performance GPUs. nVidia is moving to mobiles (lucrative, I'd say) and AMD is moving to Fusion slowly; hope this does not kill the GPU market :sad:
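A minimal sketch of that non-linearity, with made-up numbers (assume 80% of frame time scales with shader count and 20% is fixed-function work like a separate tessellation engine):

Code:
#include <stdio.h>

/* Amdahl-style model: the fixed-function share doesn't shrink as
   shader count grows, so overall speedup flattens out. Both
   fractions below are illustrative assumptions, not measurements. */
int main(void) {
    const double shader_frac = 0.8;  /* scales with shader count */
    const double fixed_frac  = 0.2;  /* e.g. separate tess engine */
    for (int k = 1; k <= 8; k *= 2) {
        double speedup = 1.0 / (fixed_frac + shader_frac / k);
        printf("%dx shaders -> %.2fx overall\n", k, speedup);
    }
    return 0;
}

Doubling shaders from 1x to 2x gives ~1.67x here, but going from 4x to 8x only moves ~2.5x to ~3.33x.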
 

ico

Super Moderator
Staff member
tkin said:
performance gain is not linear with the increase in shader count
This isn't really of much concern, as AMD's current approach and nVidia's approach are different. :) If AMD were following nVidia's approach and scaling wasn't linear, then you'd have a point. afaik, it is linear in AMD's case too, as long as you don't compare VLIW5 and VLIW4 parts with each other.
 

ico

Super Moderator
Staff member
HD 7000 will come at least 4-5 months before Kepler arrives. Southern Islands taped out in February whereas Kepler taped out in July.

AMD said:
We also passed several critical milestones in the second quarter as we prepare our next-generation 28-nanometer graphics family. We have working silicon in-house and remain on track to deliver the first members of what we expect will be another industry-leading GPU family to market later this year. We expect to be at the forefront of the GPU industry's transition to 28-nanometer.
Coming soon.
 

Skud

Super Moderator
Staff member
Nice update, ico.

So BD and HD7000 and 990FX - what name is AMD gonna give to this platform? ;)
 

vickybat

I am the night...I am...
The next-gen GPUs are really going to be something. Though nVidia is silent about its Kepler architecture, AMD has gone ahead and shown its compute-engine-based architecture, doing away with the older VLIW-based designs. The architecture looks promising and actually has x86-64 based computational abilities as well as shader processing, very similar to a Cell processor.

But the real question was left open, i.e. will it be good enough to render lifelike in-game graphics, and will it be a step above the current crop of GPUs?

The answer is YES, and AMD was again the first to disclose this and what we can expect from the next-gen GPUs. According to AMD, the next-gen Xbox, commonly termed the Xbox 720, will have the ability to render in-game graphics just like the movie "AVATAR" :shock:

That's right, and the microprocessor giant also said that the A.I. and physics capabilities of the next-gen hardware will allow every pedestrian in a game such as Grand Theft Auto to have a 'totally individual mentality,' meaning no more mob mentality.

We all know that AMD makes the GPU for the Xbox, and this time it might get one of the latest 28nm compute-engine-based GPUs. So I think it's a clear indication of what we might expect on the PC as well.

Source
 