Do gigahertz matter?


vickybat

I am the night...I am...
^^ It's not like that. Clock speed differences matter within the same family of processors, but not across different families. The processor architecture matters a lot.

For example, a Core i5 2400 @ 3.1 GHz is a lot faster than an AMD Phenom II 955BE @ 3.2 GHz.

Currently, Intel's processor architecture is more efficient than AMD's. Its Sandy Bridge based CPUs beat all Phenom IIs and Athlon IIs across all benchmarks.
AMD's Bulldozer based CPUs, which will have a completely new architecture, are expected to turn the tide.
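
To put rough numbers on the clock-versus-architecture point, here is a minimal sketch of how per-core performance depends on instructions per clock (IPC) as well as frequency. The IPC figures below are hypothetical and only meant to show why a 3.1 GHz chip can outrun a 3.2 GHz one.

```python
# Rough model: per-core performance ~ IPC x clock speed.
# The IPC numbers below are made up for illustration; real IPC
# varies per workload and is not a single published figure.
def perf(ipc, clock_ghz):
    """Approximate instructions executed per second (in billions)."""
    return ipc * clock_ghz

core_i5_2400  = perf(ipc=1.5, clock_ghz=3.1)  # assumed higher IPC (Sandy Bridge)
phenom_ii_955 = perf(ipc=1.0, clock_ghz=3.2)  # assumed lower IPC (K10)

print(f"i5 2400 : {core_i5_2400:.2f} billion instr/s (rough)")
print(f"PII 955 : {phenom_ii_955:.2f} billion instr/s (rough)")
# Despite the lower clock, the higher-IPC design comes out ahead.
```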
 
Guys, I am confused at one place. Why are Intel processors better than AMD ones clocked at the same frequency?

Intel has more L1, L2 & L3 cache. The more the cache, the better the performance (I think so).

Let's take the example of the AMD Athlon II X4 635 @ 2.9 GHz and the Intel Core i3 530 @ 2.9 GHz. The X4 has 4 cores & 4 threads, while the i3 has only 2 cores but 4 threads. The X4 has no L3 cache, but the i3 has 4MB of L3 cache. So both have approximately the same performance.
AnandTech - AMD's New Year Refresh: Athlon II X4 635, Phenom II X2 555, Athlon II X2 255 & Athlon II X3 440
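
One way to see why extra cache can offset fewer cores is the textbook average memory access time formula. This is a generic sketch with made-up latencies and hit rates, not measured numbers for these particular chips.

```python
# Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
# All numbers below are illustrative assumptions, not benchmarks.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# With an L3 cache, many L2 misses are caught before going out to DRAM.
with_l3    = amat(hit_time_ns=4, miss_rate=0.05, miss_penalty_ns=60)
without_l3 = amat(hit_time_ns=4, miss_rate=0.15, miss_penalty_ns=60)

print(f"AMAT with L3    : {with_l3:.1f} ns")
print(f"AMAT without L3 : {without_l3:.1f} ns")
# A lower average memory access time keeps the cores fed with data,
# which is part of why a dual-core i3 can keep up with a quad core.
```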
 

vickybat

I am the night...I am...
^^ It's not due to higher cache memory alone. Cache memory is a part of CPU design. The Phenom II X6s beat the Nehalem based Core i5 7xx series in multithreaded apps despite having 2MB less cache memory.

They do that owing to the greater number of cores they have. But they lose to a Sandy Bridge i5 2xxx series, which has a similar cache size to the Nehalem based CPUs and a deficit of two cores, even in multithreaded apps.

This is because Sandy Bridge's architecture is far more efficient than Nehalem or AMD's K10.
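
For a heavily multithreaded workload, a back-of-the-envelope throughput estimate is cores x IPC x clock. The sketch below uses hypothetical IPC values only to show how four efficient cores can beat six less efficient ones.

```python
# Throughput model for an embarrassingly parallel workload:
#   throughput ~ cores * IPC * clock.
# IPC values are assumptions for illustration, not measured data.
def throughput(cores, ipc, clock_ghz):
    return cores * ipc * clock_ghz

phenom_ii_x6 = throughput(cores=6, ipc=1.0, clock_ghz=3.2)  # assumed K10-class IPC
core_i5_2500 = throughput(cores=4, ipc=1.6, clock_ghz=3.3)  # assumed Sandy Bridge IPC

print(f"Phenom II X6 : {phenom_ii_x6:.1f} (relative units)")
print(f"Core i5 2500 : {core_i5_2500:.1f} (relative units)")
# A big enough per-core efficiency gap cancels out a two-core deficit.
```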
 

SlashDK

MV VIP to the MAX
Cache is secondary to architecture. As vickybat said, Intel's current architecture is a lot better than AMD's, but that's likely to change soon. The difference in performance is basically: architecture > cache, and cores > clock speed.
 

lastdefenda

Broken In
It's a very generalized query which could be answered as an essay question. I agree with vickybat and Cybertonic, just elaborating. Leaving HPC aside, CPU design has many aspects that affect performance, like TDP, length of the pipeline, etc. Whatever you call it, die size or lithography (for example, Intel uses 32nm while AMD uses 45nm), it affects personal computing (gaming, power efficiency, CPU throttling). In general, a die shrink means better personal computing performance, because all the logical parts in the processor (cache, processing cores, etc.) are closer together, i.e. it works efficiently with fewer resources, and it also allows more transistors. But when the Pentium 4 (Willamette) came out, it was slower than the Pentium III (Tualatin). If you talk about high performance computing, it is a different ball game.
 

desiibond

Bond, Desi Bond!
@satsworld let me try to explain this

Performance of a CPU depends heavily on architecture, and it is not at all easy to bring out a new architecture overnight. If you want to bring out revolutionary changes, it may take years. From 1999 till 2006, AMD had the architectural advantage over Intel. AMD was the first to bring out a 1GHz processor, and they were the first to bring out x86-64 (64-bit processors that can also do 32-bit operations), which is the base of today's 64-bit processors (later adopted by Intel).

By 2004/2005, AMD was steadily eating into Intel's market share. Their K8 architecture (based on K7) brought in many innovations like an on-chip memory controller and HyperTransport, and they got the dual core architecture right. On the other hand, Intel processors were slower, ran really hot, took more power and were not as overclocker friendly as AMD's. As a result, Intel pushed the panic button, killed most of its ongoing projects in the CPU area and started from scratch, the result of which was the revolutionary Core microarchitecture. This was the first time that Intel started concentrating on performance-per-watt instead of working to increase the CPU clock speed. Today's AMD and Intel processors are still based on modified architectures derived from the K8 and Core architectures, and you can get a detailed comparison between the two here: *www.tomshardware.com/forum/178514-28-intel-core-architecture-future

AMD on the other hand had a better idea of how to evolve the CPU to the next level, and they took a huge risk in 2006. They acquired ATI and immediately started working on integrating the CPU and GPU, what we know as Fusion today. Like I said earlier, it can take years to bring out a revolutionary CPU. From 2006 till now, while they were working heavily on the Bulldozer/Fusion architecture, they were bringing out CPUs with minor modifications to the K8 architecture. And since K8 was an evolution of K7, AMD has been hanging onto an architecture from 1999.

Enter 2011: we are going to see a brand new architecture that will change the way we think of a CPU. It may even phase out the word CPU, and we will start using APU to refer to the processing unit. Given their strength in the GPU area, we will soon see the APU itself able to run most games without the need for a GPU on the motherboard or an entry-level or lower midrange dedicated GPU.

Given the way that laptops are eating into the desktop market share, AMD's Bulldozer based APUs can be a huge advantage for laptop manufacturers, as they are going to provide all-round performance, eliminating the need for a dedicated GPU. Llano (the first gen Fusion chip) has shown this recently. Though its CPU core (based on the K10 microarchitecture) is slower than Intel's Sandy Bridge, the GPU part is just spectacular and turns the APU into a superb all-rounder for midrange laptops. This reduces power, reduces heat and increases battery life. This way, one need not pay a heavy price for casual gaming laptops, as they can get the same from laptops priced much lower. Bulldozer will fill the CPU-side gap between AMD and Intel, thus giving the performance crown to AMD APUs. This was already evident from the way AMD took over the performance crown from Intel for netbooks (the Intel Atom is no match for AMD's E-350 right now), and this is going to spread to notebooks and desktops, and finally to servers, in the coming 1 or 2 years.

In short, it's a tick-tock. Intel rules for some timeframe, then AMD takes over, and vice versa. 1999-2006 was AMD's, 2006-2011/2012 will be Intel's, and then let's see how far AMD will go from 2011/2012.
 

lastdefenda

Broken In
@desibond it all depends what you do with your processor. I see the trend moving towards SIMD rather than MIMD architecture, i.e. more towards GPU-like cores in modern processors. Commendable stats by the way. But I don't like the conclusion.

Sent from my GT-S5830 using Tapatalk
 

desiibond

Bond, Desi Bond!
@desibond it all depends what you do with your processor. I see the trend moving towards SIMD rather than MIMD architecture, i.e. more towards GPU-like cores in modern processors. Commendable stats by the way. But I don't like the conclusion.

Sent from my GT-S5830 using Tapatalk

AMD now has the architecture that takes the industry forward, and the reason for the delay is that they are trying to make it more efficient. They will have the upper hand once they bring out the Fusion 2G APUs, but then, with the amount of cash and the R&D budget that Intel has, it remains to be seen how long AMD will keep the advantage. One thing that I just do not understand is Intel's inability to bring out superior graphics processing. Acquiring nVidia can be ruled out, as nVidia is getting stronger and stronger in the lucrative SoC market.

Another superb article on Bulldozer: Real World Technologies - AMD's Bulldozer Microarchitecture
 

ico

Super Moderator
Staff member
A 1.4 GHz Pentium III was faster than a 1.4 GHz Pentium 4. Clock for clock, the Pentium 4 was slower than the Pentium III.
In turn, a 3.0 GHz Pentium 4 was slower than a 2.0 GHz Athlon 64, and a 2.4 GHz Athlon 64 would even laugh at the Pentium Extreme Edition at 3.8 GHz.
Today, a Core i5-2400 @ 3.1 GHz is much faster than a 3.2 GHz Phenom II X4 955.

Architecture matters. Nothing else.
 

lastdefenda

Broken In
Totally agree with you. I hate Intel for starving the poor; people on a budget don't get anything from Intel. They are like Exxon Mobil, just like the way they starve green tech. I expect a knock-out punch from AMD. HPC people are going for nVidia CUDA, not Intel Xeon. I even made a PS3 grid for the IISER Kolkata geophysics department. It gives 10 TFLOPS with 3 PS3s.

Sent from my GT-S5830 using Tapatalk
 

vickybat

I am the night...I am...
AMD on the other hand had a better idea of how to evolve the CPU to the next level, and they took a huge risk in 2006. They acquired ATI and immediately started working on integrating the CPU and GPU, what we know as Fusion today. Like I said earlier, it can take years to bring out a revolutionary CPU. From 2006 till now, while they were working heavily on the Bulldozer/Fusion architecture, they were bringing out CPUs with minor modifications to the K8 architecture. And since K8 was an evolution of K7, AMD has been hanging onto an architecture from 1999.

Enter 2011: we are going to see a brand new architecture that will change the way we think of a CPU. It may even phase out the word CPU, and we will start using APU to refer to the processing unit. Given their strength in the GPU area, we will soon see the APU itself able to run most games without the need for a GPU on the motherboard or an entry-level or lower midrange dedicated GPU.

Given the way that laptops are eating into the desktop market share, AMD's Bulldozer can be a huge advantage for laptop manufacturers, as it is going to provide all-round performance, eliminating the need for a dedicated GPU. Llano (the first gen Fusion chip) has shown this recently. Though its CPU part is slower than Intel's Sandy Bridge, the GPU part is just spectacular and turns the APU into a superb all-rounder for midrange laptops. This reduces power, reduces heat and increases battery life. This way, one need not pay a heavy price for casual gaming laptops, as they can get the same from laptops priced much lower. Bulldozer (Fusion gen2 and a true APU) will fill the CPU-side gap between AMD and Intel, thus giving the performance crown to AMD CPUs. This was already evident from the way AMD took over the performance crown from Intel for netbooks (the Intel Atom is no match for AMD's E-350 right now), and this is going to spread to notebooks and desktops, and finally to servers, in the coming 1 or 2 years.

In short, it's a tick-tock. Intel rules for some timeframe, then AMD takes over, and vice versa. 1999-2006 was AMD's, 2006-2011/2012 will be Intel's, and then let's see how far AMD will go from 2011/2012.

I think you are confusing Fusion and Bulldozer. Bulldozer is not Fusion but purely a CPU codename. It does not have an integrated GPU communicating with the CPU through peripheral interconnects.
The upcoming Zambezi and Komodo cores are purely CPU cores and do not have an embedded GPU.

Llano is Fusion and uses x86-64 CPU cores along with a VLIW5 GPU core on a single die. Current Llano APUs use the older Phenom II architecture codenamed Stars, and the GPU is Radeon 5570 class, dubbed Sumo.

Next year, AMD will launch Trinity, which will have Bulldozer cores for the CPU and probably AMD's next gen GPU cores featuring compute engines rather than VLIW.
So Bulldozer is purely a CPU and not an APU.

Tick-tock is a model adopted by Intel, and it does not apply to AMD.

A tick corresponds to shrinking the semiconductor die to create somewhat identical circuitry using a more advanced fabrication process, usually involving an advance of the lithographic node. It reduces R&D costs, gives better thermals to the chip overall, and has many more advantages.

A tock corresponds to a microarchitectural change.

Every year for Intel is either a tick or a tock, i.e. if this year is a tock (Sandy Bridge), next year will be a tick (Ivy Bridge), and the year after will again be a tock (Haswell), which is a new microarchitecture.

Though AMD also undergoes die shrinks and architectural changes, it does not follow the tick-tock model.
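
A compact way to picture Intel's cadence is to lay the recent generations out as alternating ticks (die shrinks) and tocks (new microarchitectures); the list below is just a summary of the publicly known roadmap around this time.

```python
# Intel's tick-tock cadence around 2008-2013 (tick = die shrink,
# tock = new microarchitecture on the existing node).
intel_cadence = [
    ("Nehalem",      2008, "tock", "45nm"),
    ("Westmere",     2010, "tick", "32nm"),
    ("Sandy Bridge", 2011, "tock", "32nm"),
    ("Ivy Bridge",   2012, "tick", "22nm"),
    ("Haswell",      2013, "tock", "22nm"),
]

for name, year, step, node in intel_cadence:
    print(f"{year}: {name:<12} {step} ({node})")
```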
 

ico

Super Moderator
Staff member
Totally agree with you. I hate Intel for starving the poor; people on a budget don't get anything from Intel. They are like Exxon Mobil, just like the way they starve green tech. I expect a knock-out punch from AMD. HPC people are going for nVidia CUDA, not Intel Xeon. I even made a PS3 grid for the IISER Kolkata geophysics department. It gives 10 TFLOPS with 3 PS3s.

Sent from my GT-S5830 using Tapatalk

I'm an Intel fan on the CPU side, but I'll accept they are absolute morons. Underhand deals with OEMs to not sell the Athlon 64, the gigahertz myth, and what not.
 

desiibond

Bond, Desi Bond!
I think you are confusing Fusion and Bulldozer. Bulldozer is not Fusion but purely a CPU codename. It does not have an integrated GPU communicating with the CPU through peripheral interconnects.

The upcoming Zambezi and Komodo cores are purely CPU cores and do not have an embedded GPU.

Llano is Fusion and uses x86-64 CPU cores along with a VLIW5 GPU core on a single die. Current Llano APUs use the older Phenom II architecture codenamed Stars, and the GPU is Radeon 5570 class, dubbed Sumo.

Next year, AMD will launch Trinity, which will have Bulldozer cores for the CPU and probably AMD's next gen GPU cores featuring compute engines rather than VLIW.

So Bulldozer is purely a CPU and not an APU.

Ah damn, thanks for correcting. Made the changes to the post.
 

lastdefenda

Broken In
OEMs and hardware dealers are real scumbags. Here in Kolkata, all the HP dealers told me "don't buy AMD, it will fry". I bought Lotte Computers because they talk less :). @ico well, not every die shrink is good, because it makes the processor fragile, i.e. with a little miscalculation you can fry the chip.

Sent from my GT-S5830 using Tapatalk
 

vickybat

I am the night...I am...
@ico well, not every die shrink is good, because it makes the processor fragile, i.e. with a little miscalculation you can fry the chip.

Nope, that's wrong buddy. Actually, a die shrink makes way for better circuitry and gives much better thermal limits to a chip than before. So it becomes even more overclock friendly owing to cooler temps than before.

In other words, it becomes more durable and not fragile.
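
The usual first-order reason a shrink runs cooler is the CMOS dynamic power relation P ~ C x V^2 x f. The numbers below are made-up scaling factors just to show the direction of the effect (leakage, which can actually get worse on smaller nodes, is ignored here).

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# Values are normalized, hypothetical scaling factors, not real chip data.
def dynamic_power(capacitance, voltage, freq):
    return capacitance * voltage**2 * freq

old_node = dynamic_power(capacitance=1.0, voltage=1.0, freq=1.0)
# A shrink typically lowers switched capacitance and allows a lower voltage,
# so the same frequency costs less power...
shrunk = dynamic_power(capacitance=0.7, voltage=0.9, freq=1.0)
# ...or that headroom can be spent on a higher clock instead.
shrunk_oc = dynamic_power(capacitance=0.7, voltage=0.9, freq=1.3)

print(f"old node     : {old_node:.2f} (relative)")
print(f"after shrink : {shrunk:.2f} (relative)")
print(f"shrink + OC  : {shrunk_oc:.2f} (relative)")
# Extra voltage grows power quadratically, which is why aggressive
# overvolting can still kill a shrunken chip.
```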
 

lastdefenda

Broken In
I have to power up for you. Lol @vickybat

Sent from my GT-S5830 using Tapatalk

Megahertz myth - Wikipedia, the free encyclopedia
@vickybat

I'm just saying a thinner wafer offers more headroom to overclock via BCLK/QPI/FSB or the multiplier. That's true, but it has more resistance because it's thin, so more voltage can kill the CPU.
All I'm saying is you can't sustain a 5 GHz OC on 32nm Intel SOI, but you can push a Celeron 4 (mPGA478, 2.0 GHz) to 4 GHz.
Look how it dropped: *forums.overclockers.co.uk/showthread.php?t=18227651

If you ask me what to expect from die shrinks, I say use germanium (very costly) or move toward new processing technology like bio-processing and such. When I was a kid I heard a lot of talk about how reproduced neurons from earthworms could be the next big thing, and I'm talking about 2000 AD. PS: the above link is not about your statement, just fuel for the thread topic.
 

asingh

Aspiring Novelist
Basically, via a die shrink the manufacturer makes their product more "enriched". The lithography is modified for enhanced circuitry, less leakage and more efficient processes. More cores can be stamped out of each basic wafer set.
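
As a rough illustration of the "more cores per wafer" point, die count scales with wafer area divided by die area, and die area shrinks roughly with the square of the node ratio. The figures below are assumptions, ignoring edge losses and yield.

```python
import math

# Rough dies-per-wafer estimate: wafer area / die area.
# Die sizes and the ideal scaling assumption are illustrative, not real products.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)  # ignores edge losses and yield

die_45nm = 250.0                      # hypothetical die at 45nm, in mm^2
die_32nm = die_45nm * (32 / 45) ** 2  # ideal area scaling with the node

print(f"45nm: ~{dies_per_wafer(300, die_45nm)} dies per 300mm wafer")
print(f"32nm: ~{dies_per_wafer(300, die_32nm)} dies per 300mm wafer")
```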
 