Speed of Core 2 Duo

Status
Not open for further replies.

fun2sh

Pawned!... Beyond GODLIKE
I still don't get why the C2D is faster than other processors. Can someone elaborate in detail, with some technical points? I read a lot on Wikipedia about the C2D but still don't get the reason. I understand that the C2D is much faster than others, but not why. Please, someone explain it in detail!
 

faraaz

Evil Genius
If you have to write 2000 lines of "I don't know why C2D is the fastest" alone, it might take you 12 hours.

If you rope in one of your mates to help you out, you may each finish your share in, say, 6 hours total. Whoa... what if he's actually faster than you are? He may take the lion's share and you could end up finishing in 5 hours.

The example I have given is not a DIRECT analogy, but at least it goes part of the way toward explaining how the C2D works and why it is so fast. As for why Intel's dual cores are superior to AMD's, I have no idea... (>.<)
 

BULLZI

Ambassador of Buzz
fun2sh said:
I still don't get why the C2D is faster than other processors. Can someone elaborate in detail, with some technical points? I read a lot on Wikipedia about the C2D but still don't get the reason. I understand that the C2D is much faster than others, but not why. Please, someone explain it in detail!

The Core architecture is the opposite of a "speed demon" (a design that chases high clock rates with a long, narrow pipeline); it's a "brainiac" instead. Core has a relatively short 14-stage pipeline, but it's very "wide," with ample execution resources aimed at handling lots of instructions at once. Core is unique among x86-compatible processors in its ability to fetch, decode, issue and retire up to four instructions in a single clock cycle. Core can even execute 128-bit SSE instructions in a single clock cycle, rather than the two cycles required by previous architectures. In order to keep all of its out-of-order execution resources occupied, Core has deeper buffers and more slots for instructions in flight.

Like other contemporary PC processors, Core translates x86 instructions into a different set of instructions that its internal, RISC-like core can execute. Intel calls these internal instructions micro-ops. Core inherits the Pentium M and Core Duo's ability to fuse certain micro-op pairs and send them down the pipeline for execution together, a provision that can make the CPU's execution resources seem even wider than they are. To this ability, Core adds the capability to fuse some pairs of x86 "macro-ops," such as compare and jump, that commonly occur together. Not only can these provisions enhance performance, but they can also reduce the amount of energy expended in order to execute an instruction sequence.

Another innovation in Core is a feature Intel has somewhat cryptically named memory disambiguation. Most modern CPUs speculatively execute instructions out of order and then reorder them later to create the illusion of sequential execution. Memory disambiguation extends out-of-order principles to the memory system, allowing loads to be moved ahead of stores in certain situations. That may sound like risky business, but that's where the disambiguation comes in: the memory system uses an algorithm to predict which loads can safely move ahead of stores, removing the ambiguity. This optimization can pay big performance dividends.

For more info: techreport.com/reviews/2006q3/core2/index.x?pg=2
 