It's got more to do with the video card than with the processor. Anyone can see that the AMD64s beat the living s**t out of the P4s when you pump up the resolution and the eye candy.
And it's about a lot more than newer drivers or shader profile programming. The very architecture of the Nvidia cards is biased towards the 3DMark benchmark process. I don't think you guys would have forgotten how sneaky the Nvidia guys got with 3DMark03 and the cheating in the rendering process. Bl00dy cheating %@#%^&*@!@)
You have to understand that the Nvidia cards support hardware-accelerated depth maps and depth stencil textures (DSTs), which aren't part of the DX9.0 specification. However, 3DMark05 scans the target hardware, and if it finds that it can use DST, it goes ahead and enables it. Also, when it comes to filtering, Nvidia's Percentage Closer Filtering (PCF) is pretty much non-standard: the ATI cards have to run extra shader instructions to get the same result, while the Nvidias get it bundled for free in hardware.
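To make it concrete, here's roughly what that hardware scan looks like in plain D3D9. This is just a sketch of the well-known detection trick, not 3DMark's actual code; pD3D and SupportsDST are names I made up. The point is that a depth format created as a texture only succeeds on cards with the Nvidia DST extension:

// Probe for DST support, assuming you already created an IDirect3D9*.
// Nvidia exposes DSTs by letting you create a TEXTURE in a depth format;
// on cards without the extension this capability check simply fails.
#include <d3d9.h>

bool SupportsDST(IDirect3D9* pD3D)
{
    HRESULT hr = pD3D->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,       // display/adapter format
        D3DUSAGE_DEPTHSTENCIL, // we want to render depth into it...
        D3DRTYPE_TEXTURE,      // ...and later bind it as a texture
        D3DFMT_D24S8);
    return SUCCEEDED(hr);
}

If that call succeeds, the benchmark takes the DST shadow-map path; if not, you get the generic path that everyone else has to run.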
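And here's why the PCF part matters. This is a CPU-side sketch of the 2x2 percentage-closer filter, just to show the math: the compare-then-blend below is what an ATI card has to spell out in pixel-shader instructions per shadow tap, while Nvidia hardware does the whole thing automatically when you sample a DST. All the names (SamplePCF, shadowMap, etc.) are mine, purely for illustration:

// 2x2 percentage-closer filtering: compare each depth tap against the
// reference depth FIRST, then bilinearly blend the pass/fail results.
#include <cmath>

float SamplePCF(const float* shadowMap, int W, int H,
                float u, float v, float refDepth)
{
    // Map texture coords to texel space and find the 2x2 footprint.
    float x = u * W - 0.5f, y = v * H - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;

    // One depth compare per tap: 1 = lit, 0 = in shadow (clamped at edges).
    auto tap = [&](int tx, int ty) {
        tx = tx < 0 ? 0 : (tx >= W ? W - 1 : tx);
        ty = ty < 0 ? 0 : (ty >= H ? H - 1 : ty);
        return refDepth <= shadowMap[ty * W + tx] ? 1.0f : 0.0f;
    };

    // Four taps plus the blend math: that's the per-pixel cost the ATI
    // shader pays, and the cost the Nvidia hardware hides.
    return (1 - fx) * (1 - fy) * tap(x0,     y0)
         + fx       * (1 - fy) * tap(x0 + 1, y0)
         + (1 - fx) * fy       * tap(x0,     y0 + 1)
         + fx       * fy       * tap(x0 + 1, y0 + 1);
}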
For a TRUE comparison between ATI and Nvidia you have to turn off DST in the 3DMark interface, but somehow I doubt the guys posting the top scores on the ORB actually did that. So in the end Futuremark actually is a bit biased towards Nvidia, and Nvidia in turn bakes in a lot of shortcuts to get better scores in the benchmark. Why else would an Nvidia-only feature be allowed to make it into the final build and influence the final score?