SLI loses at least 10% across all games at x8/x8 PCIe 2.0 speeds.
What about having a CFX setup and a third card for PhysX? I know it's not the aim of this thread, but I was curious whether this is a common limitation of the Z68 chipset.
There isn't a difference between x8/x8 and x16/x16 SLI/CFX, at least up to 2560x1600...
There is. Not much with a single GPU or on previous generations, but it's there.
If you are running on a 30" display at 2560x1600 or below, an x8/x8 SLI or CFX configuration will perform the same as a x16/x8 or x16/x16 configuration. The only time that you should even be slightly concerned about running at x8/x8 is when you move up to a multiple display setup. When we pushed the GTX 480 SLI at 5760x1200 we saw up to a 7% difference in performance between x8/x8 and x16/x16, in favor of x16/x16, but that was in one game only.
In SLI there is a big difference.
It depends on how much data you send. HardOCP NEVER truly maxes out any configuration.
A guy I know with two GTX 470s (not on this forum / in this country) sees an 8% loss in SLI at x8/x8 versus x16/x8, and a 10% loss versus x16/x16.
I don't see any comparison that shows a difference.
Well, in the link I gave, all games are at the highest possible settings with 4xAA & 16xAF.
4xAA, but which kind of AA? I run 16xQ CSAA, 4x MSAA and 8x SSAA. Of those, 8x SSAA looks the best but is the most expensive; 16xQ CSAA and 4x MSAA are very close, with 16xQ being quite a bit better.
There are other kinds too, like FXAA (not very common), MLAA (AMD-exclusive and better quality than CSAA), Ubersampling (only found in The Witcher 2 at the moment), etc.
I'd rather take the word of a reviewer (with proof)... No offence.
None taken, but that is hardly proof. There isn't enough data to conclude anything convincingly: they don't mention their driver settings or tweaks (if any), nor whether they used software like Nvidia Inspector.