OK I'll clear up some confusion.
ATI recently launched the X800GT series, and to counter that, NVIDIA launched the GeForce 6800XT (Google it).
As far as raw performance goes, no: a GeForce 6600GT can spank a 6800LE, but not a 6800NU, because the 6800NU has 12 pixel pipelines whereas the 6600GT has only 8.
Also, a quick Google search on SM 3.0 will reveal a lot.
SM 3.0 increases performance on the GeForce 6 and GeForce 7 (though SM 2.0a sometimes seems better for the GeForce 6; either way, both SM 3.0 and 2.0a give good speed boosts over vanilla SM 2.0).
On the other hand, ATI's X800 has SM 2.0b, which also increases performance, but ATI's OpenGL drivers still need a lot of work.
So, as of right now, the better choice IMO would be NVIDIA, though ATI is not far behind. In OpenGL, however, ATI really needs to improve!
In the grand scheme of shader models:
1) Register Combiners - preliminary per-pixel lighting and bump mapping. Supported on the GeForce1, GeForce2 and GeForce4 MX.
2) SM 1.0 - the first shader model, which never made it into DirectX 8. It was a feature of the first Radeon, but since DirectX 8 did not use this model, those features went mostly unused.
3) SM 1.1 (DirectX 8.0) - GeForce3
4) SM 1.3 (DirectX 8.1) - GeForce4 Ti
5) SM 1.4 (DirectX 8.1) - Radeon 8500+
6) SM 2.0 (DirectX 9) - Radeon 9700
7) SM 2.0a (DirectX 9.0b) - GeForce FX. This shader model was created to help the FX cards perform better in DX9 games; using SM 2.0a instead of plain 2.0 can give a performance increase of 20% or more.
8) SM 2.0b (DirectX 9.0c) - Radeon X800. This shader model was created to expose the new pixel shader features of the X800.
9) SM 3.0 (DirectX 9.0c) - GeForce 6800. No further comment needed.