Riva Tuner v2.05


Who

Guess Who's Back
Riva Tuner v2.05 will be out in a few hours. You guys must be thinking: why did I make a thread for software that hasn't come out yet?


*img394.imageshack.us/img394/2158/oclc7.jpg


Just look at this image: yes, this version has the shader overclocking ability, so 8600 GT and above owners, get ready to overclock your underclocked shaders and see some big performance improvement in shader-based games (Oblivion etc.). Also, don't you like the force constant performance option? :D
 

The_Devil_Himself

die blizzard die! D3?
smit said:
Riva Tuner v2.05 will be out in a few hours. [...]
Wow man, great news. Thanks a million. I am gonna love my 8600GT that I am getting next week.
 
OP
Who

Guess Who's Back
Also, RivaTuner 2.05 is out at *www.nvworld.ru/downloads/rivatuner.zip

A Russian site :D


Found a shader-based overclocking guide, enjoy.


NVIDIA G80-based GPU shader clock speed adjustment using 163.67 drivers and RivaTuner 2.05
======================================================================

Overview/Background
---------------------

Prior to NVIDIA driver release 163.67, the shader clock speed was linked to the core clock (aka ROP domain clock) speed and could not be changed independently. The relationship between core and shader domain clock speeds (for most cards) is shown in Table A. Some cards have slightly different set frequency vs. resultant core/shader speeds, so take the table as an illustration of how the shader clock changes with respect to the core clock rather than as precise values. To overclock the shader speed it was necessary to flash the GPU BIOS with a modified version that sets a higher default shader speed.

By way of an example, my EVGA 8800 GTS Superclocked comes from the factory with BIOS-programmed default core and shader speeds of 576 and 1350, respectively. When increasing the core speed, I found 648 to be my maximum stable speed. From Table A, you can see that with a core of 648 the maximum shader speed (owing to the driver-controlled core/shader speed linkage) is 1512. To push it higher you increase the BIOS-set shader speed. For example, with a BIOS set to core/shader 576/1404 (from 576/1350), all linked shader speeds are bumped up by 54 MHz. So now when increasing the core to 648, the maximum shader speed becomes 1512+54=1566. I eventually determined my maximum stable shader speed to be 1674 (achieved with GPU BIOS startup speeds set to 576/1512; overclocking the core to 648 now yields a shader speed of (1512-1350)+1512=1674).
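To make that arithmetic easier to follow, here is a minimal sketch in plain Python (not part of RivaTuner, purely illustrative; the function name is mine) of how a BIOS shader bump shifts the driver-linked shader speeds:

# Rough model of the pre-163.67 behaviour: the driver links the shader
# clock to the core clock, and raising the BIOS default shader speed
# shifts every linked shader speed by the same offset.

BIOS_DEFAULT_SHADER = 1350   # MHz, factory BIOS shader clock on this GTS

def linked_shader_speed(driver_linked_shader, bios_shader):
    """Shader speed actually reached for a given core overclock.

    driver_linked_shader: shader speed the driver would pick for the chosen
                          core clock with the factory BIOS (e.g. 1512 MHz
                          for a 648 MHz core, per Table A).
    bios_shader:          shader default programmed into the BIOS.
    """
    bump = bios_shader - BIOS_DEFAULT_SHADER
    return driver_linked_shader + bump

print(linked_shader_speed(1512, 1350))   # 1512: factory BIOS, core at 648
print(linked_shader_speed(1512, 1404))   # 1566: BIOS bumped by 54 MHz
print(linked_shader_speed(1512, 1512))   # 1674: BIOS bumped by 162 MHz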

However, as of NVIDIA driver release 163.67, the shader clock can now be modified independently of the core clock speed. Here is the announcement by Unwinder:

"Guys, I've got very good news for G80 owners. I've just examined overclocking interfaces of newly released 163.67 drivers and I was really pleased to see that NVIDIA finally added an ability of independent shader clock adjustment. As you probably know, with the past driver families the ForceWare automatically overclocked G80 shader domain synchronicallly with ROP domain using BIOS defined Shader/ROP clock ratio. Starting from 163.67 drivers internal ForceWare overclocking interfaces no longer scale shader domain clock when ROP clock is adjusted and the driver now provides completely independent shader clock adjustment interface. It means that starting from ForceWare 163.67 all overclocking tools like RivaTuner, nTune, PowerStrip or ATITool will adjust ROP clock only.
However, new revisions of these tools supporting new overclocking interfaces will probably allow you to adjust shader clock too. Now I've played with new interfaces and upcoming v2.05 will contain an experimental feature allowing power users to definie custom Shader/ROP ratio via the registry, so RT will clock shader domain together with ROP domain using user defined ratio.
And v2.05 will give you completely independent slider for adjusting shader clock independently of core clock.

Note:

By default this applies to the Vista-specific overclocking interfaces only; Windows XP drivers still provide the traditional overclocking interface adjusting both shader and ROP clocks. However, the XP drivers also contain the optional Vista-styled overclocking interfaces, and you can force RivaTuner to use them by setting the NVAPIUsageBehavior registry entry to 1."

Two big points of note here:
*) The driver's new overclocking functionality is only used *by default* on Vista. Setting RivaTuner's NVAPIUsageBehavior registry entry to 1 will allow XP users to enjoy the new shader speed configurability.
*) With the new driver interface, by default, the shader speed will not change AT ALL when you change the core speed. This is where RivaTuner's new ShaderClockRatio registry value comes in (see below, and the sketch after this list). It can be found under the Power User tab, RivaTuner -> NVIDIA -> Overclocking.
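To keep the old and new behaviour straight, here is a small illustrative Python sketch (the function and parameter names are my own, not RivaTuner's) of what happens to the shader clock when you move the core slider under the new Vista-style interface:

def shader_clock_after_core_change(new_core, current_shader, shader_clock_ratio=None):
    """Model of the new (163.67 Vista-style) interface only.

    With no ShaderClockRatio set, the shader clock stays where it is no
    matter what you do to the core slider.  With a ratio set, RivaTuner
    re-derives the shader clock from the core clock using that ratio.
    """
    if shader_clock_ratio is None:
        return current_shader               # shader does not follow the core at all
    return round(new_core * shader_clock_ratio)

print(shader_clock_after_core_change(648, 1512))        # 1512, unchanged
print(shader_clock_after_core_change(648, 1512, 2.58))  # ~1672, ratio from the worked example below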


Changing the shader clock speed
--------------------------------

On to the mechanics of the new ShaderClockRatio setting in RivaTuner 2.05. Here's more text from Unwinder:

"Guys, I’d like to share with you some more important G80 overclocking related specifics introduced in 163.67:

1) The driver's clock programming routine is optimized and it causes unwanted effects when you're trying to change the shader domain clock only. Currently the driver looks at the ROP domain clock only to see whether clock generator programming has to be performed or not. For example, if your 8800GTX ROP clock is set to 612MHz and you need to change the shader domain clock only (directly or via specifying a custom shader/ROP clock ratio) without changing the current ROP clock, the driver will optimize clock frequency programming, seeing that the ROP clock is not changed, and it simply won't change the clocks, even if the requested shader domain clock has been changed. The workaround is pretty simple: when you change the shader clock, always combine it with a ROP clock change (for example, if your 8800GTX ROP clock is set to 612MHz and you've changed the shader clock, simply reset the ROP clock to the default 576MHz, apply it, then return it to 612MHz again to get the new shader clock applied). I hope that this unwanted optimization will be removed in future ForceWare, and for now please just keep it in mind while playing with shader clock programming using RT 2.05 and 163.67.
2) Currently the Vista driver puts some limitations on the ROP/shader domain clock ratio you're allowed to set. Most likely they are related to the hardware clock generator architecture, and the hardware simply cannot work (or cannot work stably) when the domain clocks are too asynchronous. For example, on my 8800GTX the driver simply refuses to set the clocks with a shader/ROP ratio within the 1.0 – 2.0 range (the default ratio is 1350/575 = 2.34), but it accepts the clocks programmed with a ratio within the 2.3 – 2.5 range. Considering that the driver no longer changes domain clocks synchronically and all o/c tools (RT 2.03, ATITool, nTune, PowerStrip) currently change the ROP clock only, that results in a rather interesting effect: you won't be able to adjust the ROP clock as high as before. Once it gets too far from (or too close to) the shader clock and the shader/ROP clock ratio is out of range, the driver refuses to set such clocks. Many of you have already noticed this effect, seeing that the driver simply stops increasing the ROP clock after a certain dead point with 163.67."

and

"In the latest build of 2.05 (2.05 test 7) I've added an ability of setting ShaderClockRatio to -1, which can be used to force RivaTuner to recalculate desired Shader/ROP ratio automatically by dividing default shader clock by default ROP clock.
So if you set ShaderClockRatio = -1 and change ROP clock with RT, it will increase shader clock using you card's BIOS defined ratio (e.g. 1350/576=2.34 on GTX, 1188/513 = 2.32 on GTS etc). If you wish to go further, you may still override the ratio, for example increase shader clock by specifying greater ratio (e.g. ShaderClockRatio = 2.5)."


Three important points here:
*) The driver currently imposes restrictions on how far the shader clock speed can be moved from where it would have been when linked to the core clock speed under the old drivers (the restriction is suspected to be a hardware limitation rather than a driver software design choice). This means you can't set an arbitrary shader speed which you know your card is capable of and necessarily expect it to work.
*) Setting ShaderClockRatio to the special value of -1 will give you a core/shader speed linkage very similar to the one you had under previous drivers (163.44 and older).
*) When you change the value of ShaderClockRatio, you must also make a core speed change in order for it to take effect. So, for example, you might reduce the core speed a little, apply, and then put it back to how it was and apply again (see the sketch after this list).
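Putting the last two points together, here is a rough Python sketch of the apply sequence; set_core_clock() and apply() are placeholders standing in for the RivaTuner UI actions (move the core slider, press Apply), not a real API, and the 2.3 – 2.5 ratio range is just the figure Unwinder reported for his 8800 GTX:

DEFAULT_CORE = 576   # MHz, BIOS default core clock of the example 8800 GTS
TARGET_CORE = 648    # MHz, the overclocked core speed used in this guide

def set_core_clock(mhz):
    print(f"core slider -> {mhz} MHz")

def apply():
    print("Apply pressed (driver reprograms clocks because the ROP clock changed)")

def nudge_clocks_to_pick_up_new_ratio():
    # After changing ShaderClockRatio in the Power User tab, the driver only
    # reprograms the clock generator when it sees the ROP (core) clock move,
    # so bounce the core clock to force a reprogram:
    set_core_clock(DEFAULT_CORE)   # step 1: drop core back to default
    apply()
    set_core_clock(TARGET_CORE)    # step 2: restore the overclocked core
    apply()                        # new ShaderClockRatio now in effect

def ratio_probably_accepted(core_mhz, shader_mhz, lo=2.3, hi=2.5):
    # The driver rejects clock pairs whose shader/ROP ratio strays too far
    # from the BIOS default; 2.3 - 2.5 is what Unwinder observed on his
    # 8800 GTX, and other cards may well differ.
    return lo <= shader_mhz / core_mhz <= hi

nudge_clocks_to_pick_up_new_ratio()
print(ratio_probably_accepted(612, 1458))   # 2.38 -> True
print(ratio_probably_accepted(612, 1000))   # 1.63 -> False, clocks too asynchronous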



Worked example
----------------

Surprise surprise, back to my EVGA 8800 GTS Superclocked! First off, if you've not already done so, I recommend setting up the RivaTuner monitor to show the core clock, shader clock and memory clock speeds so that you can immediately tell if your core/shader clock changes are having any effect. My setup is Vista with 163.67 drivers. With RivaTuner 2.03, when overclocking the core to 648, the shader would now stick at the bootup default speed of 1512 MHz (see the last paragraph of "Overview/Background" above). If I had blindly run 3DMark06 tests after installing the 163.67 driver, I would've assumed that the new drivers give worse performance, but the RivaTuner graphs show you that the shader is not running at the expected speed.

After installing RivaTuner 2.05, we are now able to set the ShaderClockRatio value to restore a higher shader clock speed. In my case, since I want a shader speed of 1674 when the core is 648, I use 1674/648 = 2.58.
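The ratio itself is nothing more than the target shader speed divided by the target core speed; in plain Python, using my card's numbers (swap in your own):

target_shader = 1674   # MHz, maximum stable shader speed found earlier
target_core = 648      # MHz, maximum stable core speed
print(round(target_shader / target_core, 2))   # 2.58 -> value to enter for ShaderClockRatio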


=======================


Table A
--------

Some cards have slightly different set frequency vs. resultant core/shader speeds, so take the table as an illustration of how the shader clock changes with respect to the core clock rather than as precise values. All frequencies are in MHz.

Set core    | Resultant frequency |
frequency   |   Core      Shader  | ShaderClockRatio
-----------------------------------------------------
  509-524   |   513       1188    | 2.315789474
  525-526   |   513       1242    | 2.421052632
  527-547   |   540       1242    | 2.3
  548-553   |   540       1296    | 2.4
  554-571   |   567       1296    | 2.285714286
  572-584   |   576       1350    | 2.34375
  585-594   |   594       1350    | 2.272727273
  595-603   |   594       1404    | 2.363636364
  604-616   |   612       1404    | 2.294117647
  617-617   |   621       1404    | 2.260869565
  618-634   |   621       1458    | 2.347826087
  635-641   |   648       1458    | 2.25
  642-661   |   648       1512    | 2.333333333
  662-664   |   675       1512    | 2.24
  665-679   |   675       1566    | 2.32
  680-687   |   684       1566    | 2.289473684
  688-692   |   684       1620    | 2.368421053
  693-711   |   702       1620    | 2.307692308
  712-724   |   720       1674    | 2.325
  725-734   |   729       1674    | 2.296296296
  735-742   |   729       1728    | 2.37037037
  743-757   |   756       1728    | 2.285714286
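If you want to experiment with the table programmatically, here is a small Python sketch (only a handful of rows copied from Table A; extend it as needed) that maps a requested core frequency to the core/shader pair the driver actually programs:

# A few rows of Table A: (low, high, resultant_core, resultant_shader), all MHz
TABLE_A = [
    (572, 584, 576, 1350),
    (585, 594, 594, 1350),
    (595, 603, 594, 1404),
    (604, 616, 612, 1404),
    (635, 641, 648, 1458),
    (642, 661, 648, 1512),
]

def resultant_clocks(set_core):
    """Return the (core, shader) pair programmed for a requested core clock."""
    for low, high, core, shader in TABLE_A:
        if low <= set_core <= high:
            return core, shader
    raise ValueError("requested core frequency not in this partial table")

print(resultant_clocks(648))   # (648, 1512)
print(resultant_clocks(580))   # (576, 1350)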
 