A New Graphic Card for my Gear

Status
Not open for further replies.

vickybat

I am the night...I am...
Why not use SSE on the GPU then? Why x87?

Actually buddy, it's using SSE-based instruction sets only and not x87 this time. I couldn't find detailed info, but there are bits and pieces here and there.

See this forum:

PhysX 3.0 - NVIDIA Forums

The guy says it's not using x87 for the GPU anymore. Maybe the GPU too is using SSE. The article ico posted is old and there's no point in following it now.
Maybe NVIDIA made this move so that the latest CPUs can also handle its physics engine efficiently, but I also read that it needs an NVIDIA GPU to work. Until a detailed article is found, everything is a bit vague.

Found a small read:

1) Reads like an advert for NV graphics cards.

2)
Similar to the first set of test results, Radeon video cards suffer poor frame rate speeds when PhysX is enabled. Our Intel Core i7-920 quad-core CPU just doesn't compare to the hundreds of cores available in a graphics processor. Both AMD and NVIDIA products suffer heavily reduced performance when APEX PhysX is processed by the computer's CPU, although there appears to be an unexpected trend: the most powerful GPUs offer the inverse in CPU-processed PhysX performance.

2K Games designed Mafia II using NVIDIA's PhysX 2.8.3 SDK, which supports only single-threaded PhysX CPU processing. PhysX SDK version 2.8.4 supports SSE2 instructions (which are not enabled by default for backwards compatibility), allowing updated games to compute PhysX more efficiently if developers enable the function. Finally, the forthcoming PhysX SDK 3.0 is said by NVIDIA to introduce multi-threaded CPU support to PhysX with SSE enabled by default, which could really change the game for everyone.


How the hell is NVIDIA targeting a wider audience if you need both CPU and GPU to work together? This makes absolutely no sense.
I don't see the real point of NVIDIA PhysX in portable devices either, to be honest, if that's the wider audience they are targeting.

Well friend, you got me wrong. By wider audience, I meant that they are porting the PhysX middleware to game consoles as well. You see, previously, games such as Uncharted 2 used Havok physics.
Now NVIDIA is targeting them. So upcoming games might use PhysX 3.0 and the APEX toolset, which is actually similar to Havok. So that's a wider audience, and not only PC gamers.
 

ico

Super Moderator
Staff member
See this forum:

PhysX 3.0 - NVIDIA Forums

The guy says it's not using x87 for the GPU anymore. Maybe the GPU too is using SSE. The article ico posted is old and there's no point in following it now.
your link said:
The PhysX SDK has been compiled with SSE since the 2.8.4 release (August 16, 2010); GPUs would never benefit from this as they don't support SSE...
Time to read your own links properly and understand them. NVIDIA can't support SSE just like that on GPUs without the necessary circuitry.

My article might be a year old... but whatever it said about NVIDIA PhysX on the GPU still holds true. They intentionally used x87 on the GPU to falsely claim a performance benefit over CPUs.

Game, set and match.
 

vickybat

I am the night...I am...
Well ico, I say it's time to cool off and carry on the discussion in a sane manner. Fighting won't give productive data.

Let's stay calm and put up some valid points instead of flaming NVIDIA or PhysX. :smile:

OK, now we both know that SSE, which is an extension of x86, was first introduced by Intel. SSE and all its extensions, i.e. SSE2, 3 & 4, were optimized for floating-point operations.

Yesterday I was checking a link where a guy posted C++ matrix multiplication code involving floating-point data.

He compiled the code targeting x87, SSE and SSE2. The run times for x87 and SSE were 2373 ms and 2368 ms respectively, which are almost equal. But when he used SSE2, things were significantly faster and the result was 1112 ms.

Check the source please.
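
Here's a minimal sketch of that kind of benchmark (the matrix size, values and timing harness are my assumptions, not the original poster's code). On a 32-bit x86 target, compiling the same source with g++ -O2 -mfpmath=387 versus g++ -O2 -msse2 -mfpmath=sse makes the compiler emit x87 or SSE2 float code, and you can compare the printed times:

#include <chrono>
#include <cstdio>
#include <vector>

// Naive single-precision matrix multiply; the inner loop is where the
// compiler's choice of x87 vs SSE/SSE2 float instructions shows up.
static void matmul(const std::vector<float>& a, const std::vector<float>& b,
                   std::vector<float>& c, int n) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < n; ++k)
                sum += a[i * n + k] * b[k * n + j];
            c[i * n + j] = sum;
        }
}

int main() {
    const int n = 512;  // size chosen arbitrarily for this sketch
    std::vector<float> a(n * n, 1.5f), b(n * n, 2.5f), c(n * n);
    auto t0 = std::chrono::steady_clock::now();
    matmul(a, b, c, n);
    auto t1 = std::chrono::steady_clock::now();
    std::printf("%lld ms\n", static_cast<long long>(
        std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()));
    return 0;
}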

Now the guy says it's possible for PhysX to use SSE2, but he hasn't mentioned whether on the CPU or the GPU.

But remember, NVIDIA has its proprietary API, i.e. CUDA. Through CUDA, NVIDIA GPUs are programmed against the PTX instruction set. Now this instruction set does not contain the SIMD instructions we call SSE, i.e. the Streaming SIMD Extensions.

Now we all know that NVIDIA's architecture utilizes TLP on a grand scale. CUDA basically does SIMD, but in a different manner. Here the threads are divided into groups known as warps. Within a warp, the same sequence of instructions is executed, and when threads diverge on a branch, those instructions are suppressed for the threads that didn't take that path. This gives an illusion of different execution sequences. NVIDIA calls this SIMT, or Single Instruction, Multiple Threads, instead of multiple data.
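
To picture SIMT, here's a tiny CPU-side model (just an illustration I wrote, not NVIDIA code): all 32 lanes of a warp step through both sides of a branch in lockstep, and a predicate mask suppresses the writes of the lanes the branch doesn't apply to:

#include <array>
#include <cstdio>

constexpr int kWarpSize = 32;  // threads per warp on NVIDIA hardware

int main() {
    std::array<int, kWarpSize> data{};
    std::array<bool, kWarpSize> mask{};
    for (int lane = 0; lane < kWarpSize; ++lane) data[lane] = lane;

    // Source-level branch: if (data % 2 == 0) data *= 10; else data = -data;
    // The warp executes BOTH sides; the mask decides which lanes commit.
    for (int lane = 0; lane < kWarpSize; ++lane) mask[lane] = (data[lane] % 2 == 0);
    for (int lane = 0; lane < kWarpSize; ++lane)   // "then" side, mask on
        if (mask[lane]) data[lane] *= 10;
    for (int lane = 0; lane < kWarpSize; ++lane)   // "else" side, mask off
        if (!mask[lane]) data[lane] = -data[lane];

    for (int lane = 0; lane < kWarpSize; ++lane)
        std::printf("lane %2d -> %d\n", lane, data[lane]);
    return 0;
}

Each lane ends up with its own result, yet only one instruction stream was ever issued; that's the illusion of different execution sequences.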

So I think they have done something at the software level to harness SSE instructions.

Now let's wait for a more concrete preview of PhysX 3.0 to get to know the facts in more detail. Besides, I found the above information on Stack Overflow.
 

ico

Super Moderator
Staff member
At the end of the day, PhysX is a gimmick. :)

They intentionally didn't use SSE on the GPU. ;) Edit: Note: SSE means the whole family, not only the first iteration.
So I think they have done something at the software level to harness SSE instructions.
If that is so... it is not worth it. :) There will be a huge performance penalty, as their GPU supports only x87.

Shouldn't be quoting PMs.
ico said:
Most likely it is a wrapper or, in simple words, a translator. To support SSE natively on the GPU, they would need to add suitable circuitry, which they haven't done.
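
A rough sketch of what such a translation layer means in practice (my own CPU-side analogy for illustration, not NVIDIA's actual mechanism): a layer exposes one "SSE-style" operation and falls back to scalar arithmetic when the hardware lacks the instruction, and that fallback is exactly where the penalty comes from:

#include <cstdio>
#if defined(__SSE__)
#include <xmmintrin.h>  // SSE intrinsics, available when the target has SSE
#endif

// One "SSE-style" operation: add four floats at once. With real SSE it
// is a single vector instruction; without it, a translator can only
// issue four scalar adds, and that is the cost a wrapper layer pays.
static void add4(const float* a, const float* b, float* out) {
#if defined(__SSE__)
    _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
#else
    for (int i = 0; i < 4; ++i) out[i] = a[i] + b[i];
#endif
}

int main() {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    add4(a, b, c);
    std::printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}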
 

vickybat

I am the night...I am...
But guys, I don't understand one thing. Now let's say that NVIDIA realized its folly by not supporting SSE, and now they claim to do it in PhysX 3.0.

Or maybe they are eyeing their next-gen GPUs, i.e. Kepler, to support SSE by adding some physical circuitry (as ico said).

Considering the SIMT model, can we assume that GPUs will finally make use of SSE and its extensions? I've got a feeling that they will. Can you people please throw some light? :smile:
 

ico

Super Moderator
Staff member
But guys, I don't understand one thing. Now let's say that NVIDIA realized its folly by not supporting SSE, and now they claim to do it in PhysX 3.0.

Or maybe they are eyeing their next-gen GPUs, i.e. Kepler, to support SSE by adding some physical circuitry (as ico said).

Considering the SIMT model, can we assume that GPUs will finally make use of SSE and its extensions? I've got a feeling that they will. Can you people please throw some light? :smile:
SSE for GPUs? The future is fusion. :mrgreen:
 

Liverpool_fan

Sami Hyypiä, LFC legend
I will post a small comment here: if PhysX were a gimmick, one of the most knowledgeable members of TDF, i.e. cilus, wouldn't have used a dedicated card.
The problem is that to take full advantage of PhysX you need the power of an 8800/9800 card.
It generally enhances the look & feel and, most importantly, the gameplay. The impact PhysX has is much better than Havok or Frostbite; can you deny this?
Game physics is as important as graphics. Remember the first time you played Max Payne 2?

PhysX is a gimmick and plays absolutely no role in a customer's graphics card purchase decision. Please don't drag this up again. Thank you very much.
 

AcceleratorX

Youngling
Why should AMD accept PhysX, which is owned and controlled by NVIDIA?
Why should they trust NVIDIA, who have a rather fishy record?
Why should they look for "hints" with incomplete documentation and no guarantee of first-class support?

I can answer your questions with more questions.

Why should AMD cause inconvenience to developers by not supporting an API which is clearly available to them, even if it runs faster on the competition's products? Why should they force slower software-based physics processing when a faster method is clearly available?

Now I am well aware of the DirectCompute/Havok argument, but here's something to think about: small developers have limited budgets. PhysX is feature-packed, cheap and easy to use compared to something like Bullet. One can have a decent game even with software PhysX.

Can this developer afford Havok? Maybe, maybe not. Now, this developer is hampered by the PhysX product not being accelerated at all on AMD products. Why? Why force someone to spend more on a supposedly better solution just so that AMD cards can have performance parity?

A similar situation with CPUs: Hey, I want to use SSSE3 and SSE4. But wait! AMD's SSE4 is not Intel's SSE4! AMD doesn't support SSSE3! Now I have to code fallback paths so it will work on these CPUs, or be happy using SSE2 and plain SSE3 instead.
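
Here is a sketch of what such a fallback path looks like in code (the kernels are hypothetical stand-ins I made up; the dispatch uses GCC/Clang's __builtin_cpu_supports):

#include <cstdio>

// Hypothetical kernels: one tuned for SSE4.1, one for the SSE2 baseline.
// Real code would use different intrinsics in each; bodies are stand-ins.
static void process_sse41(const float* in, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = in[i] * 2.0f;
}
static void process_sse2(const float* in, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = in[i] * 2.0f;
}

// Runtime dispatch: use the fastest path the CPU actually reports,
// instead of assuming every x86 chip has the same extensions.
static void process(const float* in, float* out, int n) {
#if defined(__GNUC__)
    if (__builtin_cpu_supports("sse4.1")) { process_sse41(in, out, n); return; }
#endif
    process_sse2(in, out, n);  // baseline path that works everywhere
}

int main() {
    float in[4] = {1, 2, 3, 4}, out[4];
    process(in, out, 4);
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}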

What's the consequence? For bigger developers, not much: they have developers, resources and money to spare so that everything works great. Smaller developers, who want to push a product out as fast as possible so that they can post revenue to pay their employees, face the brunt of the problem. The disadvantages: longer development time (coding fallback paths), more testing required (since no support means more bugs), lower performance on AMD hardware possibly leading to lower sales, etc.

Who suffers? The developer, of course. Now do you understand why so many games from smaller developers carry TWIMTBP and Intel tags? ;)

I'm not bashing AMD here, since I have used AMD for a very long time (fully AMD CPU setups since 2003, AMD graphics since 2007), but the fact is that AMD is simply not as open to developers as it should be. It needs to step up its game real fast.

But for us as end users, all of the above pretty much amounts to nothing, since we just care about the graphics performance in our games given the price we pay for the product :)

So if AMD wins there, so be it :)

As for PhysX as a gimmick: it probably is a good marketing tool for NVIDIA, but it's also responsible for giving a good amount of quality to a number of independent and small-budget games (example: see the game "Trine").
 
OP
DARK KNIGHT

The silent Warrior
Guys, enough :argue: discussions about PhysX. I understand :tired: the difference now, and I have finally decided to go with ATI. Please start a new :chinscratch: thread on whether PhysX is a gimmick or not, so everybody will know about the topic and choose the card they like. I have decided to go with the HD 6950 :doublethumb:. Thanks everybody for your suggestions :)) and for guiding me about the topic of PhysX :idea:.
 
OP
DARK KNIGHT

The silent Warrior
Guys, :sad: can you tell me the best online site so that I can buy these cards?
1. MSI R6950 Twin Frozr III PE :evil:
2. Sapphire HD 6950 2 GB :twisted:
Online, because somebody told me they are not available in Nehru Place :shock:.
Please guide me. :razz:
 

ico

Super Moderator
Staff member
Guys, :sad: can you tell me the best online site so that I can buy these cards?
1. MSI R6950 Twin Frozr III PE :evil:
2. Sapphire HD 6950 2 GB :twisted:
Online, because somebody told me they are not available in Nehru Place :shock:.
Please guide me. :razz:
TheITWares - MSI R6950 Twin Frozr III PE/OC 2GB

SMCInternational.in doesn't have it.
 

mithun_mrg

Cyborg Agent
Guys, :sad: can you tell me the best online site so that I can buy these cards?
1. MSI R6950 Twin Frozr III PE :evil:
2. Sapphire HD 6950 2 GB :twisted:
Online, because somebody told me they are not available in Nehru Place :shock:.
Please guide me. :razz:

You can try here also:
Buy Sapphire | Sapphire HD6950 2GB DDR5 PCI Express card | Buy PCI Express card | Buy Graphic card
 
OP
DARK KNIGHT

The silent Warrior
Are these sites trustworthy, and what is the procedure to buy a product on these sites?
Does it require a credit card or online banking?
 

mithun_mrg

Cyborg Agent
These are all trusted sites. The mode of payment depends on you: you can deposit cash in their A/c after confirmation of the order, or pay through NEFT using online banking.
 

Jaskanwar Singh

Aspiring Novelist
^ For Sapphire, this is the one to get:
TheITWares - One Stop for all Gizmos! SAPPHIRE 100312-3SR Radeon HD 6950 Dirt3 Edition 2GB 256-bit GDDR5 PCI Express 2.1 x16 HDCP Ready CrossFireX Support Video Card with Eyefinity
 