ATI GPU + NVIDIA PhysX

Do you consider PhysX an important factor?

  • Yes

    Votes: 8 34.8%
  • No

    Votes: 15 65.2%
  • I don't know what it is

    Votes: 0 0.0%

  • Total voters
    23

skeletor

Chosen of the Omnissiah
I never saw an unplayable performance hit in Batman: AA on a GTX 460. You get good, playable framerates.
It is not about being playable. You are taking a good 60-70% hit at the moment, which could be greatly reduced if the effects were implemented the traditional way.

Physics processing units at the hardware level can be employed as separate execution units. These won't get in the way of rendering, just as a traditional CPU has separate ALUs and CUs.
Currently they aren't separate.

Well, not exactly, but an open physics codepath handled by both AMD and NVIDIA GPUs is the way to go, IMO. This would be absolutely neutral, since both GPUs would handle the physics computations and free the CPU for other useful work, and I mentioned before why.
Would be good.
Whatever it is, it should not be processed by the CPU.
Any reason why?
 

vickybat

I am the night...I am...
@mohityadavx

Well, if NVIDIA enters the x86/64 market, then of course it will design its own chipset and motherboard. Have you ever seen an AMD processor fitting an Intel motherboard, or vice versa?

Think properly before posting.
 
OP
mohityadavx


Youngling
Hmm, I don't think so.

For 8k you get an HD 5770, which is much faster.

8800 GTX = HD 4770 level.

Are you sure it's the GTX series?
It cost him 40k at that time.

What would be an apt price?

Actually, I am not buying the card for myself but for a cousin. My PC won't even support the card because of its generic PSU, so I don't want my cousin to later say that I tricked him just to help my friend dump his card.
 

vickybat

I am the night...I am...
It is not about being playable. You are taking a good 60-70% hit at the moment, which could be greatly reduced if the effects were implemented the traditional way.

Like I said, it shouldn't always have to be the traditional way. More optimisation will lead to smaller performance hits.

Currently they aren't separate.

I know that.

Any reason why?

A GPU's ability to process many more floating-point operations per second, its support for integer data types, its unified shader architecture, and a geometry shader stage allow a broader range of algorithms to be implemented. That makes it more capable than a CPU at handling physics computations. I read this in an article in Digit (way back).

But separate physics units in a GPU [something like dedicated PPUs, or the SPEs in Cell (Sony, IBM, Toshiba)] would work wonders.
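To make the data-parallel point concrete, here is a minimal C sketch (my own illustration, not from any post in this thread; the particle count and numbers are arbitrary). Every particle's per-frame update is independent of every other particle's, which is exactly the shape of work a GPU can spread across thousands of shader ALUs, one particle per work-item, while a CPU walks it serially or across a few cores.

```c
/* Minimal sketch: a per-frame physics update on the CPU.
 * Each particle's update touches no other particle, so on a GPU the loop
 * body could simply become one thread / work-item per particle. */
#include <stdio.h>

#define N 4096                      /* particle count, arbitrary for the sketch */

typedef struct { float x, y, z; } vec3;

static vec3 pos[N], vel[N];         /* zero-initialised particle state */

/* One Euler integration step for one particle: no shared state, no ordering. */
static void step_particle(int i, float dt)
{
    vel[i].y += -9.81f * dt;        /* gravity */
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main(void)
{
    float dt = 1.0f / 60.0f;        /* one frame at 60 fps */

    /* The CPU runs the particles one after another; a GPU would launch
     * all N updates at once. */
    for (int i = 0; i < N; i++)
        step_particle(i, dt);

    printf("particle 0 fell to y = %f after one frame\n", pos[0].y);
    return 0;
}
```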
 
Last edited:

skeletor

Chosen of the Omnissiah
Like I said, it shouldn't always have to be the traditional way. More optimisation will lead to smaller performance hits.
That would take 5 years.

A GPU's ability to process many more floating-point operations per second, its support for integer data types, its unified shader architecture, and a geometry shader stage allow a broader range of algorithms to be implemented. That makes it more capable than a CPU at handling physics computations. I read this in an article in Digit (way back).
Honestly speaking, I'm someone who doesn't give a damn about these terms. ;)

GPUs currently are not capable of doing both things on their own - PhysX is a good example. They still can't handle rendering and physics processing together without a massive performance hit. They might be after 5 years, but that is another thing.

In fact, what you have said is the very idea behind Fusion by AMD.


Anyway, a small Google search for "PhysX gimmick" would get you tons of threads and reasons.
 

Joker

Guest
OK, I played Batman: Arkham Asylum with PhysX on. It only had paper flying around and some bullet effects. All of these have been employed in games for a long time.

Same case in Mafia 2. Now your computer can't handle it because you don't have an NVIDIA card? Total gimmick. If it wasn't, it would have been used in every game, which it isn't. As of now, it is a total gimmick. End of.

If it is implemented properly in the future, then it isn't. But today it is, and it's not a deciding factor.
 

vickybat

I am the night...I am...
That would take 5 years.

No, not that much. You will see several improvements in Kepler, which will launch later this year. In 5 years' time, it will only become much more mature. AMD's Southern Islands may also have some tricks up its sleeve.

You sound more like a pessimist.


Honestly speaking, I'm someone who doesn't give a damn about these terms. ;)

GPUs currently are not capable of doing both things on their own - PhysX is a good example. They still can't handle rendering and physics processing together without a massive performance hit. They might be after 5 years, but that is another thing.

In fact, what you have said is the very idea behind Fusion by AMD.


Well, that's your problem. It doesn't matter whether you give a damn about those terms or not. They are going to happen, and CPU physics will no longer be the de facto standard as it is in the current scenario.

What I said has nothing to do with AMD Fusion whatsoever. But maybe we will really see an APU working on physics computations instead of the CPU cores. They might well have a dedicated physics unit in the future.

Go through the Cell Broadband Engine architecture properly and you will know what I am talking about. Its SPEs are much more capable of handling physics computations than traditional CPU cores, and heck, they can even render geometric shapes and figures, which is the job of vertex shaders. They closely resemble a GPU architecturally.

In a nutshell: CPU physics is not the future.

Anyway, a small Google search for "PhysX gimmick" would get you tons of threads and reasons.

Well, it might sound like a gimmick now, but it is the first real step towards GPU physics. When both companies support something similar that is standard for both, it will put the final nail in the coffin (read: CPU physics).
 
Last edited:

skeletor

Chosen of the Omnissiah
What is Crysis 2 using? Battlefield 3? ;)

Nothing about pessimism. As of now, all current implementations are gimmicky - an opinion shared by most.
 

vickybat

I am the night...I am...
OK, I played Batman: Arkham Asylum with PhysX on. It only had paper flying around and some bullet effects. All of these have been employed in games for a long time.

Same case in Mafia 2. Now your computer can't handle it because you don't have an NVIDIA card? Total gimmick. If it wasn't, it would have been used in every game, which it isn't. As of now, it is a total gimmick. End of.

If it is implemented properly in the future, then it isn't. But today it is, and it's not a deciding factor.

Actually, that's what physics does, in case you don't know. You won't get jaw-dropping vistas or scenic atmosphere from in-game physics; it's about objects behaving as they would in the real world.

I am not talking about AMD vs NVIDIA here. Read my previous posts. I am talking about implementing physics on the GPU rather than the CPU. I am favouring both AMD and NVIDIA here. NVIDIA has started and AMD will follow suit. Soon we might see physics handled by both AMD and NVIDIA GPUs.

What is Crysis 2 using? Battlefield 3? ;)

Nothing about pessimism.

I told you already, this is the present scenario. And as you said, there is no de facto GPU physics solution handled by both camps yet. That day isn't far off, though.

We might, or let's say will, see much better implementations of in-game physics than the current CryEngine and Frostbite engines in the future, and that isn't far off either. Since these are vendor-neutral games, they rely on CPU physics because they don't have much option left.

I am damn sure there will be a physics engine in the very near future that will be handled by both AMD and NVIDIA. That's because physics algorithms favour the GPU over the CPU, given the tremendous architectural differences. This isn't a gimmick by any means, and that has been pointed out by many tech experts.

As of now, all current implementations are gimmicky - an opinion shared by most.

Yes, this part is somewhat true, but the implementations, gimmicky as they may be, show us what's in store for the future when GPUs start handling physics.
 
Last edited:

Liverpool_fan

Sami Hyypiä, LFC legend
Yes it is. It doesn't have to favour all games across all platforms. Everything doesn't have to be open source. It's proprietary code and there's nothing wrong with that.
Clearly you are confused between "Open Source" and "Open Standards". Come back when you get your terms right.


^^ That's actually fair, IMO. You are not supposed to use an NVIDIA card for PhysX with a non-NVIDIA card as the primary GPU for rendering. If AMD had been promoting PhysX, it would also not have supported non-AMD cards as the primary GPU. So it's a fair marketing strategy.
So nVidia should dictate how I use the products I bought with my own money? :))
How about Intel locking out nVidia so that you can't use an nVidia GPU with an Intel processor? Oh wait, in that case you are not supposed to use the nVidia graphics card, since you are "not supposed to" :lol:, and it will be a "fair marketing policy" :D
 

ithehappy

Human Spambot
Hmm, I am reading a lot about marketing strategy and blah blah, but I highly doubt how many of those talking about marketing strategy have even a minimal idea of what it is! Marketing strategy is a PRECISE thing; if everyone had an idea about it, then we would have 10,000 or more brands like NVIDIA or AMD. So I guess it's better to comment on performance or other things, but not on business here :).
BTW, I do have some idea about marketing strategy :)
 
OP
mohityadavx


Youngling
@vickybat

@mohityadavx

Well, if NVIDIA enters the x86/64 market, then of course it will design its own chipset and motherboard. Have you ever seen an AMD processor fitting an Intel motherboard, or vice versa?

Think properly before posting.

Actually, I got somewhat confused, but what I wanted to say was this:

So nVidia should dictate how I use the products I bought with my own money? :))
How about Intel locking out nVidia so that you can't use an nVidia GPU with an Intel processor? Oh wait, in that case you are not supposed to use the nVidia graphics card, since you are "not supposed to" :lol:, and it will be a "fair marketing policy" :D

The mind is a strange thing...
 

Liverpool_fan

Sami Hyypiä, LFC legend
Clearly nVidia should give a refund unless the product box has explicitly stated it works only with nVidia cards.
 
OP
mohityadavx

mohityadavx

Youngling
^^ That's not the case, buddy. My friend had to sell his PhysX card dirt cheap as NVIDIA won't allow this (it is also the reason this thread exists). NVIDIA actually used to allow PhysX alongside AMD cards, but later disabled it with a driver update, just like Sony did with the PS3.
 

vickybat

I am the night...I am...
Clearly you are confused between "Open Source" and "Open Standards". Come back when you get your terms right.

I am back. You are right, I mixed up the terms. I meant open standards. Open source is where the developer releases the source code along with the app he develops. On the other hand, an open standard is something that is universally accepted and is not proprietary. Is that right?


So nVidia should dictate how I use the products I bought with my own money? :))
How about Intel locking out nVidia so that you can't use an nVidia GPU with an Intel processor? Oh wait, in that case you are not supposed to use the nVidia graphics card, since you are "not supposed to" :lol:, and it will be a "fair marketing policy" :D

No friend, you did not get my point. Intel is not NVIDIA's competitor in the GPU market, but AMD is. If NVIDIA develops some proprietary standard, you don't expect its competitor to be allowed to use it.

At the hardware level, proprietary usually means patented. There are a number of patents held between Intel and AMD which the other cannot utilize in its chips.
Many things are thought through when preparing a patent or developing something proprietary. It's not just done randomly.

In this case, the PhysX code is proprietary and AMD GPUs cannot process it, which is absolutely fair and is a marketing strategy. Can AMD or NVIDIA GPUs utilize Quick Sync? No, because it's proprietary. The same can be said of CUDA and Stream. Manufacturers do it for the sake of competition.

Now what we want is an open-standard (or universal) codepath which can be utilised by both GPUs. I was telling ico the same thing and he kind of agreed. There should be an open physics engine that can be handled by both GPUs, and the future is leading us there.

PhysX is the first step towards a GPU handling physics. Look at it this way: a GPU is better than a CPU at handling complex physics algorithms. So we can expect more GPU-based physics engines, and it will be sweet if they follow an open standard, which in my view is inevitable.
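As a concrete picture of what such a vendor-neutral codepath could look like, here is a small, hypothetical OpenCL sketch in C (my own illustration, not from this thread; the kernel name, particle count and so on are made up). The same integration kernel source is compiled by whichever OpenCL driver is installed, so it runs unchanged on an AMD or an NVIDIA GPU.

```c
/* Hypothetical sketch: one vendor-neutral GPU physics step via OpenCL.
 * The kernel below compiles on any GPU whose vendor ships an OpenCL driver,
 * which includes both AMD and NVIDIA. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

#define N 4096                                   /* particle count (arbitrary) */

/* OpenCL C kernel: one work-item integrates one particle. */
static const char *kSrc =
    "__kernel void integrate(__global float4 *pos,\n"
    "                        __global float4 *vel,\n"
    "                        const float dt)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    float4 g = (float4)(0.0f, -9.81f, 0.0f, 0.0f);\n"
    "    vel[i] += g * dt;        /* gravity    */\n"
    "    pos[i] += vel[i] * dt;   /* Euler step */\n"
    "}\n";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;

    /* Whatever OpenCL platform exposes a GPU will do - no vendor check. */
    if (clGetPlatformIDs(1, &plat, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* Host-side particle state, copied into device buffers. */
    cl_float4 *pos = calloc(N, sizeof(cl_float4));
    cl_float4 *vel = calloc(N, sizeof(cl_float4));
    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 N * sizeof(cl_float4), pos, &err);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 N * sizeof(cl_float4), vel, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", &err);

    float dt = 1.0f / 60.0f;                     /* one frame at 60 fps */
    clSetKernelArg(k, 0, sizeof(cl_mem), &dpos);
    clSetKernelArg(k, 1, sizeof(cl_mem), &dvel);
    clSetKernelArg(k, 2, sizeof(float), &dt);

    size_t global = N;                           /* one work-item per particle */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, N * sizeof(cl_float4), pos, 0, NULL, NULL);

    printf("particle 0 is at y = %f after one step\n", pos[0].s[1]);
    return 0;                                    /* cleanup omitted for brevity */
}
```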
 
Last edited:

skeletor

Chosen of the Omnissiah
At the hardware level, proprietary usually means patented. There are a number of patents held between Intel and AMD which the other cannot utilize in its chips.
Many things are thought through when preparing a patent or developing something proprietary. It's not just done randomly.

In this case, the PhysX code is proprietary and AMD GPUs cannot process it, which is absolutely fair and is a marketing strategy. Can AMD or NVIDIA GPUs utilize Quick Sync? No, because it's proprietary. The same can be said of CUDA and Stream. Manufacturers do it for the sake of competition.
Improper analogy with Quick Sync there. Quick Sync is, in reality, nothing more than an on-chip H.264 hardware encoder. Many devices use hardware encoders, but you have to pay a royalty to MPEG LA for that. It is still neutral. nVidia and AMD can employ hardware H.264 encoders if they want, just like Intel.

Lastly, Stream is nothing more than OpenCL with AMD's API. OpenCL is an _open_ standard.
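To see the "open standard" point in practice, here is another small, hypothetical sketch (written for illustration, not taken from this thread): it asks the standard OpenCL API to list every GPU it can find, and an NVIDIA card and an AMD card both show up through exactly the same calls, with no vendor-specific path involved.

```c
/* Hypothetical sketch: enumerate GPUs through the open OpenCL standard.
 * Both AMD's and NVIDIA's drivers register themselves as OpenCL platforms,
 * so one neutral codepath sees hardware from either vendor. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;

    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        fprintf(stderr, "no OpenCL platforms installed\n");
        return 1;
    }

    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
            continue;                            /* this platform exposes no GPU */

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("%s : %s\n", pname, dname);   /* e.g. vendor platform : GPU name */
        }
    }
    return 0;
}
```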
 

vickybat

I am the night...I am...
^^ Not that improper, I guess. Of course Quick Sync is an on-chip H.264 encoder, but it is patented not functionally but architecturally. It's a fixed-function piece of silicon that has solely one function, i.e. encoding.

Even if AMD and NVIDIA develop their own dedicated encoders, they cannot violate the architectural patents that Intel holds. The same thing happens in software too: different codepaths achieving the same functionality. Game engines differ in a similar way, and NVIDIA and AMD do too.

At least this is my understanding. Correct me if I am wrong.

*i53.tinypic.com/nnochy.jpg

Now, architecturally, anything employed by NVIDIA or AMD will be different from the above, but it will have the same functionality, i.e. a dedicated block that can encode and decode.


What you said about Stream is very true, and I already knew it. NVIDIA also supports OpenCL.
 
Last edited:

asingh

Aspiring Novelist
What is highly irritating and naive about nVidia is that:

1. They literally pay game developers to write code that renders certain effects only via their hardware. Monopoly.
2. For the game to launch, their own middleware (almost driver-level software) has to be installed as a prerequisite, and it is not generic. Bloatware.
3. If nVidia-specific hardware is not detected, the extra physics gets dumped (partially) onto the CPU. ForceWare (literally their driver name) :)
4. If paired with another company's hardware, the game refuses to run because of lock-in driver mechanisms. Bad consumer experience.

It is what they used to do once upon a time with their nForce boards to get SLI. Now they have woken up and license it openly - post X58. I hate a company with such tactics even if it makes excellent hardware. They are just greedy and want the whole pie. Every company is expansionist, but not through unlawful practices. They will get caught and sued, similar to what happens to Intel every couple of years.

PhysX from nVidia is forcibly slipstreamed into games.
 

vickybat

I am the night...I am...
@ asingh

I couldn't understand the 4th point. Can you please elaborate a bit?

About your first point, paying developers to write code for their hardware is simply a marketing strategy. Even console manufacturers like Sony, Nintendo and Microsoft pay developers to write code for their hardware (read: exclusives), even though there are more multiplatform titles. But exclusives are what set them apart.

Monopoly will exist within competition. AMD does the same in some titles too, I guess.
 
Last edited:

asingh

Aspiring Novelist
@ asingh

I couldn't understand the 4th point. Can you please elaborate a bit?

About your first point, paying developers to write code for their hardware is simply a marketing strategy. Even console manufacturers like Sony, Nintendo and Microsoft pay developers to write code for their hardware (read: exclusives), even though there are more multiplatform titles. But exclusives are what set them apart.

Monopoly will exist within competition. AMD does the same in some titles too, I guess.

nVidia does not allow a consumer to use a GeForce accelerator as a PPU along with a non-nVidia GPU.

Please, console exclusivity for a title is different from refusing to run alongside a certain type of hardware. It is like creating/selling/marketing a cola drink that only people who are 5'5" tall can digest.

Can you show me a game title where AMD/ATI have done something similar - where the game refuses to run unless AMD/ATI software is installed, or their hardware is mandated? Yes, monopoly will exist, but it should be disclosed.
 