Lord FlatHead

GeForce FX benchmark


Apparently AnandTech got these benchmark numbers straight from Nvidia, comparing performance between the GeForce4 Ti 4200, Radeon 9700 and GeForce FX:



46 fps at 1280x1024? The future's so bright, I gotta wear shades!


Nvidia can claim and project whatever performance they like. I only believe independent benchmarks.

Zoost said:

Nvidia can claim and project whatever performance they like. I only believe independent benchmarks.

Yeah... although it's good to see such benchmarks, I'd rather believe independent ones (too).


I for one am amused that anyone would listen to anything that comes not just from the card's manufacturer, Nvidia, but also via AnandTech, the second most unreliable hardware site on the web (the first being Van's Hardware).

I'm also amused that to get to 500 MHz, Nvidia had to strap on an appliance which looks like it's from @*#()$* Black & Decker. Ridiculous.

I think ATI has no reason to be worried at all when they can just strap some DDR-II and a new core revision (or even new drivers) onto their existing cards and equal or beat the GeForce FX's performance.


Katarhyne said:
I think ATI has no reason to be worried at all when they can just strap some DDR-II and a new core revision (or even new drivers) onto their existing cards and equal or beat the GeForce FX's performance.

From a coder's point of view, the GeForce FX brings a lot of freedom and flexibility and could easily outperform the latest Radeon in terms of rendering quality: more color depth, bigger shader programs, etc.
The problem is that it's probably too early, and I doubt any game using these new features (which, by the way, go far beyond the DirectX 9 specification) will come out before ATI arrives with something pretty similar.
Oh, my dear, I accidentally clicked the X on xircon the other day and lost the info you gave me... would you be kind enough to mail it to me so that I can make this master ;)


Yeah, but Julian - think about it. While the GeForce FX DOES go beyond DX9, it doesn't come anywhere near the projected DX10 specs. So what's the use? Even Nvidia themselves admitted that trying to run 65536 shader instructions at once would slow the card to a crawl and make the program a literal slideshow. Really, the GeForce FX's much-vaunted "flexibility" is useless in a real-world situation. ATI did the smart thing by sticking to the specification.

Katarhyne said:

Yeah, but Julian - think about it. While the GeForce FX DOES go beyond DX9, it doesn't come anywhere near the projected DX10 specs. So what's the use? Even Nvidia themselves admitted that trying to run 65536 shader instructions at once would slow the card to a crawl and make the program a literal slideshow. Really, the GeForce FX's much-vaunted "flexibility" is useless in a real-world situation. ATI did the smart thing by sticking to the specification.


And what are the projected DX10 specs?!

Do you know them? I heard some rumours, but nothing is for sure yet, so don't talk about something you yourself know absolutely nothing about!

Sure, there is room to expand if we look a bit further into the future - infinite loops, the ability for shaders to "share" info between cycles, the ability to read from and write to pointers, etc. - but given the parallel nature of GPUs, I doubt those restrictions will be removed for quite a while...

So the way I see it, dynamic branching in PS is the one BIG thing (much more so than in VS), and it's already covered by the PS 3.0 standard in DX9.

We shall see what DX10 brings us, since the target of providing the same instruction set for both vertex and fragment processing has already been achieved: there is not much difference between them other than the stream they operate on (I'm talking about the VS/PS 3.0 specifications, and the NV30 even goes beyond some of the features described there, and vice versa!).
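
To make the dynamic-branching point above concrete, here is a minimal C++ sketch (purely illustrative, not from the thread; the function names and the lighting math are invented) of the difference between a true per-pixel branch, as PS 3.0-style flow control allows, and the evaluate-both-paths-then-select approach that branch-less pixel shader hardware is forced into:

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Color { float r, g, b; };

// With dynamic branching (PS 3.0-style flow control): the expensive lighting
// math runs only for the pixels that actually need it.
Color shadeWithBranch(bool inShadow, float nDotL) {
    if (inShadow)
        return {0.05f, 0.05f, 0.05f};                      // cheap path: flat ambient
    float spec = std::pow(std::max(nDotL, 0.0f), 32.0f);   // "expensive" specular term
    return {nDotL + spec, nDotL + spec, nDotL + spec};
}

// Without dynamic branching: every pixel pays for the expensive term, and a
// mask selects the result afterwards - both paths are always evaluated.
Color shadeWithoutBranch(bool inShadow, float nDotL) {
    float spec = std::pow(std::max(nDotL, 0.0f), 32.0f);   // evaluated even when shadowed
    float lit = nDotL + spec;
    float mask = inShadow ? 0.0f : 1.0f;
    float c = 0.05f * (1.0f - mask) + lit * mask;
    return {c, c, c};
}

int main() {
    Color a = shadeWithBranch(false, 0.7f);
    Color b = shadeWithoutBranch(false, 0.7f);
    std::printf("branch: %.3f  no-branch: %.3f\n", a.r, b.r);
}

On wide SIMD hardware the branching version presumably only pays off when neighbouring pixels tend to take the same path, which may be part of why exposing it has waited for PS 3.0-class parts.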

Katarhyne said:

Yeah, but Julian - think about it. While the GeForce FX DOES go beyond DX9, it doesn't come anywhere near the projected DX10 specs. So what's the use? Even Nvidia themselves admitted that trying to run 65536 shader instructions at once would slow the card to a crawl and make the program a literal slideshow. Really, the GeForce FX's much-vaunted "flexibility" is useless in a real-world situation. ATI did the smart thing by sticking to the specification.

The GFX is the first card that supports shaders complex enough to accelerate any 3D package's material system, where hundreds of passes aren't that uncommon. Since 60 fps isn't really needed, and even a slideshow is acceptable while previewing shader changes, this thing could at long last give artists the chance to use virtual shading viewports without a two- or three-minute wait between parameter changes. Of course extra coding is required, but every major developer has already started to include pixel-shader-powered viewports in their apps.
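
As a rough illustration of the "hundreds of passes" idea (a sketch only, with invented names, not code from any actual package), a layered material preview boils down to accumulating many per-pass contributions into the framebuffer, and doing that slowly is perfectly fine for previewing:

#include <cstdio>
#include <vector>

// One "pass" of a layered material: a weight and a flat colour contribution.
// A real material system would run a shader per pass; this only shows the
// accumulate-over-many-passes structure being described above.
struct MaterialPass { float weight, r, g, b; };

struct Pixel { float r = 0, g = 0, b = 0; };

// Evaluate a framebuffer by accumulating every pass for every pixel.
// Hundreds of passes are acceptable for a preview even if it is a slideshow.
void accumulatePasses(std::vector<Pixel>& framebuffer,
                      const std::vector<MaterialPass>& passes) {
    for (const MaterialPass& p : passes)
        for (Pixel& px : framebuffer) {
            px.r += p.weight * p.r;
            px.g += p.weight * p.g;
            px.b += p.weight * p.b;
        }
}

int main() {
    std::vector<Pixel> fb(64 * 64);
    std::vector<MaterialPass> passes(200, {0.005f, 1.0f, 0.8f, 0.6f});  // 200 passes
    accumulatePasses(fb, passes);
    std::printf("pixel 0: %.2f %.2f %.2f\n", fb[0].r, fb[0].g, fb[0].b);
}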

While you might not find the GFX's specifications useful, the Quadro5 is a dream come true.


Oh, I agree wholeheartedly that the technology has its uses in the professional 3D field. But I'm not a professional, and I don't care about that. I care about its applications for gaming, and for gaming the GeForce FX really offers nothing over the R300 core unless it actually goes to market with that Black & Decker apparatus attached to it, and even then it will still only have a very marginal speed advantage over the Radeon 9700.

Dima, for your information, some of us are intelligent enough to form educated guesses at what Microsoft will want for DX10, yes. And I can promise you, the GeForce FX isn't going to meet it. Not even almost. Granted, it will be closer than the Radeon 9700, but close only counts in horseshoes and hand grenades.

Katarhyne said:

Oh, I agree wholeheartedly that the technology has its uses in the professional 3D field. But I'm not a professional, and I don't care about that. I care about its applications for gaming, and for gaming the GeForce FX really offers nothing over the R300 core unless it actually goes to market with that Black & Decker apparatus attached to it, and even then it will still only have a very marginal speed advantage over the Radeon 9700.

Dima, for your information, some of us are intelligent enough to form educated guesses at what Microsoft will want for DX10, yes. And I can promise you, the GeForce FX isn't going to meet it. Not even almost. Granted, it will be closer than the Radeon 9700, but close only counts in horseshoes and hand grenades.


Perhaps some of you are indeed educated enough, although I'm not too sure...

Guesses are only that - guesses!

I can make tons of guesses about DX10: that it's actually bound to be introduced in time for the release of Longhorn, and that the PS/VS 3.0 specifications were included in DX9 so no further DX versions would need to be released until Longhorn ships... but again, I'm just guessing, or am I? :)

doomedout said:

Dima...wtf?


This is a technical forum m8... I assume technical mumbo jumbo is welcome here :)

Dima said:

This is a technical forum m8... I assume technical mumbo jumbo is welcome here :)

lol

I bet most of us (including me) didn't understand a thing from that...

Katarhyne said:

Dima, for your information, some of us are intelligent enough to form educated guesses at what Microsoft will want for DX10, yes. And I can promise you, the GeForce FX isn't going to meet it. Not even almost. Granted, it will be closer than the Radeon 9700, but close only counts in horseshoes and hand grenades.


What features will DX10 (by your guess) implement that would not be implementable or emulatable by the GFFx?


You can emulate anything. But emulation is essentially worthless in real-time rendering. Case in point - try to run 3DMark2001's "Nature" demo without programmable T&L hardware. I've seen it, and it's not a pretty sight.

It staggers me to think that you people would expect DX10 to be such a small jump from DX9 that a DX9+ card could meet DX10's expectations. My original argument still stands - there's no need to go beyond the specification unless you have a hope of meeting the next specification, something the GeForce FX will not do.

Katarhyne said:

You can emulate anything. But emulation is essentially worthless in real-time rendering. Case in point - try to run 3DMark2001's "Nature" demo without programmable T&L hardware. I've seen it, and it's not a pretty sight.


That's an extremely good point. Which brings me back to my question: what feature is DX10 going to have that the NV30 cannot (OK, reasonably) implement or emulate? The jump from fixed-function pipeline stuff to programmable pipelines was a huge shift in how graphics are approached. Do you anticipate another such shift in DX10? What is it? (I'm not being snotty - if you know something we don't, do share!)

Right now, it's looking like the upcoming 3D cards can do pretty much the same things that offline graphics packages can do, at interactive speeds. The only real jumps I can think of would be peripheral extensions to this, such as the ability to iteratively sample neighboring pixels for things like sub-surface scattering, or some form of interactive radiosity that I haven't heard of. But all these things are quite subtle, and aren't even used that extensively in the CG films we see in the theatre. Perhaps true per-pixel geometry tessellation (that would be nice).

It's just that all these things are a bit minor for the moment - they'd be nice, but I don't see developers abandoning the 9700 or FX functionality anytime soon to design marketable engines around these features. As for DX10 having these features and making the FX redundant: eventually some API (DX10 or beyond) will, of course, as with all hardware, but not for a long while. Certainly long enough for the FX to provide enough superior performance and quality in the marketplace to warrant its creation. Very few games, even today, take full advantage of even DX8.
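
For what it's worth, "iteratively sampling neighboring pixels" just means a per-pixel gather over a neighborhood; the toy C++ box filter below (invented names, a software image, no GPU involved) shows the access pattern such an effect would need for every pixel:

#include <cstdio>
#include <vector>

// Toy single-channel image, stored row-major.
struct Image {
    int width, height;
    std::vector<float> pixels;
    float at(int x, int y) const {
        // Clamp to the edge so border pixels can still gather a full neighborhood.
        x = x < 0 ? 0 : (x >= width ? width - 1 : x);
        y = y < 0 ? 0 : (y >= height ? height - 1 : y);
        return pixels[y * width + x];
    }
};

// Average a (2r+1) x (2r+1) neighborhood around every pixel - the kind of
// per-pixel neighbor gather a scattering/blur-style effect would require.
Image boxGather(const Image& src, int r) {
    Image dst{src.width, src.height, std::vector<float>(src.pixels.size())};
    for (int y = 0; y < src.height; ++y)
        for (int x = 0; x < src.width; ++x) {
            float sum = 0.0f;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx)
                    sum += src.at(x + dx, y + dy);
            dst.pixels[y * dst.width + x] = sum / float((2 * r + 1) * (2 * r + 1));
        }
    return dst;
}

int main() {
    Image img{4, 4, std::vector<float>(16, 1.0f)};
    img.pixels[5] = 0.0f;                 // one dark pixel in a bright image
    Image out = boxGather(img, 1);
    std::printf("blurred center: %.3f\n", out.at(1, 1));
}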


Oh, I don't mean to say I think the GeForce FX will be rendered 'redundant', so to speak. All I'm saying is that they wasted a lot of transistors and a lot of energy producing a card whose real-world capabilities are really not much more impressive than the Radeon 9700's - sad, considering that the card will have been over six months late when it comes out - six months after the Radeon 9700 was released.


Whoa, so Doom 3 is final now? Cool. Anyway, those numbers sound very similar to a certain alpha version floating around, which had a specific path for the NV30 GPU, while the R300 had to share a path with the R200 & Parhelia.

I'd love to see how the 196 MHz NV30 does in Doom 3, because until it gets to around 400 MHz the card probably won't be much competition - as I said, it's at 196 MHz now, and needs 500 MHz or so to edge out the R9700 Pro. Good luck with that one, nV.

As for the programming aspects, there are a "few" advantages, although they are quite small. Programmers could possibly include a fall-back path for cards that don't match the NV30's instruction limits, so the NV30 (or a similarly capable card) renders in "full" quality while everything else uses the fall-back path. The chances of this are VERY small though :P. BTW, the card falls short of VS & PS 3.0, which are rumoured for DX 9.1, so no, it's not a DX10 card...
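
A hedged sketch of what that kind of capability-driven fall-back path could look like in C++ against Direct3D 9: the RenderPath names and the instruction-slot threshold are invented for illustration, though D3DCAPS9, GetDeviceCaps, PixelShaderVersion and PS20Caps.NumInstructionSlots are real DirectX 9 fields, and 96 is the PS 2.0 baseline instruction count.

#include <windows.h>
#include <d3d9.h>

// Hypothetical render paths, named for illustration only.
enum RenderPath { PATH_FIXED_FUNCTION, PATH_PS20_BASELINE, PATH_PS20_EXTENDED };

// Pick a path from the capabilities the driver reports, rather than sniffing
// for a specific chip such as the NV30.
RenderPath chooseRenderPath(IDirect3DDevice9* device) {
    D3DCAPS9 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return PATH_FIXED_FUNCTION;

    if (caps.PixelShaderVersion < D3DPS_VERSION(2, 0))
        return PATH_FIXED_FUNCTION;              // no PS 2.0 support at all

    // Parts that expose more than the PS 2.0 baseline of 96 instruction slots
    // (the "extended" 2.0+ hardware being argued about here) get the
    // long-shader, full-quality path; everything else uses the fall-back.
    if (caps.PS20Caps.NumInstructionSlots > 96)
        return PATH_PS20_EXTENDED;

    return PATH_PS20_BASELINE;
}

Keying the choice off reported caps rather than a specific chip ID would keep the "full quality" path open to any future card that exposes the same limits.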

Katarhyne said:

I think ATI has no reason to be worried at all when they can just strap some DDR-II and a new core revision (or even new drivers) onto their existing cards and equal or beat the GeForce FX's performance.

Exactly what I've been saying. A 350 MHz R300 core already took away nV's advantage at vertex shading, not to mention quite a lot of other things. And BTW, my R9700 clocks to 400 out of the box; I'd like to see a GF FX get over 505 (assuming it comes clocked at 500), since the card will indeed be PUSHED TO THE MAX to try and stay competitive, because it does much less work per cycle (sorta like Intel :D).

Ah well, seems nV is just throwing around more bullshit again. At least they gave me one satisfying product though - go out and buy your nForce2 now, it's an overclocker's dream! :)


I saw the demo movies for the FX card - pretty fucking awesome.
But when you think about it, how many developers have people who can even write shaders and shit? People like that work in the movie FX biz, and I'm guessing they make a lot more dough than most people in the game biz.
Games don't have the kind of production values that 3D animation has.
When a studio creates an animation, the visuals are everything. For game developers, the visuals aren't the only concern - you have many other facets of the product. I don't expect to see many PC developers outside of id Software bothering with the fancy new features of the FX.


That's just the point, though, DaJuice - Nvidia's Cg supposedly makes shaders much less of a pain in the ass to work with, but really, they're not that bad to begin with. Anyone who's ever worked on a Q3A-engine game, or even a single map or model, has had the opportunity to work with some very basic shaders.

In any event, game companies, even the little ones like Wizardworks, will have to eventually learn shaders or they will fall by the wayside. People want games that look good, and shaders are, for now, the way of the future.

Katarhyne said:

That's just the point, though, DaJuice - Nvidia's Cg supposedly makes shaders much less of a pain in the ass to work with, but really, they're not that bad to begin with. Anyone who's ever worked on a Q3A-engine game, or even a single map or model, has had the opportunity to work with some very basic shaders.

Someone at id coded the support for that. Cg is meant to replace those home-made script handlers with a unified method.


Sigh. I'd hoped this thread had died a quiet death. I see that it has not.

Cg is a marketing tool. And, I'm sure there are lots of game companies distributing or otherwise sharing shader frontends. I'm aware that coding shaders straight to the metal is pretty hardcore, like writing ASM code by hand, but I really don't think anyone does it that way anyway.

Alientank said:

Forget the past 25 replies. Nvidia's card is ahead of the Radeon right now. Period.


/me waits for Kat


...You realize that post is like saying "BAN ME NOW BECAUSE I HAVE NOTHING VALID TO ADD TO ANY CONVERSATION", right?

Katarhyne said:

...You realize that post is like saying "BAN ME NOW BECAUSE I HAVE NOTHING VALID TO ADD TO ANY CONVERSATION", right?


Arg, I'm not trying to start anything up, sorry. I shouldn't have said that. You do have a lot of valid points and all, but I mean, facts are facts: the Nvidia card will pass the 9700 Pro, so why bother arguing about the next ATI card, which isn't even in development yet?

