Dima

Stuff happened after the techdemo


I decided to notify all of you about the events that occurred after the incredible new Doom technology presentation by John Carmack:

1) John Carmack updated his .plan file, where he shared his thoughts about the new GeForce 3 card from Nvidia (if you've seen it already, move on):

Name: John Carmack
Email: johnc@idsoftware.com
Description: Programmer
Project:
Last Updated: 02/22/2001 21:02:26 (Pacific Standard Time)
-------------------------------------------------------------------------------
Feb 22, 2001
------------
I just got back from Tokyo, where I demonstrated our new engine
running under MacOS-X with a GeForce 3 card. We had quite a bit of
discussion about whether we should be showing anything at all,
considering how far away we are from having a title on the shelves, so
we probably aren't going to be showing it anywhere else for quite
a while.

We do run a bit better on a high end wintel system, but the Apple
performance is still quite good, especially considering the short amount
of time that the drivers had before the event.

It is still our intention to have a simultaneous release of the next
product on Windows, MacOS-X, and Linux.


Here is a dump on the GeForce 3 that I have been seriously working
with for a few weeks now:

The short answer is that the GeForce 3 is fantastic. I haven't had such an
impression of raising the performance bar since the Voodoo 2 came out, and
there are a ton of new features for programmers to play with.

Graphics programmers should run out and get one at the earliest possible
time. For consumers, it will be a tougher call. There aren't any
applications out right now that take proper advantage of it, but you should
still be quite a bit faster at everything than GF2, especially with
anti-aliasing. Balance that against whatever the price turns out to be.

While the Radeon is a good effort in many ways, it has enough shortfalls
that I still generally call the GeForce 2 ultra the best card you can buy
right now, so Nvidia is basically dethroning their own product.

It is somewhat unfortunate that it is labeled GeForce 3, because GeForce
2 was just a speed bump of GeForce, while GF3 is a major architectural
change. I wish they had called the GF2 something else.

The things that are good about it:

Lots of values have additional internal precision, like texture coordinates
and rasterization coordinates. There are only a few places where this
matters, but it is nice to be cleaning up. Rasterization precision is about
the last thing that the multi-thousand dollar workstation boards still do
any better than the consumer cards.

Adding more texture units and more register combiners is an obvious
evolutionary step.

An interesting technical aside: when I first changed something I was
doing with five single or dual texture passes on a GF to something that
only took two quad texture passes on a GF3, I got a surprisingly modest
speedup. It turned out that the texture filtering and bandwidth was the
dominant factor, not the frame buffer traffic that was saved with more
texture units. When I turned off anisotropic filtering and used
compressed textures, the GF3 version became twice as fast.
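
As an aside from the editor, here is a minimal sketch (not from the .plan) of the compressed-texture side of that experiment, assuming the GL_EXT_texture_compression_s3tc tokens from glext.h and an existing GL context; textureId, width, height, and pixels are placeholders:

    // Upload ordinary RGBA pixels but ask the driver to store them as DXT1,
    // which cuts the texture fetch bandwidth that dominated the passes above.
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);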

The 8x anisotropic filtering looks really nice, but it has a 30%+ speed
cost. For existing games where you have speed to burn, it is probably a
nice thing to force on, but it is a bit much for me to enable on the current
project. Radeon supports 16x aniso at a smaller speed cost, but not in
conjunction with trilinear, and something is broken in the chip that
makes the filtering jump around with triangular rasterization
dependencies.
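
For reference, a minimal sketch (editor's illustration, not from the .plan) of forcing anisotropic filtering through the GL_EXT_texture_filter_anisotropic extension, assuming its glext.h tokens and an existing GL context:

    // Combine trilinear filtering with up to 8x anisotropy on one texture.
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                    maxAniso < 8.0f ? maxAniso : 8.0f);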

The depth buffer optimizations are similar to what the Radeon provides,
giving almost everything some measure of speedup, and larger ones
available in some cases with some redesign.

3D textures are implemented with the full, complete generality. Radeon
offers 3D textures, but without mip mapping and in a non-orthogonal
manner (taking up two texture units).
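
To make "full generality" concrete, here is a minimal sketch from the editor (not from the .plan) of a mip-mapped 3D texture uploaded through the standard OpenGL 1.2 entry point; volumeTex and mipData are placeholders:

    // Upload a 64x64x64 RGBA volume with a complete mip chain, so the
    // hardware can filter between mip levels in 3D.
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    for (int level = 0, size = 64; size >= 1; size >>= 1, ++level) {
        glTexImage3D(GL_TEXTURE_3D, level, GL_RGBA8, size, size, size,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, mipData[level]);
    }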

Vertex programs are probably the most radical new feature, and, unlike
most "radical new features", actually turn out to be pretty damn good.
The instruction language is clear and obvious, with wonderful features
like free arbitrary swizzle and negate on each operand, and the obvious
things you want for graphics like dot product instructions.

The vertex program instructions are what SSE should have been.

A complex setup for a four-texture rendering pass is way easier to
understand with a vertex program than with a ton of texgen/texture
matrix calls, and it lets you do things that you just couldn't do hardware
accelerated at all before. Changing the model from fixed function data
like normals, colors, and texcoords to generalized attributes is very
important for future progress.
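
To give a flavor of the instruction language, here is a minimal sketch from the editor (assuming the NV_vertex_program extension and its glext.h tokens; this is not code from the engine): a trivial program that transforms the position with four DP4s and shows the free swizzle and negate on an operand:

    // c[0]..c[3] are set to track the concatenated modelview-projection
    // matrix; the texcoord is passed through with a (pointless, but free)
    // swizzle and negate just to show the syntax.
    static const char *vp =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[TEX0], -v[TEX0].yxzw;\n"
        "END\n";

    GLuint prog;
    glGenProgramsNV(1, &prog);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, prog);
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, prog,
                    (GLsizei)strlen(vp), (const GLubyte *)vp);
    glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,
                    GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);
    glEnable(GL_VERTEX_PROGRAM_NV);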

Here, I think Microsoft and DX8 are providing a very good benefit by
forcing a single vertex program interface down all the hardware
vendors' throats.

This one is truly stunning: the drivers just worked for all the new
features that I tried. I have tested a lot of pre-production 3D cards, and it
has never been this smooth.


The things that are indifferent:

I'm still not a big believer in hardware accelerated curve tessellation.
I'm not going to go over all the reasons again, but I would have rather
seen the features left off and ended up with a cheaper part.

The shadow map support is good to get in, but I am still unconvinced
that a fully general engine can be produced with acceptable quality using
shadow maps for point lights. I spent a while working with shadow
buffers last year, and I couldn't get satisfactory results. I will revisit
that work now that I have GeForce 3 cards, and directly compare it with my
current approach.

At high triangle rates, the index bandwidth can get to be a significant
thing. Other cards that allow static index buffers as well as static vertex
buffers will have situations where they provide higher application speed.
Still, we do get great throughput on the GF3 using vertex array range
and glDrawElements.
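
For reference, the basic submission path being described, sketched by the editor with plain client-side arrays (on a GF3 the vertex data would normally live in memory set up through NV_vertex_array_range); vertexData, indexData, and numIndexes are placeholders:

    // Draw one indexed triangle list from static application geometry.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 3 * sizeof(GLfloat), vertexData);
    glDrawElements(GL_TRIANGLES, numIndexes, GL_UNSIGNED_SHORT, indexData);
    glDisableClientState(GL_VERTEX_ARRAY);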

The things that are bad about it:

Vertex programs aren't invariant with the fixed function geometry paths.
That means that you can't mix vertex program passes with normal
passes in a multipass algorithm. This is annoying, and shouldn't have
happened.

Now we come to the pixel shaders, where I have the most serious issues.
I can just ignore this most of the time, but the way the pixel shader
functionality turned out is painfully limited, and not what it should have
been.

DX8 tries to pretend that pixel shaders live on hardware that is a lot
more general than the reality.

Nvidia's OpenGL extensions expose things much more the way they
actually are: the existing register combiners functionality extended to
eight stages with a couple tweaks, and the texture lookup engine is
configurable to interact between textures in a list of specific ways.
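
A minimal sketch from the editor of that "list of specific ways" (NV_texture_shader tokens assumed, not code from the engine): texture unit 1 is configured to do a dependent lookup driven by the alpha/red result of unit 0, one of the fixed interactions the chip offers:

    // Unit 0: ordinary 2D fetch; unit 1: dependent AR lookup fed by unit 0.
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
              GL_DEPENDENT_AR_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glEnable(GL_TEXTURE_SHADER_NV);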

I'm sure it started out as a better design, but it apparently got cut and cut
until it really looks like the old BumpEnvMap feature writ large: it does
a few specific special effects that were deemed important, at the expense
of a properly general solution.

Yes, it does full bumpy cubic environment mapping, but you still can't
just do some math ops and look the result up in a texture. I was
disappointed on this count with the Radeon as well, which was just
slightly too hardwired to the DX BumpEnvMap capabilities to allow
more general dependent texture use.

Enshrining the capabilities of this mess in DX8 sucks. Other companies
had potentially better approaches, but they are now forced to dumb them
down to the level of the GF3 for the sake of compatibility. Hopefully
we can still see some of the extra flexibility in OpenGL extensions.


The future:

I think things are going to really clean up in the next couple years. All
of my advocacy is focused on making sure that there will be a
completely clean and flexible interface for me to target in the engine
after DOOM, and I think it is going to happen.

The market may have shrunk to just ATI and Nvidia as significant
players. Matrox, 3D labs, or one of the dormant companies may surprise
us all, but the pace is pretty frantic.

I think I would be a little more comfortable if there was a third major
player competing, but I can't fault Nvidia's path to success.

2) John Carmack and Jim Dose made a few comments on Slashdot regarding the move to Visual C++ for the new Doom engine:

John Carmack:

We moved to C++ for the current game (which does not have an official full name yet).

I will probably do a .plan update about it, because it has definitely had its pros and cons.

Jim Dose had inadvertently used a few MS-specific idioms that we had to weed out over the past couple weeks of the bring-up on OS-X.

Jim Dose:

While a lot of the renderer is not made out of objects, it's still C++ and uses class objects for vector and matrix math, as well as storage classes such as lists and hash tables. Materials and models are implemented as classes, and as time goes on, more of the renderer will take advantage of OOP, but only where it poses a benefit. C++ can be written to have no overhead compared to C if you know what you're doing.
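
As a rough illustration of that last point (hypothetical code from the editor, not id's actual classes): a fully inlined vector class with no virtuals compiles down to the same instructions as hand-written C struct math.

    // Hypothetical Vec3: everything inline, no virtual functions, so there
    // is no overhead versus a plain C struct and free functions.
    class Vec3 {
    public:
        float x, y, z;

        Vec3() : x(0.0f), y(0.0f), z(0.0f) {}
        Vec3(float x_, float y_, float z_) : x(x_), y(y_), z(z_) {}

        Vec3  operator+(const Vec3 &o) const { return Vec3(x + o.x, y + o.y, z + o.z); }
        Vec3  operator*(float s) const       { return Vec3(x * s, y * s, z * s); }
        float Dot(const Vec3 &o) const       { return x * o.x + y * o.y + z * o.z; }
    };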

3) John and Jim made some more comments on Slashdot regarding the models of the characters and monsters present during the demonstration:

John Carmack:

We don't have any technology specifically directed towards character features. The animation was done pretty conventionally in Maya. Our new animator comes from a film background, and we are finding that the skills are directly relevant in the new engine.

Jim Dose:

When I originally discussed what features we wanted in the animation system with the animator, I suggested adding controls for parametric facial animation, and he was basically horrified. His response was that he could do a much better job by hand. "This [animation] is what I do," he said. After seeing the results of what he can accomplish by hand, I tend to agree.

I've looked into the research that's been done on parametric facial animation, and while it's impressive, I haven't seen anything that approaches the quality an animator can achieve by hand. Even when the set of expressions it uses is manually created, the expressiveness doesn't compare to the subtlety an animator can put into it.

While the generality of a parametric system would be great for generating massive amounts of facial animation, as well as animation for dynamic content (such as net-based voice communication), if the animator is willing and able to handle the workload, I am more than happy to stick with hand animation. The technical challenge would be quite enjoyable, but in the end, I'll take a limited amount of high-quality hand animation over an unlimited quantity of mediocre computer-generated animation.

4) I chatted a bit with Graeme Devine, and he said that the main reason the models looked this awesome was the use of real bump mapping.
Personally, I think it was also due to the incredible polycount of the characters (35,000, guys! 35,000!!!!!!!!!).

Ah ha, so even the GeForce 3 isn't perfect. I'm glad to hear that it's mostly a joy to work with and that as few things as possible are a pain in the ass.

It's good to keep things in perspective like this. TV-Out on the GeForce 2 *SUCKED BALLS*, but no one really cared except Gamecenter... those were the only fuckers that rode nVidia because of it. I'm a big proponent of TV-Out.

I remember that in the original review of Street Fighter 2 for the SNES, GamePro magazine gave it five point zeros across the board. Excuse me? The music fucking sucked! It deserved a three at best. Simple immaturity, blinded to the cons by the pros.

The thing that really bothers me is the pricing now. You can't tell me that production and shipping of the GeForce 3 card (everything included) could POSSIBLY exceed fifty fucking bucks. That's why they're made in far-away lands.

Ford has some of its parts made in Mexico, where the workers (who are considered RICH by this standard) make five dollars a day. I know, I used to work for Ford. Dirty fucks.

Six hundred fucking dollars? Ha. Yeah, yeah, FUCKING RIGHT. You can just SUCK MY FUCKING COCK, nVIDIA. Two hundred dollars is a fucking ridiculous price for a video card, let alone three times that amount. Fuck your markup, don't you have enough millions?

C'mon, GeForce 3 MX, I know you're coming! I don't even care if it limits me to Doom 3 in 800 by 600; I refuse to pay that much for a fucking video card. nVidia should be fucking ASHAMED.

If they were smart they'd keep the price REASONABLE, because EVERYONE WILL WANT ONE for Doom 3. By making the price so fucking retarded, only the rich gamers are going to get one.

Thank God that game isn't coming out for another year or two. By that time I should be able to snag nVidia's card with tile-based rendering for a realistic price, and that card will be leaps and bounds faster than a GeForce 3.

Man, now I know what it felt like to have to go out and get a 486 to play the new Doom game...
