Tim on Tom's


I think Tim Willits/Tom's Hardware means Carmack started working on the Doom engine after Quake III was released. That's what I have read in every other interview.

gatewatcher said:

I think Tim Willits/Tom's Hardware means Carmack started working on the Doom engine after Quake III was released. That's what I have read in every other interview.


Yep. That IS what they mean. A bad typo to make. Could create some confusion :P


Slashdot posted some links related to QuakeCon, and to nobody's surprise, John made a couple of interesting follow-up posts.
On the topic of engine "shelf life" and more:

My comment specifically regards the "shelf life" of a rendering engine. I think that an upcoming game engine, either the next one or the one after that, will have a notably longer usable life for content creation than we have seen so far. Instead of having to learn new paradigms for content creation every couple of years, designers will be able to continue working with common tools that evolve in a compatible way. Renderman is the obvious example -- lots of things have improved and evolved, but its fundamental definition is clearly the same as it was over a decade ago.

This is only loosely related to the realism of the graphics. I don't think a detailed world simulation that is indistinguishable from reality will be here in the next decade, except for tightly controlled environments. You will be able to have real-time flythroughs that can qualify as indistinguishable, but given the ability to "test reality" interactively, we have a lot farther to go with simulation than with rendering.



A comment on the X-Box, and the dramatic effect of coding for a specific platform:



The X-Box GPU is more of a GF4 than a GF3, but a modern PC is generally much higher end than an X-Box.

However, you can usually count on getting twice the performance out of an absolutely fixed platform if you put a little work into it. There are lots of tradeoffs that need to be balanced between the different cards on a general-purpose platform -- things that I don't do with vertex programs because it would make the older cards even slower, avoiding special casing that would be too difficult to test across all platforms (and driver revs), and double buffering of vertex data to abstract across VAR and vertex objects, for instance. We might cut the "core tick" of Doom from 60hz to 30hz on X-Box if we need the extra performance, because it has no chance of holding 60hz, but the PC version will eventually scale to that with the faster CPUs and graphics cards.
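
The "double buffering of vertex data to abstract across VAR and vertex objects" bit is the kind of plumbing he means by "work": hide the driver-specific allocation behind one small interface and flip between two buffers each frame, so the CPU never writes into memory the GPU may still be reading. A rough, purely illustrative sketch -- the class names are made up and the VAR / vertex object allocation is replaced with plain system memory, so this is not id's code:

#include <cstddef>
#include <vector>

// Abstract vertex memory: the renderer only sees this interface, whether the
// backing store is VAR memory, a vertex object, or plain system memory.
struct VertexCache {
    virtual ~VertexCache() {}
    // Hand back a pointer the current frame can fill with vertex data.
    virtual void* Alloc(size_t bytes) = 0;
    // Flip buffers at frame boundaries so the GPU can keep reading last
    // frame's data while the CPU fills the other buffer.
    virtual void SwapFrame() = 0;
};

// Generic fallback path: two system-memory buffers ping-ponged per frame.
class DoubleBufferedCache : public VertexCache {
public:
    explicit DoubleBufferedCache(size_t bytes) : current(0), used(0) {
        buffers[0].resize(bytes);
        buffers[1].resize(bytes);
    }
    void* Alloc(size_t bytes) override {
        if (used + bytes > buffers[current].size())
            return 0;                       // out of space for this frame
        void* p = buffers[current].data() + used;
        used += bytes;
        return p;
    }
    void SwapFrame() override {
        current ^= 1;                       // flip to the other buffer
        used = 0;                           // start filling it from scratch
    }
private:
    std::vector<unsigned char> buffers[2];
    int current;
    size_t used;
};

int main() {
    DoubleBufferedCache cache(1 << 20);     // 1 MB of vertex data per frame
    for (int frame = 0; frame < 2; ++frame) {
        float* verts = static_cast<float*>(cache.Alloc(256 * sizeof(float)));
        (void)verts;                        // fill with this frame's geometry
        cache.SwapFrame();                  // previous buffer stays untouched
    }
}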



Commenting on the effect of providing backend support for specific 3D hardware:



The generic back end does not use vertex programs, or provide specular highlights, so the custom back ends provide both performance and quality improvements.

There are some borderline cases that may or may not get custom coding -- Radeon R100, Matrox Parhelia, and SiS Xabre are all currently using the default path, but could benefit from additional custom coding. I will only consider that when they have absolutely rock solid quality on the default path, and if it looks like they have enough performance headroom to bother with the specular passes.

The NV20 back end has more work in it than any other, with two different cases for the lighting interaction, but on the X-Box I would probably create additional special cases to optimize some of the other possible permutations.
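
If you're wondering what a "custom back end" looks like structurally, it's essentially a function swap: every card can run the generic interaction path, and recognized chips get pointed at a richer path that adds vertex programs and the specular passes. A toy sketch with invented names, not anything lifted from the actual Doom renderer:

#include <cstdio>
#include <cstring>

struct DrawInteraction { /* light and surface parameters would live here */ };

typedef void (*InteractionFunc)(const DrawInteraction&);

static void RB_Generic_DrawInteraction(const DrawInteraction&) {
    // Lowest-common-denominator path: no vertex programs, no specular pass.
    std::puts("generic path: diffuse + bump only");
}

static void RB_NV20_DrawInteraction(const DrawInteraction&) {
    // Card-specific path: uses vertex programs and adds specular highlights.
    std::puts("NV20 path: diffuse + bump + specular");
}

// Pick a back end from the reported renderer string. Anything unrecognized
// (R100, Parhelia, Xabre, ...) stays on the always-correct generic path.
static InteractionFunc SelectBackEnd(const char* renderer) {
    if (std::strstr(renderer, "GeForce3") || std::strstr(renderer, "GeForce4"))
        return RB_NV20_DrawInteraction;
    return RB_Generic_DrawInteraction;
}

int main() {
    DrawInteraction di;
    SelectBackEnd("GeForce3 Ti 500")(di);   // takes the NV20 path
    SelectBackEnd("Matrox Parhelia")(di);   // falls back to the generic path
}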

