
How optimizeable is the Doom renderer?


esselfortium said:

Except that it doesn't actually have more performance potential, because of the way Doom's levels are built. Also that you're automatically assuming that everyone who uses OpenGL must surely "think software is trash and should be rid of it".

Claiming that I posted "derogatory remarks" is an awfully strange way of withdrawing from a discussion.

You know, like, given that you're still here and that I didn't make any such comments about you.

You know what, you are right. I obviously misused the term "derogatory".

It looks like, as far as optimizations go, according to Graf you really can't do anything to the renderer to make it faster. Hmm, I was hopeful about the renderer's performance potential; it is too bad.

Maes said:

Edit: the infamous thread, so you don't think I'm talking out of a cybie's ass ;-)


just for the record, i tried entryway's test demos (only nuts.wad and sunder.wad) with the provided configs and got the following results:

nuts.wad:
glboom 202-206 fps (there are small variations every time)
prboom 151-154 fps

sunder.wad:
glboom 138-140 fps
prboom 136-138 fps

gaming system is an AMD phenom II 1090t @ 3.8 ghz, AMD HD 5870 graphics card, benchmarks were run in 640x480 as per cfg.

there were 2 surprises:

- virtually the same frame rate for hardware & software in the sunder demo, while the nuts demo shows a clear difference

- overall performance of a system seems to count less than in modern games. for example entryway stated these results for a core2-based pc, which aren't far from what a six-core is getting. the frame rate seems to depend more on CPU clock.

i plan on upgrading to ivy bridge and a next-gen graphics card at some point this year. i expect ivy to reach 5 ghz with little effort, but my guess is that any gain over the current system will be mostly due to the higher clock.


To make those benchmarks even more meaningful, a way to compare "renderless" performance should be provided: how many FPS would be obtained if actors were updated, the BSP tree navigated, etc., but NOTHING actually drawn to the screen? (The BSP tree traversal and any geometry-building functions, visplane & sprite clipping and sorting, etc. must still be performed, so what is compared is the actual time needed to set up everything for a draw, without actually performing it.)

This means that OpenGL directives must not be sent, and the drawcol & drawspan functions must not be called, but everything else should be done.
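The Doom source already dispatches column and span drawing through function pointers (colfunc and spanfunc in r_main.c), so a "renderless" mode could simply swap in no-op stubs. The sketch below is illustrative, not the actual source; the counter and the R_SetRenderless toggle are invented for the demo:

```c
#include <stddef.h>

/* Sketch: dispatch drawing through a function pointer, as Doom's
 * software renderer does with colfunc. Swapping in a no-op lets all
 * setup work (BSP walk, clipping, sorting) run while nothing is
 * written to the framebuffer. */

typedef void (*drawfunc_t)(void);

static int columns_drawn = 0;            /* demo-only counter */

static void R_DrawColumn_Real(void)
{
    columns_drawn++;                     /* stands in for blitting pixels */
}

static void R_DrawColumn_Noop(void)
{
    /* intentionally empty: setup cost only, no draw cost */
}

static drawfunc_t colfunc = R_DrawColumn_Real;

/* Hypothetical toggle, called once at startup for the benchmark run. */
static void R_SetRenderless(int renderless)
{
    colfunc = renderless ? R_DrawColumn_Noop : R_DrawColumn_Real;
}
```

Because the dispatch site (`colfunc()`) is unchanged, the renderless run pays the same per-call overhead as the real one, which keeps the comparison honest.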

The "FPS" value obtained this way is also an upper bound on the speedup that even the world's most perfect renderer could give you, and it gives a good idea of the overheads that will always be there even if you parallelize or delegate the actual drawing to hardware.
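The upper-bound arithmetic can be made concrete: per-frame draw cost is the difference of the two frame times, and the renderless FPS is the ceiling no renderer rewrite can exceed. The numbers in the comment are made up for illustration:

```c
/* Convert an FPS measurement to a per-frame time in milliseconds. */
static double frame_ms(double fps)
{
    return 1000.0 / fps;
}

/* The renderless FPS bounds any achievable speedup from a faster
 * renderer, since the setup work remains. */
static double max_speedup(double fps_full, double fps_renderless)
{
    return fps_renderless / fps_full;
}

/* Example with made-up numbers: a full run at 150 fps and a renderless
 * run at 400 fps imply a draw cost of frame_ms(150) - frame_ms(400),
 * about 4.17 ms per frame, and a hard ceiling of 400 fps, i.e. at most
 * about a 2.67x speedup from any renderer improvement. */
```

This is just Amdahl's law applied to the draw phase: the setup portion that a better renderer cannot touch limits the overall gain.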

