How optimizable is the Doom renderer?

Then again, OpenGL is known to choke on maps whose geometry doesn't translate well to its primitive structures, and in general wherever the OpenGL speedup doesn't manage to overcome the overhead of using it.

This includes certain kinds of architecture, numerous sprites, etc., and in general any situation which results in more geometry pushing/adaptation than actual rendering. The Doom engine is infamous for being a bad match for OpenGL and polygon-based rendering in general.

I realize that this is highly port-dependent (and maybe driver-dependent), but there are threads about timedemos where OpenGL is smoked by software renderers on relatively simple maps (vanilla complexity), by factors exceeding even the 200%-300% you mentioned, precisely because that "payoff condition" does not hold: the geometry changes are simply pushed too quickly for the hardware pipeline to keep up, regardless of how simple the scenes may be.

Edit: the infamous thread, so you don't think I'm talking out of a cybie's ass ;-)

http://www.doomworld.com/vb/source-ports/49378-software-vs-hardware-timedemos/

About the AI code...too many data dependencies and almost complete loss of consistency/repeatability, unless you alter the engine's rules dramatically (in which case we might as well be talking about another game entirely).


In that case, the Doom engine is just a bad match for development. I would like to see a good example where the software renderer excels over the hardware renderer in performance on particular maps. Should be interesting.

It's not a great example since it's not vanilla, but it does have complex architecture: Sunder MAP10.

OpenGL
http://i.imgur.com/AfqFd.png

Software
http://i.imgur.com/DhxOo.png

Edit: I saw your point in the thread. But simply put, if you don't have a video card (and nowadays even an inexpensive one will do), then your CPU will be best at rendering something like Doom compared to a hardware renderer. Now, I would say there is always room for optimization, and I would be more optimistic about OpenGL than software. If a lot of devs came together to find a way to optimize the hardware renderer rather than work on the software renderer, we might actually have something.


Now try that in other ports that have 2-sided line optimization, like Cardboard or PrBoom-plus, where things are still being tweaked in the renderer. I'll be honest with you: while ZDoom's renderer was fast, it hasn't been touched in a long time. Plus, you're probably rendering to texture on that second shot anyway... :P

TIHan said:

In that case, the Doom engine is just a bad match for development. I would like to see a good example where the software renderer excels over the hardware renderer in performance on particular maps. Should be interesting.

S9DM03 from SpaceDM9, for Skulltag. In software I get perfectly playable framerates on my computer, but in OpenGL it spends a lot of time as a slide show. And that's even when considering what CSonicGo said about ZDoom's version of the software renderer having some major inefficiencies, some of which are apparent in that map.


I did some testing in PrBoom on Sunder; the software renderer seemed to handle MAP10 much better at lower resolutions, compared to GL at the same low resolution.

As for SpaceDM9, I got roughly the same framerate between renderers using (G)ZDoom.

I still say get rid of software and pour more effort into optimizing the hardware renderer.

TIHan said:

I did some testing in PrBoom on Sunder; the software renderer seemed to handle MAP10 much better at lower resolutions, compared to GL at the same low resolution.
I still say get rid of software and pour more effort into optimizing the hardware renderer.


You really CAN'T. This may not seem obvious, but the number of polys needed to render that Sunder scene is ginormous, while all the Doom renderer has to do is write columns to the screen. That's not gonna change, no matter how much you wish it to.

There's a reason games have dynamic LOD: if all polys were rendered at once, you'd be in the millions in no time. Think about it this way: every line, if the nodes treated it well, is two tris. MAYBE. That's not counting the massive amount of subsector tris on the floor in that shot. There's a LOT of geometry here.
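To put a rough number on that, here's a back-of-the-envelope sketch. All the per-element counts are assumptions for illustration (not measurements of Sunder or of any particular port); the point is just how fast walls and flats multiply into triangles.

#include <stdio.h>

/* Very rough estimate of how many triangles a GL port has to push for a
 * detailed Doom scene.  All counts below are assumptions for the sake of
 * illustration, not measurements from any particular map or port. */
int main(void)
{
    long visible_segs       = 20000;  /* hypothetical segs in view in a Sunder-sized scene */
    long tiers_per_seg      = 2;      /* assumed average of upper/mid/lower wall parts */
    long tris_per_wall_quad = 2;      /* one wall quad = two triangles */

    long visible_subsectors = 8000;   /* hypothetical */
    long avg_verts_per_ssec = 5;      /* assumed average subsector size */
    long planes_per_ssec    = 2;      /* floor + ceiling */
    /* a convex polygon with n vertices fans into n - 2 triangles */
    long tris_per_ssec = (avg_verts_per_ssec - 2) * planes_per_ssec;

    long wall_tris  = visible_segs * tiers_per_seg * tris_per_wall_quad;
    long plane_tris = visible_subsectors * tris_per_ssec;

    printf("wall tris:  %ld\n", wall_tris);
    printf("plane tris: %ld\n", plane_tris);
    printf("total:      %ld\n", wall_tris + plane_tris);
    return 0;
}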


TIHan, you're assuming that the same people working on software renderers would be interested in doing any work on OpenGL. You're also assuming that their users would ever willingly make the switch if they prefer one or the other.

Beyond that, you're assuming that I (or anyone, for that matter) haven't noticed that most high-profile GZDoom projects look akin to bargain-bin N64 games at best, but that's neither here nor there.


I did a small test on Sunder a while back when I had some time, using one of the maps with a lot of "turret" detail:

_|-|_|-|_ etc
And guess which renderer won out? The one that could selectively draw to the screen. Guess which renderer can't do that?

The hardware one.

That's because polygons aren't like columns. For the scene to show up properly on screen with a hardware renderer, all polys must be rendered. The software renderer only draws as many segs as it needs to fill each span, with bonus points if it's a 1-sided line. Hardware renderers don't have that luxury; they simply must render them all.
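For anyone curious what "selectively draw" means in practice, here's a minimal sketch of the solid-column idea the software renderer uses (simplified from the real r_bsp.c logic; names and details differ): once a 1-sided wall fills a range of screen columns, anything behind it in that range is rejected before a single pixel is drawn.

/* Minimal sketch of the solid-column occlusion idea, simplified from the
 * software renderer: a 1-sided wall marks its columns as filled, and any
 * later seg that falls entirely inside filled columns is thrown away. */
#include <stdbool.h>
#include <string.h>

#define SCREENWIDTH 320

static bool solidcol[SCREENWIDTH];   /* true = this column is already covered */

void R_ClearClipSegs(void)           /* called once per frame */
{
    memset(solidcol, 0, sizeof solidcol);
}

void R_MarkSolid(int x1, int x2)     /* a 1-sided wall was drawn across [x1, x2] */
{
    for (int x = x1; x <= x2; x++)
        solidcol[x] = true;
}

/* Returns true if a seg spanning [x1, x2] is completely hidden and can be
 * rejected without touching the framebuffer at all. */
bool R_SegFullyOccluded(int x1, int x2)
{
    for (int x = x1; x <= x2; x++)
        if (!solidcol[x])
            return false;
    return true;
}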

If you ever get a chance, find and download (and play) some of the old versions of CUBE, by aardappel. He implemented a very interesting LOD system where sections of the map would simplify if a lot of detail was on the screen. Naturally, it looks atrocious if you like large, open areas like Sunder.


esselfortium, perhaps I might be assuming, but why not assume? It's 2012, not 1995.

If you want more support for software rendering, then I can't stop anyone from working on it, or users here from wanting to use it. All I'm trying to do here is make the point that, with more time and effort, a lot of the issues we experience in OpenGL could be solved, so that it truly surpasses software, which it already does in most cases even now.

I may not know anything about rendering, but I'm damn well sure OpenGL can pull more horsepower than software. Perhaps some form of dynamic occlusion culling could solve a few issues.

TIHan said:

esselfortium, perhaps I might be assuming, but why not assume? It's 2012, not 1995.

The year meaning what, exactly, in this case?

esselfortium said:

The year meaning what, exactly, in this case?


Software rendering is old? Developers over the past decade use hardware rendering because it's faster and more maintainable? Hello?

I will end this conversation from my end. :) No sense in trying to argue with you, as you will just keep posting derogatory remarks against me. Cool! :D


CPUs just keep getting faster and faster, so what's the point in dumping software renderers? Furthermore, until someone comes up with a visually perfect way of dealing with that disgusting sprite clipping all hardware renderers suffer from, software renderers shall continue to be relevant and necessary.

Guest DILDOMASTER666
TIHan said:

I may not know anything about rendering, but I'm damn well sure OpenGL can pull more horsepower than software. Perhaps some form of dynamic occlusion culling could solve a few issues.


It's pretty obvious by the way you talk about OpenGL like some sort of godsend that you do not know anything about rendering at all

GZDoom is brutally inefficient, and there's a reason for that -- generally, video cards nowadays are designed to handle very large batches of vertices very efficiently, but they fall short trying to do the same with smaller batches of vertices. When it comes right down to it, the problem Graf faced was that when it came time to render the subsectors and segs, he was passing lots of small batches of vertices at a time to the GPU, which is not likely to see any improvement in the foreseeable future. GZDoom is slow and inefficient because it has to be. In all fairness, Graf did the best he could with what he had.
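To make the "lots of small batches" point concrete, here's an illustrative sketch of a per-subsector immediate-mode draw loop in the style older GL ports used (the types and names are made up for this sketch, not GZDoom's actual code): every flat becomes its own tiny glBegin/glEnd block of a handful of vertices, and the per-call driver overhead ends up dwarfing the rasterization work.

/* Illustrative only: per-subsector immediate-mode drawing.  The ssec_t
 * type and DrawFloors() are invented for this sketch; the point is the
 * call pattern (one tiny batch per subsector), not the exact API use. */
#include <GL/gl.h>

typedef struct { float x, y; } vert_t;
typedef struct { int numverts; vert_t *verts; float floorz; } ssec_t;

void DrawFloors(const ssec_t *ssectors, int numssectors)
{
    for (int i = 0; i < numssectors; i++) {
        const ssec_t *ss = &ssectors[i];
        glBegin(GL_TRIANGLE_FAN);            /* one draw per subsector...      */
        for (int v = 0; v < ss->numverts; v++)
            glVertex3f(ss->verts[v].x, ss->floorz, ss->verts[v].y);
        glEnd();                             /* ...typically under 10 vertices */
    }
}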

Now shut the fuck up and play more PrBoom

btw

TIHan said:

Software rendering is old? Developers over the past decade use hardware rendering because it's faster and more maintainable? Hello?

I will end this conversation from my end. :) No sense in trying to argue with you, as you will just keep posting derogatory remarks against me. Cool! :D


essel didn't make any derogatory remarks against you, you clown. By the way, in case you missed the memo, Doom is old too. See my above paragraph for why you're wrong about hardware rendering being "faster and more maintainable" for a game like Doom.

HavoX said:

i fail to see the difference (other than the pinkies)



There's indeed little visible difference, apart from the metal/wood column to the player's left, which appears pixelated in software.

On the other hand, software has a surprisingly crisp look to it. This is evident in the shot from Sunder MAP10, where the tower looks much clearer and more detailed, and there's no graphical error on the sky, or whatever that smear is.

I mean, surprising for someone like me who hasn't played in software for ages. I'll try it again; I'm curious to see how it will look on a 26" monitor at its native 1920x1200 resolution.


@Fisk: is there any possibility of changing this inefficient rendering method? Probably by rewriting the renderer from scratch, I guess, at least in theory?

Guest DILDOMASTER666

I doubt it. At the end of the day, if you don't want to be drawing the entire map at once, you'll have to pass individual subsectors down the pipeline, which means lots of small batches of vertices going to the GPU. You can't really get around that AFAIK.

Porsche Monty said:

CPUs just keep getting faster and faster, so what's the point in dumping software renderers? Furthermore, until someone comes up with a visually perfect way of dealing with that disgusting sprite clipping all hardware renderers suffer from, software renderers shall continue to be relevant and necessary.


You ain't kidding; every implementation of that in an OpenGL renderer has been quite hilarious. GZDoom has what, 5 ways of doing sprite clipping now?

It's bizarre.


Well, the only way I can envision this getting fixed for good is by rendering sprites like menus and zooming them in and out in a way that makes them seem aware of their place within the map's architecture, but that's just me talking out of my arse.

Maes said:

Looks like OpenGL without any blurring/texture filtering on, which is a quite unusual mode. It's well-known (?) that you can do that and have "crisp OpenGL", OK.

Some other give-aways are the proper 3D perspective (look at how the door's "glass brick" pillars appear slanted in the 1st screenshot. You CAN'T have that with a pure vanilla-like renderer), and the lack of distance-lighting discoloration on the pinkies.

The problem is that you don't always have that degree of control over the settings, and then there are always subtle or even major differences in the way distance/sector lighting works etc. which may not be everybody's cup of tea.

Plus when people speak of "OpenGL" they usually mean the filtered "non pixelated" kind, not the one that tries too hard to look like software mode ;-)


Unblurred OpenGL with linear mipmapping and anisotropic filtering has always been my fav in Quake.

Fisk said:

It's pretty obvious by the way you talk about OpenGL like some sort of godsend that you do not know anything about rendering at all

GZDoom is brutally inefficient, and there's a reason for that -- generally, video cards nowadays are designed to handle very large batches of vertices very efficiently, but they fall short trying to do the same with smaller batches of vertices. When it comes right down to it, the problem Graf faced was that when it came time to render the subsectors and segs, he was passing lots of small batches of vertices at a time to the GPU, which is not likely to see any improvement in the foreseeable future. GZDoom is slow and inefficient because it has to be. In all fairness, Graf did the best he could with what he had.

Now shut the fuck up and play more PrBoom

btw



essel didn't make any derogatory remarks against you, you clown. By the way, in case you missed the memo, Doom is old too. See my above paragraph for why you're wrong about hardware rendering being "faster and more maintainable" for a game like Doom.


Hahaha. What are you talking about? Also, mind your offensive language please.


I think TIHan is talking out of his ass. If you don't know anything about rendering, you shouldn't be saying anything about OpenGL being faster by default. Honestly, I like OpenGL as well; I just happen to prefer software in DOOM, and I actually run it at 320x200 and find that extremely enjoyable. Other than that, there's a lot of stuff you can do with a software renderer that you can't with OpenGL. Have you ever wondered why those specters just look transparent? (Of course it's possible to do with shaders, but whatever.) Even then, it's very possible to write a full 3D true-color renderer for DOOM based on the same data formats. So software should never be kicked, especially for a game as old as it is. God forbid you look at the Wolf3D ports.

shadow1013 said:

I think TIHan is talking out of his ass. If you don't know anything about rendering, you shouldn't be saying anything about OpenGL being faster by default. Honestly, I like OpenGL as well; I just happen to prefer software in DOOM, and I actually run it at 320x200 and find that extremely enjoyable. Other than that, there's a lot of stuff you can do with a software renderer that you can't with OpenGL. Have you ever wondered why those specters just look transparent? (Of course it's possible to do with shaders, but whatever.) Even then, it's very possible to write a full 3D true-color renderer for DOOM based on the same data formats. So software should never be kicked, especially for a game as old as it is. God forbid you look at the Wolf3D ports.


Looks like more offensive language against me, again. And I shouldn't say anything about rendering because I don't know how to program it? Please. While I can agree with you that it is somewhat fun to play DOOM in software at 320x200, for more people outside this community to even use these ports, I bet they would want OpenGL at high resolutions over your software. :)


Well, in that case, if you're speaking about people outside the community, then yes, OpenGL would most likely be a better option for them. But that doesn't mean we should throw out the software renderer like you said. What you have suggested is that we completely rid ourselves of software and use only hardware, "because software is trash", whereas a lot of people honestly believe it isn't.

shadow1013 said:

Well, in that case, if you're speaking about people outside the community, then yes, OpenGL would most likely be a better option for them. But that doesn't mean we should throw out the software renderer like you said. What you have suggested is that we completely rid ourselves of software and use only hardware, "because software is trash", whereas a lot of people honestly believe it isn't.


Then we have two types of people: people who think software is trash and we should be rid of it, and people who love software and think it should get more development focus than the hardware renderer. Heh, I might be in the minority with the "software is trash" bit; maybe I should put it this way: "Hey, we should keep software, but let's focus more development on OpenGL, since it has more performance potential than the software renderer does."

TIHan said:

It's not a great example since it's not vanilla, but it does have complex architecture: Sunder MAP10.

OpenGL
http://i.imgur.com/AfqFd.png

Software
http://i.imgur.com/DhxOo.png

You could have found a better screenshot to argue OpenGL's superiority... Just look at the sky. Something's wrong there.

TIHan said:

Software rendering is old? Developers over the past decade use hardware rendering because it's faster and more maintainable? Hello?

OpenGL evolves. There's talk of deprecating immediate mode. GLBoom+ and GZDoom both use immediate mode.

Graf tried to rewrite an OpenGL renderer based on the VBO arrays that were to be the Way of the Future, but it ended up being dropped because it wasn't actually faster than immediate mode.

OpenGL can offer greater performance than software, especially at high resolutions, but it's not a sure thing given the variety of configurations out there. All the people with a crappy Intel GMA, or an ATI card (aka "who cares about OpenGL, DirectX is what matters"), will generally be better off with the software renderer for a long time to come.

Porsche Monty said:

CPUs just keep getting faster and faster

Not really. They only do so by getting more and more cores, as the other ways to increase power (clock speed, circuit thinness, transistor count) have all more or less reached hard limits. Which means that for a single-threaded program, CPUs are not effectively getting faster. That's why Maes was interested in seeing how much a parallel architecture could help, but the answer was "not that much".
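Amdahl's law is the usual way to put a number on that "not that much": if only a fraction p of the frame time can be parallelized, the overall speedup on n cores is capped at 1 / ((1 - p) + p / n). The 0.6 in the sketch below is an assumed figure purely for illustration, not a measurement of any Doom port.

#include <stdio.h>

/* Amdahl's law: with a parallelizable fraction p, n cores give at most
 * 1 / ((1 - p) + p / n) overall speedup.  p = 0.6 is an assumption for
 * illustration only. */
int main(void)
{
    double p = 0.6;
    for (int n = 1; n <= 16; n *= 2) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d cores -> %.2fx\n", n, speedup);
    }
    /* even with infinitely many cores the cap is 1 / (1 - p) = 2.5x here */
    return 0;
}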

Csonicgo said:

You ain't kidding; every implementation of that in an OpenGL renderer has been quite hilarious. GZDoom has what, 5 ways of doing sprite clipping now?

There's just one way clipping is done. There are a few options used to adjust sprite coordinates before clipping is done, though, which basically are "don't do it", "always do it", "do it selectively" and "do it selectively but not by the full amount that'll get clipped".

Gez said:

Not really. They only do so by getting more and more cores, as the other ways to increase power (clock speed, circuit thinness, transistor count) have all more or less reached hard limits. Which means that for a single-threaded program, CPUs are not effectively getting faster. That's why Maes was interested in seeing how much a parallel architecture could help, but the answer was "not that much".


Not quite the whole story. Intel just released a 4 GHz CPU last year, and I don't think they're leaving it at that. They're also developing a laser-based CPU which is supposed to be orders of magnitude faster than anything we've seen before, and if you want to be really optimistic, we might actually live to see personal quantum computers...

Hard limits? Not before they hit the atomic barrier.

Pirx said:

@Fisk: is there any possibility of changing this inefficient rendering method? Probably by rewriting the renderer from scratch, I guess, at least in theory?



Not really.
The efficiency problems are all caused by how Doom stores, handles and manipulates its level data. Sector coordinates can change, and so can textures, light values and whatever else. This makes it nearly impossible to build efficient structures to store the geometry, and one is basically forced to rebuild the vertex buffers each frame. All well and good, but it's not a single bit faster than using immediate mode - on the contrary! Doom requires so many vertex attributes when using enhanced features that the rigidity of the vertex buffer format bloats the amount of data to a degree that all potential improvements are cancelled out.

The only thing I can imagine improving speed is explicitly marking static parts of the map, but with scripts, neighboring-sector triggers and loads of other things that can cause changes to map geometry, it's a hopeless battle. And you'd still face the problem that most batches are only <10 vertices, so nothing is really gained.
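For reference, the per-frame rebuild described above boils down to something like this sketch (generic GL calls, not GZDoom's actual code; BuildLevelVerts() is a hypothetical helper standing in for the re-tessellation step). Because the geometry can change every frame, the full glBufferData upload happens every frame too, which is where the expected win over immediate mode evaporates.

/* Generic sketch of per-frame VBO streaming, not GZDoom's actual renderer.
 * BuildLevelVerts() is a hypothetical helper that re-tessellates the map;
 * since sectors can move, its output has to be re-uploaded every frame. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <stddef.h>

extern GLuint level_vbo;                        /* created once with glGenBuffers */
float *BuildLevelVerts(size_t *out_bytes);      /* hypothetical per-frame rebuild */

void DrawLevel(void)
{
    size_t bytes;
    float *verts = BuildLevelVerts(&bytes);

    glBindBuffer(GL_ARRAY_BUFFER, level_vbo);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)bytes, verts, GL_STREAM_DRAW);  /* full re-upload */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(bytes / (3 * sizeof(float))));
    glDisableClientState(GL_VERTEX_ARRAY);
}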

On modern systems GZDoom is 100% CPU bound, unless large amounts of portals are used - but they are really the only thing that may stall the graphics card.

TIHan said:

Then we have two types of people: people who think software is trash and we should be rid of it, and people who love software and think it should get more development focus than the hardware renderer.

Dude, both software and hardware are constantly being developed by programmers. I think people in this community just use whatever they prefer.

TIHan said:

Then we have two types of people: people who think software is trash and we should be rid of it, and people who love software and think it should get more development focus than the hardware renderer. Heh, I might be in the minority with the "software is trash" bit; maybe I should put it this way: "Hey, we should keep software, but let's focus more development on OpenGL, since it has more performance potential than the software renderer does."

Except that it doesn't actually have more performance potential, because of the way Doom's levels are built. Also, you're automatically assuming that everyone who uses OpenGL must surely "think software is trash and should be rid of it".

TIHan said:

I will end this conversation from my end. :) No sense in trying to argue with you, as you will just keep posting derogatory remarks against me. Cool! :D

Claiming that I posted "derogatory remarks" is an awfully strange way of withdrawing from a discussion.

You know, like, given that you're still here and that I didn't make any such comments about you.


Well, I prefer playing Doom in software mode since it looks crisp and clear. And also, I like the "radioactive" Doomguy. :P I am talking about the light effect in the software renderer which makes closer things brighter.
On the other hand, OpenGL has the plus of supporting full 3D without any hassle (3D floors and models), and it also supports dynamic lights.

I only switch to hardware mode when the PWAD requires it, else I play using the software renderer because IMO it looks better. And that is only an opinion so if you prefer using hardware mode then that's fine with me. :)

Anyways, the guys at id wanted to make the game runnable on the more-powerful-than-average but still affordable computers of the time, because what good is a game you can't actually play (without shelling out a bag of money for a supercomputer)?

Basically, the Doom engine was already very optimized. It's still possible to squeeze out some more, but since it would be faster and easier to wait for ultra-powerful processors to get cheaper than to actually sit down and optimize the code, it's not really worth it.


Don't forget that there is significant room for optimization outside the renderer as well, optimizations which source ports in general adopt swiftly.

Perhaps the most infamous examples are in the WAD loading subsystem, where the method for searching lumps by name was utterly dire, and it was one of the first things Killough fixed in Boom.
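For the curious, here's a minimal sketch of the idea behind that fix: replace the vanilla back-to-front linear scan over every lump with a name-hashed chain. The structure and names below are simplified and differ from the real w_wad.c.

/* Sketch of hashed lump lookup.  Simplified; field and function layouts
 * differ from the real w_wad.c, but the idea is the same: O(1) average
 * lookup instead of scanning the whole directory for every request. */
#include <strings.h>   /* strncasecmp */
#include <ctype.h>

#define MAXLUMPS 65536
#define HASHSIZE 256

typedef struct {
    char name[9];
    int  hash_next;    /* next lump index in this chain, or -1 */
} lumpinfo_t;

static lumpinfo_t lumpinfo[MAXLUMPS];
static int        hash_head[HASHSIZE];
static int        numlumps;

static unsigned HashName(const char *name)
{
    unsigned h = 0;
    for (int i = 0; i < 8 && name[i]; i++)
        h = h * 31 + (unsigned)toupper((unsigned char)name[i]);
    return h % HASHSIZE;
}

void W_InitLumpHash(void)
{
    for (int i = 0; i < HASHSIZE; i++)
        hash_head[i] = -1;
    /* inserting in load order puts later (PWAD) lumps at the head of each
     * chain, preserving the "last lump wins" override behaviour */
    for (int i = 0; i < numlumps; i++) {
        unsigned h = HashName(lumpinfo[i].name);
        lumpinfo[i].hash_next = hash_head[h];
        hash_head[h] = i;
    }
}

int W_CheckNumForName(const char *name)
{
    for (int i = hash_head[HashName(name)]; i != -1; i = lumpinfo[i].hash_next)
        if (!strncasecmp(lumpinfo[i].name, name, 8))
            return i;
    return -1;   /* not found */
}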

And, as I said before, maps "heavy" enough to take a toll on the renderer probably tax other subsystems as well, and would slow down the game anyway, even if you played with the automap on or without drawing to the screen at all. Even a map chock-full of "inactive" monsters taxes the thinker subsystem so much that it may easily consume more time than the renderer itself, so even if you got a zero-time renderer (infinite speedup), you'd still have a big chunk of time that you can do nothing about.
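The per-tic cost being described comes from a loop along these lines (a simplified sketch of the idea behind p_tick.c, not the exact code): every live thinker gets a function call every tic whether or not it is anywhere near the screen, so a map packed with monsters pays that price even with rendering switched off.

/* Simplified sketch of the per-tic thinker loop: every monster, lift,
 * light effect etc. is visited once per gametic regardless of whether
 * anything is being rendered.  Structure simplified from p_tick.c. */
typedef struct thinker_s {
    struct thinker_s *prev, *next;
    void (*function)(struct thinker_s *);   /* monster AI, plat movement, ... */
} thinker_t;

static thinker_t thinkercap;                 /* head of the circular list */

void P_RunThinkers(void)
{
    for (thinker_t *th = thinkercap.next; th != &thinkercap; th = th->next)
        if (th->function)
            th->function(th);                /* runs even for off-screen monsters */
}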

