cacomonkey

Fastest OpenGL source port engine for WADs?


Using Skulltag (latest build), I noticed some rather significant slowdown in certain parts of wads like The Ultimate Torment & Torture and Sapphire, and I'm on a fairly good but older mid-range system (ATI X1950 video card, 2.2 GHz dual-core AMD CPU) at only 1280x1024. Is there a faster ZDoom-compatible OpenGL port that can handle high-detail WADs with little or no slowdown? TIA for any help.

C.


Fastest OpenGL source port? GLBoom+ by far.

Fastest ZDoom-compatible OpenGL source port? GZDoom. Not that there's a lot of choice; ZDoomGL is old and buggy, and Skulltag uses an old version of the GZDoom renderer.

A big issue with your hardware is that GZDoom is not optimized or even developed for ATI chipsets.

Gez said:

A big issue with your hardware is that GZDoom is not optimized or even developed for ATI chipsets.


And for no other reason than the developer doesn't like them.


Actually, now that I think about it, Vavoom is a possible choice. It does have an OpenGL renderer and it is compatible with a lot of ZDoom features. No guarantees for anything that requires ZDoom 2.2.0 or newer; but mods that can work with older versions of ZDoom should probably work with Vavoom.

John Smith said:

And for no other reason than the developer doesn't like them.

Yeah. That's his right, though. People can bitch and moan about it, but it's not extremely constructive. People are free to write something better, but then they don't. E.g., this Skulltag thread. Here we have someone explaining that GZDoom sucks and why, and that he's a much better programmer and that he knows because making efficient OpenGL renderers is his day job, and he's part of the Skulltag developer team... Aaaaaaannd Skulltag still uses an old version of GZDoom rather than a new and improved ATI-friendly hyper-efficient OpenGL renderer. Why?

So yeah. GZDoom may be deeply flawed, but it's what's available for an up-to-date ZDoom codebase.

Oh, okay, boys. I'll give it a whirl. Didn't realize I was using an outdated gzdoom renderer with the latest Skulltag. Hopefully it'll give me some speed increase. Thanks.

C.

Gez said:

A big issue with GZDoom is that GZDoom is not optimized or even developed for ATI chipsets.


I fixed that for you.

cacomonkey said:

Oh, okay, boys. I'll give it a whirl.

PrBoom+ won't whirl very well for TUTNT or Sapphire. ;)

Csonicgo said:

I fixed that for you.

For anyone who has an Nvidia chipset, there's no issue with GZDoom. :p

Gez said:

For anyone who has an Nvidia chipset, there's no issue with GZDoom. :p

Well, don't be so ignorant. The ATI userbase is growing rapidly now. It's obvious GZDoom isn't meant for ATI cards; the shaders behave completely differently, etc., it's all complicated. So it doesn't run efficiently on ATI cards, and there's much more to it than that. ATI cards tend to run many times slower in GZDoom than NVidia cards do. How else would I get 60 FPS on my old 8600 GTS in my GZDoom project, but 20 FPS in the same location with tons of dynamic lights on my current HD 4770 512MB OC'd (which is far faster)? But that gives me one advantage in design: it will make my GZDoom project... low-end friendly, haha.

Gez said:

For anyone who has an Nvidia chipset, there's no issue with GZDoom. :p

For anyone without an nvidia chipset, there's plenty of issues with GZDoom :p

John Smith said:

And for no other reason than the developer doesn't like them.



... and for no other reason than that I don't have one. I simply can't remote-profile ATI-based systems, especially if their drivers refuse to handle any supposedly sane code optimization I added. There are 3 or 4 special cases in the code that revert to less efficient methods on ATI because the good ones just don't work - and the stuff that doesn't work on ATI provides an easy 20% speed increase on NVidia.
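
To make that concrete, here is a minimal sketch of what such a vendor-specific fallback can look like. This is not GZDoom's actual code; IsATIVendor, DrawFast and DrawCompatible are hypothetical stand-ins for the real render paths:

#include <GL/gl.h>
#include <cstring>

// Ask the driver who made it; glGetString(GL_VENDOR) is standard OpenGL.
static bool IsATIVendor()
{
    const char* vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    return vendor != nullptr && std::strstr(vendor, "ATI") != nullptr;
}

// Hypothetical stand-ins for the two render paths described above.
static void DrawFast()       { /* optimized path: the easy 20% on NVidia */ }
static void DrawCompatible() { /* less efficient, but works everywhere   */ }

void DrawScene()
{
    if (IsATIVendor())
        DrawCompatible(); // the "good" code just doesn't work here
    else
        DrawFast();
}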

Remember, I asked for development help countless times, but so far absolutely nobody has come forward, so please don't complain. This is just a hobby project and I sure won't invest any real money here.

Even more pathetic is that he closed the thread instead of continuing the discussion, which quite lowered my opinion of him and made him look more like a jerk who just wanted to bitch.

Graf Zahl said:

Remember, I asked for development help countless times, but so far absolutely nobody has come forward, so please don't complain.

Fair enough. I do think a lot of the performance problems on ATI cards could be avoided by not using so many advanced GL features, but it is your port after all, not mine. I'll just keep it in mind for future reference, though, that the ATI issues in GZDoom have more to do with the quality of the community surrounding the port than with the developer of the port himself.

John Smith said:

I do think a lot of the performance problems on ATI cards could be avoided by not using so many advanced GL features,



Actually, no. The benchmarks I did last year clearly indicated that this was not a problem at all. In fact, these tests showed that on HD4xxx and HD5xxx cards the performance was identical at low and high resolutions.

And forcing the engine into a low-end compatibility mode also had no effect on performance, leaving only one option:

The driver simply has problems with the way the data is processed. But the only way to change this is to throw all the code away and start over from scratch - which clearly is not an option because I have neither the time nor the motivation to do it. It's even doubtful that it would help. Doom ports generate a kind of geometry that modern drivers often are just not optimized for. Drivers are written to process huge batches of hundreds or thousands of vertices at once (like modern games use), but the demands of a Doom engine mean that each wall and sector is inevitably its own batch. I can't confirm this, but somebody once told me that ATI specifically has problems with such data. For small or medium-sized maps this normally won't show, but for the huge ZDoom monster maps that have been created it will easily kill performance if the driver has problems issuing tens of thousands of draw calls of at most 10 vertices each.
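
To illustrate the contrast being described, here is a rough sketch in legacy immediate-mode GL. The Wall struct and function names are invented for illustration; in a real port it is the per-wall texture binds and state changes that force the first pattern:

#include <GL/gl.h>
#include <vector>

struct Wall { float verts[4][3]; }; // one wall quad: 4 vertices, xyz each

// Doom-style submission: one tiny draw call per wall. On a huge map that
// means tens of thousands of calls moving a handful of vertices each --
// exactly the pattern the post says drivers are not optimized for.
void DrawWallsDoomStyle(const std::vector<Wall>& walls)
{
    for (const Wall& w : walls)
    {
        glBegin(GL_QUADS); // each quad is its own batch
        for (int i = 0; i < 4; ++i)
            glVertex3fv(w.verts[i]);
        glEnd();
    }
}

// What modern drivers are tuned for: one big vertex array, one draw call.
void DrawWallsBatched(const std::vector<Wall>& walls)
{
    std::vector<float> buf;
    buf.reserve(walls.size() * 4 * 3);
    for (const Wall& w : walls)
        for (int i = 0; i < 4; ++i)
            buf.insert(buf.end(), w.verts[i], w.verts[i] + 3);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, buf.data());
    glDrawArrays(GL_QUADS, 0, static_cast<GLsizei>(walls.size() * 4));
    glDisableClientState(GL_VERTEX_ARRAY);
}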


I'm just wondering: does GLBoom+ run better? It uses mainly the same methods to draw stuff as GZDoom, so it should suffer from some of the same problems.

PS.

Concerning the OP's graphics card: yes, it's medium range, and the X1000 series of cards seemed to perform almost on the same level as comparable NVidia cards. But the card is simply too weak for extremely highly detailed maps like TUTNT. Even my GF 8600 has some slight problems on these levels, so it's no surprise that older hardware is more affected. On my old GF 6800 system it's almost unplayable due to low frame rates.

Graf Zahl said:

I'm just wondering: does GLBoom+ run better? It uses mainly the same methods to draw stuff as GZDoom, so it should suffer from some of the same problems.

Much better; it runs Sunder maps 9 and 10 without frame rate drops. Those two maps are very convenient as a benchmark for Doom rendering engines, as they work on almost any port and are extremely hardware-intensive (as opposed to nuts, which is more of a thinker benchmark).

Gez said:

I don't get it, so I'm going to assume that it's a dig against Sapphire's layout.



I suspect it's the reflective floors that occur in some parts of the map. They can bring the engine down to a crawl on weaker hardware.

Gez said:

I don't get it, so I'm going to assume that it's a dig against Sapphire's layout.


It's because Sapphire is really quite flat throughout most of the level, and the map consists mostly of hallways, much like your average Wolf3D map. So, yeah, it's a "dig" at the layout.


I'm a nice guy... I try to say things nicely, and I'm not usually an ass, but you didn't take the hint the first time. You just happened to catch me on a bad day, and I'm pissed, so fuck you man... It's your code, Graf. I would have helped you if you hadn't been a stuck-up, pompous ass about things, but that's not the case. Go Fix Your C++ Code before you even start to worry about the OpenGL implementation. You obviously don't even understand proper memory management.

Because I am a nice guy, I'll give you a clue.
When you allocate memory like this:

BYTE * foo = new BYTE[numBytes];

you MUST de-allocate it like this:

delete [] foo;

NOT like this:

delete foo;

This is C++ 101, day 2, Dynamic Memory Basics...
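
Spelled out as a minimal, self-contained example (BYTE here is just a local typedef standing in for the one used above):

#include <cstddef>

typedef unsigned char BYTE; // stand-in for the engine's BYTE typedef

void Example(std::size_t numBytes)
{
    BYTE* foo = new BYTE[numBytes]; // array form of new...

    // ... use foo ...

    delete[] foo;  // ...must be matched by the array form of delete.
    // delete foo; // WRONG: mismatched form, undefined behavior; this is
                   // what Valgrind reports as
                   // "Mismatched free() / delete / delete []".
}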

Graf Zahl said:

... and for no other reason than that I don't have one. I simply can't [ take accountability for my fucked up code, or listen to the suggestions and complaints of others, or even actually run basic tests on my own code ], especially if their drivers refuse to handle [ my totally fucked up and untested code that leaks memory like a sieve despite all of my naive attempts at ] code optimization I added [ despite the fact that my C++ code is utterly fucked up regardless of the OpenGL implementation ]. There are 3 or 4 special cases in the code that revert to less efficient methods on ATI because the good ones just don't work - and [ I Have No Idea why, because honestly, I didn't test shit, but I say I do all the time. I can't even figure out how to use the debug build of the OpenGL Reference Renderer to help me find my bugs! Besides, the strict ATI drivers won't let me get away with this shit, while the more lenient NVidia driver ] provides an easy 20% speed increase on NVidia.

Remember, I asked for development help countless times, but so far absolutely nobody has come forward, so please don't complain [ I'm too stuck up to Care! ]. This is just a hobby project and I sure won't invest any real money [ or even three seconds to type
"valgrind ./gzdoom 2> allYourMemLeaksAreBelongToGZDoom.txt"
and find out where my real problems are
].


[] Fixed that for you.


Graf Zahl said:

Even more pathetic is that he closed the thread instead of continuing the discussion, which quite lowered my opinion of him and made him look more like a jerk who just wanted to bitch.


Yeah, I closed it because it was just you spreading more of your ATI FUD (fear, uncertainty, and doubt). I told you to PM me or any other dev/admin if you wanted to unlock it and discuss it further.

It doesn't take a rocket scientist to figure out that you're just pissing into the wind, with a blindfold on, saying "I Can't Hear You!" while all of your users are crying out, "Please Stop Pissing On Us, Graf!"

I'm honestly a nice guy, but I can only put up with so much bullshit before I snap. You're a dick to me, and everyone else. I hope someday you learn to listen to your users (Like I Do).

For the record: I plan on adding my own MIT-licensed (open- and closed-source compatible) OpenGL renderer to the Doom community, but honestly, the fastest route would be for us to just grab someone else's renderer that's already done and doesn't leak memory like a sieve...

Valgrind Output: http://pastebin.org/149787

USE VALGRIND! WHAT THE FUCK ARE YOU ON MAN?!?!

Like I said, I'm a nice guy... So, I'm sorry if I offended you, but there's obviously no other way to reason with you.

Until those memory leaks are fixed, you can only guess at what the real problems are. (Any C++ coders out there will immediately realize that I've just done the nicest thing in the whole world: shown him where his bugs are!)


I agree with Vortex. I don't have any special graphics card and I don't have a great computer, but I can run UT2004 on OpenGL with medium settings better than I can run GZDoom with low settings. I have an Intel graphics card (about 8 years old). Seriously man, I can name a thousand other things I can run with OpenGL on this hunk of junk.

If you don't want criticism or suggestions, why would you release the engine in the first place?

It's not like we want to tear you down just because "you have a widely used port with a widely used renderer". It's hard to take criticism, but it's not hard to just accept the fact that you're the only person to blame for it.

VortexCortex said:

For the record: I plan on adding my own MIT-licensed (open- and closed-source compatible) OpenGL renderer to the Doom community, but honestly, the fastest route would be for us to just grab someone else's renderer that's already done and doesn't leak memory like a sieve...

And why aren't you doing either at the moment? You've got a rich choice for the latter: ZDoomGL (HOMs galore in TNT MAP02), GLBoom (incompatible license), EDGE (incompatible license), Doomsday (incompatible license)... What are you waiting for?

:P

About that Valgrind output: I found that the mismatched new[]/delete errors were all in the same two places. Then I looked at the other warnings, but they are in files that do not exist (such as gl_renderstruct.h). So I'm guessing that this is Valgrind's output for Skulltag. There are like 640 revisions between your codebase and GZDoom now. Not sure how relevant most of these warnings are now.

scalliano said:

It's a sad day when CODERS start beefing...

Especially when it's BAADBEEFing.

Doomsphere said:

I agree with Vortex. I don't have any special graphics card and I don't have a great computer, but I can run UT2004 on OpenGL with medium settings better than I can run GZDoom with low settings. I have an Intel graphics card (about 8 years old). Seriously man, I can name a thousand other things I can run with OpenGL on this hunk of junk.

Then stop using it. It's crap, okay, great.

Gez said:

Then stop using it. It's crap, okay, great.


I can't use it; it gives me a BSOD every second time I run it on my computer. I'm sorry you get all butthurt if someone adds their two cents to a regular conversation.

Gez said:

About that Valgrind output: I found that the mismatched new[]/delete errors were all in the same two places. Then I looked at the other warnings, but they are in files that do not exist (such as gl_renderstruct.h). So I'm guessing that this is Valgrind's output for Skulltag. There are like 640 revisions between your codebase and GZDoom now. Not sure how relevant most of these warnings are now.



In short: the whole thing is useless. There was actually a memory leak in GZDoom but it was completely unrelated to this and easily plugged once I spotted it.

But what can you do? Some people apparently only feel good when they can attack others, even if it's completely groundless.

@Doomsphere: That card is old garbage, so the performance you get is what's to be expected. I'm surprised that UT2004 runs on it at all. Are you sure it's really using OpenGL and not just the internal D3D fallback (which is much better implemented in Intel's drivers)?

Doomsphere said:

I can't use it; it gives me a BSOD every second time I run it on my computer.

Thank the driver developers at Intel for that.

Graf Zahl said:

But what can you do? Some people apparently only feel good when they can attack others, even if it's completely groundless.


heh

Graf Zahl said:

Even more pathetic is that he closed the thread instead of continuing the discussion, which quite lowered my opinion of him and made him look more like a jerk who just wanted to bitch.

entryway said:

lol yeah, removal of some non-critical memory leaks must help some ATI cards with GL_NICEST fog.


The thread linked earlier has his thoughts on the renderer itself, underneath the spoiler tag.

Graf Zahl said:

But what can you do? Some people apparently only feel good when they can attack others, even if it's completely groundless.


Oh hush. I've dealt with VortexCortex many times before; he is quite pleasant to work with and pretty much the most universally well-liked person in the Skulltag community now that Rivecoder is gone. On the other hand, I'm sure there are many who appreciate the work you've done, but _like_ you? And then you honestly wonder why nobody would step forward to volunteer to help you out?


He's blabbering a lot of stuff that you hear everywhere, but the one thing he hasn't done is actually experiment with it.

I find it pathetic that he claims to know everything but dismisses other people's experiments out of hand, and then in the end forces an end to the discussion because he doesn't like it.

Well, let's see how his own renderer will fare if he ports it to Doom. I can outright promise some unpleasant surprises, because Doom is not like all those nifty newer games modern hardware is geared toward.

Graf Zahl said:

He's blabbering a lot of stuff that you hear everywhere, but the one thing he hasn't done is actually experiment with it.

I find it pathetic that he claims to know everything but dismisses other people's experiments out of hand, and then in the end forces an end to the discussion because he doesn't like it.

Well, let's see how his own renderer will fare if he ports it to Doom. I can outright promise some unpleasant surprises, because Doom is not like all those nifty newer games modern hardware is geared toward.

Talking to yourself?

This topic is now closed to further replies.