invictius

gzdoom on unsupported hardware


I ran an invasion map on a 1 GHz Celeron system (it's actually from 2009, an ex-POS system, complete with DDR2 RAM!) and an Intel GMA 935, just to see what would happen. Amazingly, it ran fairly well once dynamic lights were turned down. But this:

[screenshot: the menus render with garbled, corrupted text]



The graphics were fine, just the text. I know roughly why it looks like that (unsupported hardware, not up to the OpenGL spec that the current build of GZDoom was designed for), but I'd like the technical details. Why would there be graphical corruption of the text but nothing outside of that?


I can't help you, but kudos for running GZDoom on a GMA 935. I think that's even more impressive than me running it on a GMA 3150 (via some Linux hack that let me run it despite not meeting the OpenGL requirements).


There seems to be some fuckup with the way menu overlays are drawn, not just the text.

E.g. that "Options" menu title doesn't look too good either, and I'm not sure whether ZDoom uses the IWAD graphic for that or composes it out of individual letters. Probably all menu items, including the main menu logo, are damaged somewhat.

All you can do is try different drivers for that old GMA card, or toy with GL options such as bit depth, alpha, etc.


Probably a shader not getting turned off when it's supposed to be; just hazarding a wild guess, though. Seems to be a common thing.


I'm trying to find something (that doesn't have thousands of monsters) to bring this machine to its knees under the software renderer. Even 034s runs at about 18 fps in the final area at 320x200. Maybe the latest ZDoom community map?


This happens to me if I set the texture format to compr_rgba or s3tc_dxt, and it's not just the menu.
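For reference, the setting being toggled here is GZDoom's texture-format console variable. I'm going from memory on the exact CVAR name and value spellings, so treat them as assumptions rather than gospel:

```
gl_texture_format rgba8
gl_texture_format compr_rgba
gl_texture_format s3tc_dxt1
```

rgba8 (the assumed uncompressed default) should render fine; if only the compressed formats reproduce the corruption, that points at the driver's texture-compression path rather than GZDoom itself.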

Quasar said:

Probably a shader not getting turned off when it's supposed to be; just hazarding a wild guess, though. Seems to be a common thing.



On hardware that doesn't even run shaders? It's fixed-function only on such old cards, except for a handful of effects, and the 2D stuff is not among them.

This almost looks like it tries to fetch texels from the wrong mipmap level, although I'd have to wonder how that can happen without mipmaps.
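For the curious, mipmap level selection boils down to taking log2 of the texel footprint a screen pixel covers. A toy sketch (the textbook formula, not actual driver code) shows how a bad footprint estimate lands on the wrong level:

```python
import math

def select_mip_level(texels_per_pixel, num_levels):
    # Textbook GL LOD pick: log2 of the texel footprint per screen
    # pixel, clamped to the available mipmap chain.
    lod = math.floor(math.log2(max(texels_per_pixel, 1.0)))
    return min(lod, num_levels - 1)

# 2D text blitted 1:1 covers exactly one texel per pixel, so a
# correct driver samples the full-size level 0.
correct = select_mip_level(1.0, 8)   # 0

# A driver that botches the derivative (say, 4 texels/pixel) fetches
# from level 2: a quarter-resolution image, i.e. garbled glyphs.
buggy = select_mip_level(4.0, 8)     # 2
```

Of course, as noted, this shouldn't happen at all if the 2D path uploads its textures without mipmaps, which is what makes the symptom odd.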


If you want my opinion, though: with such sub-par graphics cards, any form of "hardware acceleration" might actually be worse than pure software rendering. The "GPU" does so little by itself that it may end up offloading a lot of work to the CPU and "implementing" many OpenGL features in pure software anyway, which, ironically, is exactly what you were trying to avoid.

Even worse, Intel's stuff only ever uses shared memory, so you're basically making a shitty GPU steal memory bandwidth and CPU cycles from the main system, on top of any purely graphical issues you may have with it. Don't be surprised if even relatively uncomplicated maps bring the system to its knees fairly quickly.


My academic curiosity would very much like to see benchmarks for that scenario. My gut instinct, however, is that the GPU path would win in any case.


It's already well-known that problematic OpenGL drivers can result in the GL renderer "losing" to the software one, especially when geometry or scene description pushing becomes a significant portion of the total data moved. And that's with a real GPU with its own onboard memory and a proper AGP/PCIe bus connecting it to the main system. The GMA 9x0/9x5 series don't even have a 3D pipeline. ANY form of 3D pipeline. Just a 2D one. So at best it's like emulating OpenGL effects in software.

Do you think the people who wrote the drivers reeeeeeeally cared about performance and accuracy in games when they wrote them?

Edit: my curiosity made me search the forums for OpenGL and GMA. There's a wealth of information already written, and it all points in the same direction: the GMA is basically a 2D framebuffer with a bunch of drivers giving it a sort of "courtesy support" for features like DX10 and OpenGL 2.1, but with no actual hardware to accelerate anything on its own. What can you expect after that?


Yes, but the key here is scene complexity. In the general case, DOOM maps are so simple that pushing their geometry to the GPU is basically negligible in terms of overall execution time. The main problem as far as DOOM rendering is concerned is the hyper-dynamic nature of the world, meaning the geometry typically has to be updated every frame. That, however, is another matter, and isn't strictly related to raster performance.

If transformation et al. is indeed done on the CPU side, then the "video card" becomes nothing more than a glorified framebuffer plus video signal encoder.

Consider the case of a software renderer, which must first copy its local framebuffer into the GPU's framebuffer before it can be encoded to video.

Are you suggesting that, in a geometrically primitive scene, the software renderer can both beat the GPU rasterization and also essentially make the cost of the copy-to-GPU-framebuffer disappear?

DaniJ said:

Consider the case of a software renderer, which must first copy its local framebuffer into the GPU's framebuffer before it can be encoded to video.


This isn't a necessary cost: most modern OSes, graphics APIs, and drivers allow allocating drawing memory directly on the framebuffer. Even Java can do that.

DaniJ said:

Are you suggesting that, in a geometrically primitive scene, the software renderer can both beat the GPU rasterization and also essentially make the cost of the copy-to-GPU-framebuffer disappear?


With a true hardware OpenGL renderer, perhaps only in very pathological scenes, or with particularly inefficient drivers. But with no 3D pipeline whatsoever, there's really no difference between a simulated OpenGL (by the drivers) and a sophisticated software renderer (by the application itself). It will all boil down to drawing a 2D, CPU-rendered framebuffer in the end, so how can a GPU that does nothing by itself be of any advantage here?

The software renderer at least has the advantage that a pixel written by the CPU goes directly to the screen, and that's the end of it, provided memory was allocated directly on the video card by the OS/drivers. This is also the only case where shared memory might be a tad faster than even the best dedicated bus (think about it: no memory contention, as pixel writes never need to leave main memory for a different bus).

With a "soft-GL" approach, however, you have the overhead of emulating all the geometry pushing and all the texture copying, plus doing extra CPU-side transformations, plus the overhead of calling all those extra GL API/driver functions.

Even if you try to act like a smartass and simply render the software renderer's output into a single texture, the "OpenGL" approach will consume double the (shared) memory bandwidth, and double the CPU time, to handle both the user-side rendering and the "GL texture rendering" to the screen. Trying to run a more sophisticated GL engine with geometry, polygons, textures, etc. can't be more efficient than that when it's still the CPU that has to do everything.
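The double-bandwidth claim is easy to put in rough numbers. A back-of-envelope sketch, assuming a 32-bit frame, one texture upload per frame, and no overdraw (all assumptions, not measurements):

```python
def frame_bytes(width, height, bytes_per_pixel=4):
    # one full pass over a 32-bit frame
    return width * height * bytes_per_pixel

W, H = 640, 480

# Direct path: the CPU writes the finished frame once into the
# memory-mapped video framebuffer.
direct = frame_bytes(W, H)       # 1,228,800 bytes/frame

# "Soft-GL" path: the same frame is first uploaded as a GL texture,
# then the driver rasterizes it to the screen: two passes over the
# same pixels across the same shared-memory bus.
soft_gl = 2 * frame_bytes(W, H)  # 2,457,600 bytes/frame
```

At 35 fps that's roughly an extra 40 MB/s of shared-memory traffic for nothing, before counting the CPU time spent in the GL emulation itself.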

It's like having OpenGL... without the OpenGL benefits. Might as well write your own "GL-like" software renderer at that point.

Maes said:

This isn't a necessary cost: most modern OSes, graphics APIs, and drivers allow allocating drawing memory directly on the framebuffer. Even Java can do that.

No, it's indeed a necessary cost. Regardless of whether the video framebuffer is memory-mapped to a CPU-accessible region, even in the best case you can't eliminate the per-pixel copy from the local double-buffered framebuffer.

The difference between a software renderer and a co-opted GPU renderer with soft-implemented OpenGL is that the latter will automatically take advantage of the hardware in ways that the former won't without dedicated effort on the application side (such as only updating dirty regions of the buffer, much as DOOM's original software renderer did).
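The dirty-region trick mentioned here is essentially what DOOM's own software renderer did for the status bar: touch only the pixels that changed. A minimal sketch with flat pixel buffers (illustrative only, not anyone's actual code):

```python
def blit_dirty(src, dst, dirty_rects, width):
    # Copy only the spans inside the dirty rectangles from the back
    # buffer (src) into the mapped framebuffer (dst). Both are flat
    # lists of pixels, `width` pixels per row. Returns pixels copied.
    copied = 0
    for x, y, w, h in dirty_rects:
        for row in range(y, y + h):
            start = row * width + x
            dst[start:start + w] = src[start:start + w]
            copied += w
    return copied
```

A full-screen 3D view dirties everything every frame, of course, which is why this only ever pays off for mostly static overlays.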

Maes said:

The software renderer at least has the advantage that a pixel written by the CPU goes directly to the screen, and that's the end of it

In this context, "screen" actually means a video-system-level framebuffer. I don't know of any modern OS in which a user application can encode and push individual pixels directly to the output video stream in a manner that respects vsync.

I totally get your reasoning, and on paper at least it makes a degree of sense. However, with no benchmarks to prove otherwise, I'm still inclined to think the GPU-with-emulated-OpenGL method will be faster.

DaniJ said:

The difference between a software renderer and a co-opted GPU renderer with soft-implemented OpenGL is that the latter will automatically take advantage of the hardware in ways that the former won't without dedicated effort on the application side (such as only updating dirty regions of the buffer, much as DOOM's original software renderer did).


Another concept that sounds good and solid on paper, but what are the chances of that happening with the drivers for a "graphics card" that was basically designed to be as little of a graphics card as possible, short of removing the actual screen-driving circuitry?

Do you really expect the drivers for something like that to be super-optimized, super-smart, demoscene-quality works of art, able to hide or even overcome the inherent limitations of the "hardware" in the way you described? Could it be that they took every possible shortcut on the hardware but worked really hard to produce good drivers?


Certainly a valid counterargument. However, where do we go from here? I personally have little interest in doing the necessary research myself. At this impasse, it's probably best to just move on?


Maes, I can picture you in a mancave with stacks upon stacks of long-obsolete hardware in every direction, in a scene oddly similar to Tony Montana and his piles of coke.

invictius said:

Maes, I can picture you in a mancave with stacks upon stacks of long-obsolete hardware in every direction, in a scene oddly similar to Tony Montana and his piles of coke.

Not coincidentally, that's also how I picture the average Greek server farm to be.


Hey, I'm not the one trying to play GZDoom on fucking Intel GMA here ;-)

Oh, and DaniJ, as for whether a GPU can actually perform worse than pure software at the same rendering quality... look no further than the S3 ViRGE, the world's first graphics decelerator ;-) So, at least in principle, it's perfectly possible (not that you'd ever want it to be...)


Such ancient (and terrible) hardware wasn't even a consideration when I made my earlier comments. It's certainly possible for a modern driver to be written that poorly, but frankly I prefer to assume at least a certain degree of competency until proven otherwise. It just depresses me too much to think anything else.

Edit: For a little context: I had considerable experience with solutions of that ilk, from that era, when working as a radiography/imaging engineer in the healthcare sector. For the most part this was supporting applications whose rendering was mostly 2D, and under DOS at that. Wow, feeling old now...


Well, the GMA 935 is even less functional than the S3. In that sense it cannot "rival" it in deceleration: since it doesn't actually have any 3D hardware at all, there are no opportunities to use it badly. In the end, it's purely a software rendering contest ;-)


You are comparing the performance of a true 3D T&L pipeline with that of a dedicated DOOM software renderer. What's the point in that, when the latter can't produce the same results as the former? Apples vs. oranges.

Gez said:

Not coincidentally, that's also how I picture the average Greek server farm to be.

Internet-technology-wise, you could do a lot worse than Greece; better luck next time with your random country name-dropping.

DaniJ said:

You are comparing the performance of a true 3D T&L pipeline with that of a dedicated DOOM software renderer. What's the point in that, when the latter can't produce the same results as the former? Apples vs. oranges.


True, but which one is supposed to be the "true T&L rendering pipeline" here? The Intel GMA?


It's "true" in the sense that it is capable of rasterizing arbitrary 3D geometry with a proper view-space projection. The DOOM software renderer cannot do that, so any comparison of performance on those terms is completely pointless, regardless of how well that pipeline is implemented in practice.

VGA said:

Internet-technology-wise, you could do a lot worse than Greece; better luck next time with your random country name-dropping.

It wasn't random; Maes is Greek.

DaniJ said:

It's "true" in the sense that it is capable of rasterizing arbitrary 3D geometry with a proper view-space projection. The DOOM software renderer cannot do that, so any comparison of performance on those terms is completely pointless, regardless of how well that pipeline is implemented in practice.


Well, then Microsoft's DX10 "dream" can be considered fulfilled. One of DX10's "novelties", if you recall, was that display drivers were required to provide software fallback implementations for any features lacking hardware support (something that wasn't necessarily the norm before), and this must have trickled down to OpenGL too. In theory, you won't find a video card with a DX10 driver refusing to run something because of unsupported features (unlike DX9 drivers). In other words, "soft GPUs" have been a reality in every sense of the word at least since DX10, and they're also part of what allowed the archaic GMA chipsets to be labelled "Vista ready". Heh.


This has very little to do with Microsoft, or indeed the DirectX 10 drive at the OS level to persuade XP users to upgrade. Remember that we're talking about OpenGL here, which has none of those market forces as a direct influence. Furthermore, the same is true of OpenGL on platforms that have absolutely nothing to do with either Windows or Microsoft.

Even in that context, you are suggesting that a user application can implement a functionally equivalent 3D T&L pipeline that beats the performance of a vendor-designed driver for their own hardware, which has "direct" access to said hardware. This user application would not only have to implement the pipeline and better it from a performance perspective, but also simultaneously circumvent the bottlenecks inherent in the API it has to negotiate to do so.

Frankly, I don't even know what you're getting at now.

DaniJ said:

Even in that context, you are suggesting that a user application can implement a functionally equivalent 3D T&L pipeline that beats the performance of a vendor-designed driver for their own hardware, which has "direct" access to said hardware.


*sigh* Only that the "hardware" is nothing more than a framebuffer in that case, with the specs in hand. It certainly is possible to make a better renderer in pure software under those conditions.

DaniJ said:

Frankly, I don't even know what you're getting at now.


I think you do, but you don't like it. There's no chickening out of this so easily for you, buddy ;-)


Fine. If you think you can do better then prove it.

And no, I really don't know what you are talking about so please spell it out to me.


Am I hearing clucking in here? Cluck..cluck..cluck... *with my best Duke Nukem 3D voice*

