GooberMan

Announcing Calamity - an in-development rendering library


Calamity - An in-development hardware-accelerated "2.5D" rendering library

I figure it's about time I threw my hat well and truly into this arena.

tl;dr
I've long considered the idea that you can't replicate Doom's renderer in hardware to be somewhat bullshit. I don't blame anyone for having that opinion. The rendering techniques used in Doom source ports are not at all what a modern graphics programmer would use when attempting to replicate Doom's renderer with OpenGL or D3D.

Rather than create an entirely new source port that forces people to use it to get the benefits, I am instead creating a cross-platform library that any program can use to render a Doom scene. Source ports, level editors, other utilities, etc. can link to this library and let it worry about how to render the scene correctly.

With the tl;dr out of the way, let's get to specifics.

What is Calamity
Calamity is a cross-platform cross-API library that aims to render a vanilla-accurate Doom scene using current graphics APIs.

Vanilla accurate?
Yes, Vanilla accurate. The lighting model, invisibility effects, and even bugs such as slime trails will be reproduced with this library. As I want the library to be as flexible as possible, these will all be configurable. With all of the glitches turned on, the aim is to be indistinguishable from the Vanilla renderer at the original 320x200 resolution.

Why are you attempting this?
We live in a future where 4K displays are affordable and computers more powerful than the machines Doom was developed on live in our pockets all day. In both of these cases - rendering at ultra-high resolutions, and rendering on hardware that needs to operate in a low-power state - the software Doom renderer is not the best way to do things.

If you take an overly optimistic estimate that just drawing an 8-bit pixel to the screen using the software renderer (ie ignoring BSP traversal) takes 10 cycles, then a 1920x1080 buffer takes 20,736,000 cycles per frame. To do that 60 times a second requires 1,244,160,000 cycles, or about 1.24 gigahertz, dedicated to nothing but filling pixels. Multiply that by 4 and that's what it takes to fill a 4K display's backbuffer 60 times a second. Phones increasingly have 1080p displays, and taking up that much processor time is a surefire way to kill your battery.
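
As a sanity check, here's that arithmetic as a tiny program (the 10-cycles-per-pixel figure is the same deliberately optimistic assumption as above):

```cpp
// Back-of-envelope fill-rate estimate, assuming a (deliberately
// optimistic) 10 cycles per software-rendered pixel, BSP traversal ignored.
#include <cstdio>

int main() {
    const unsigned long long cyclesPerPixel = 10;
    const unsigned long long pixels1080p = 1920ull * 1080ull;  // 2,073,600 px
    const unsigned long long perFrame  = pixels1080p * cyclesPerPixel;
    const unsigned long long perSecond = perFrame * 60;        // at 60 fps

    std::printf("cycles per frame : %llu\n", perFrame);        // 20,736,000
    std::printf("cycles per second: %llu (~%.2f GHz)\n",
                perSecond, perSecond / 1e9);                   // ~1.24 GHz
    std::printf("4K is 4x that    : ~%.2f GHz\n", 4 * perSecond / 1e9);
}
```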

3D hardware is far more efficient at rendering graphics than CPU code. So let's leverage that more effectively going into the future.

So this is a 2.5D renderer?
That's not an accurate term. This is rendering in full 3D. How it does so is where it differs.

What does it do differently?
The first big difference is that it is not a forward-rendered pipeline. A form of deferred rendering will be used to achieve all the effects required. I will explain each effect in detail when the library is released.
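
No pipeline details are public yet, so purely as an illustration of why deferred rendering suits these effects, here's a hypothetical G-buffer texel for palette-based rendering (my sketch, not Calamity's actual format):

```cpp
#include <cstdint>

// Hypothetical G-buffer texel for Doom-style deferred rendering; a sketch
// only, not Calamity's actual layout. The geometry pass writes which texel
// was hit and how lit it is; a fullscreen resolve pass then performs the
// colormap/palette lookup to produce the final colour.
struct GBufferTexel {
    std::uint8_t paletteIndex; // index into PLAYPAL (the 256-colour palette)
    std::uint8_t lightLevel;   // 0-31: which of the 32 COLORMAP tables to use
    // depth stays in the hardware depth buffer
};
```

Deferring the palette index and light level like this turns effects such as light diminishing and invisibility into cheap per-pixel lookups in a single resolve pass, rather than per-surface special cases in a forward pipeline.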

So what APIs are supported?
Currently:

Direct3D 11

With minimal effort:

OpenGL 2
OpenGLES 2

Planned:

Direct3D 12
Vulkan

And operating systems?
Currently:

Windows 7

Once I have a stable renderer:

Linux (Ubuntu/Mint is the environment I have set up at home)

Planned:

OSX
iOS
Android

Why are you developing for D3D11 and Win7 first? No one does that.
Quite simply, I didn't have D3D11 code at home. I have code sitting about for D3D9, GL2 and GLES2; bringing that across to this library won't take a whole lot of effort, as I've written my D3D11 code with the quirks/limitations of those APIs in mind.

(Also, the development tools for D3D11 are far superior and I can get a solid reference renderer happening to compare against for other APIs and platforms).

You said this is in a raw state. How raw?
Rather. I was planning on announcing this earlier, but I've spent most of my time on this over the last few weeks just setting up the background stuff such as asset crunching and D3D11. I've only actually been able to see rendering effects on my monitor over the last couple of days.

So it's a while away from being released?
Yes.

You sound like you have a plan though?
Yes.

The lighting model is already up and running. Invisibility effects are trivial thanks to the deferred pipeline. Flats, however, are something I have a few theories about that I need to experiment with. Likewise sprites, and their ability to overdraw on top of walls rendered in front of them.
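
For anyone curious what "the lighting model" actually entails, here's a heavily simplified sketch of vanilla Doom's distance-based colormap selection, loosely adapted from r_main.c of the id source release (fixed-point details elided; this is background on the effect, not Calamity's code):

```cpp
#include <algorithm>

// Constants from the vanilla renderer (r_main.c / r_defs.h).
constexpr int LIGHTLEVELS   = 16; // sector light (0-255) bucketed into 16 levels
constexpr int LIGHTSEGSHIFT = 4;  // 256 >> 4 == 16 buckets
constexpr int NUMCOLORMAPS  = 32; // colormap 0 = brightest, 31 = black

// lightnum: sectorLight >> LIGHTSEGSHIFT (walls also get +/-1 "fake
// contrast" depending on orientation); scale: projected size of the wall
// column, which grows as the wall gets closer to the viewer.
int PickColormap(int lightnum, int scale) {
    lightnum = std::clamp(lightnum, 0, LIGHTLEVELS - 1);
    const int startmap =
        ((LIGHTLEVELS - 1 - lightnum) * 2) * NUMCOLORMAPS / LIGHTLEVELS;
    const int level = startmap - scale / 2; // DISTMAP == 2 in vanilla
    return std::clamp(level, 0, NUMCOLORMAPS - 1);
}
```

A hardware version would evaluate something equivalent per pixel during the deferred resolve, selecting a colormap index instead of shading a colour directly.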

When I have those things running in a meaningful manner, I will make an initial release.

Should I immediately demand my favourite source port author use this?
Fuck no. In fact, don't badger them. Ever.

As an example, I have spoken to Fraggle about this project purely to get information on the software renderer, and while he's keen to see how it turns out, that in no way indicates that he'll replace the GL capabilities in Chocolate Doom with this renderer when it's up to snuff (and in my opinion there's no way in hell that he should). I've actually forked Chocolate Doom (which I'm calling Rum and Raisin Doom) for the sole purpose of showing by example how to integrate the renderer.

It is solely at a source port author's discretion whether they take on the renderer. If you don't like that, then fork your favourite source port and integrate the renderer yourself.

Can you show us something then or is this all just talk?
This here is STARTAN2 being rendered with a modified shader to show how each of the 32 unique light levels affects it. Starting with a vertical spread:



And a horizontal spread:



Earlier tonight, I took a screenshot of current rendering progress on E1M8, rendering to a native buffer of 1280x800 (which, looking at it, suggests I've flipped the lighting logic so it gets lighter in the distance):



A similar scene from Chocolate Doom for comparison:



EDIT: well here's a bonus image: the lighting levels represented in grayscale.

Linguica said:

Neat. If you haven't checked it out, https://github.com/cristicbz/rust-doom is a hardware-accelerated renderer that is very accurate to the original software mode as well.

I have seen that before. Needless to say, I'm aiming to go the whole way with this library (even including a shearing style of free look, if you desire such a thing). The lighting model is really the easiest part, which is why I got it up and running so quickly.

For slime trails, I'm likely going to eschew the traditional wisdom of using GL nodes and use the original segs instead. The exact method I use will depend on how well my experiments go.


Very cool stuff! More accurate rendering in GL would be a very good thing. I'd be particularly interested to see this used as a base for further graphical enhancements, to combine the classic Doom aesthetic with modern effects, additional palettes, and so on.


Do software renderers work by drawing to a texture or buffer each frame, kind of like the original game did, except that it sent the info to video RAM directly? Brute-forcing their way pixel by pixel, frame by frame?

And do hardware renderers create a 3D scene each frame, using draw calls to draw shapes and so on?

Would these statements be considered correct, in layman's terms? If not, please explain.

GooberMan said:

even including a shearing style of free look if you desire such a thing

I honestly don't think you can get all the rendering hacks to work perfectly without shearing, so I think "desire" is putting it lightly. However, I'm prepared to be corrected.

Ever since I saw SVE doing good sprite clipping in GL, I've been pretty convinced it's possible to do this; it just needs shearing to complete things.

andrewj said:

For a library, it is important to mention the programming language. Is it C++?

It's actually not that important; what matters is how the functions you link to are defined, and those functions are defined to be compatible with the C ABI. Beyond that, it really doesn't matter. I could have programmed it in D - each of the platforms I'm targeting has a D compiler, and D allows you to export functions with C linkage.

Still, the internals are C++. While programming in D is something I'd rather do (I can rant about how terrible C++ is for many things these days), I know many tricks to get C++ performing well, and since there will be a code release, other people in this community have a better chance of reading and branching/maintaining it.
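
As a concrete illustration of the C-ABI point (names here are hypothetical, not Calamity's real API), the pattern looks something like this: C++ internals hidden behind an opaque handle, exported with C linkage so anything that can call C can link against it:

```cpp
#include <cstdio>

namespace detail {
    struct Renderer { int frame = 0; };  // C++ internals, invisible to callers
}

extern "C" {
    // Opaque handle: callers only ever see a forward-declared struct.
    typedef struct CalamityContext CalamityContext;

    CalamityContext* Calamity_Create(void) {
        return reinterpret_cast<CalamityContext*>(new detail::Renderer());
    }
    void Calamity_RenderFrame(CalamityContext* ctx) {
        auto* r = reinterpret_cast<detail::Renderer*>(ctx);
        std::printf("rendering frame %d\n", r->frame++); // stand-in for real work
    }
    void Calamity_Destroy(CalamityContext* ctx) {
        delete reinterpret_cast<detail::Renderer*>(ctx);
    }
}
```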

Blzut3 said:

I honestly don't think you can get all the rendering hacks to work perfectly without shearing



Some of the more elaborate hacks are even broken just by enabling shearing.
Just have a look at Requiem's MAP31 in ZDoom. The effect completely breaks apart if you look down.


Speaking of: I'm slowly building up a test map of software-rendering abuses that I need to ensure work with the renderer. Is there a list of maps somewhere that include rendering oddities I need to take into account?


Those are some solid abuses right there. UNDERWTR in particular immediately breaks with shearing in ZDoom, and plain doesn't work in GZDoom (although it did show me exactly how the effect was achieved without needing to open the editor).

andrewj said:

For a library, it is important to mention the programming language. Is it C++?


Well, realistically, for something that's obviously meant to be cross-platform, high-performance and interoperable, what else could it be other than C or C++?


It's like you didn't read a thing I said.

tl;dr - the author of D, Walter Bright, wrote the first native C++ compiler (earlier "compilers" were translators that generated C code for existing C compilers), and part of his mission statement with D is to make it compatible with the C ABI. C++ virtual function tables are also supported.

Also tl;dr - We're shipping Quantum Break with code written in D, compiled with DMD, and linked with Microsoft's Xbox One toolchain. Needless to say, I have experience in this area.

Graf Zahl said:

Just have a look at Requiem's MAP31 in ZDoom. The effect completely breaks apart if you look down.

What exactly do you mean, the "fall through the ceiling" effect?

Graf Zahl said:

I mean the rendering part in that room-over-room area. Just look down and see it fall apart.


I expect it's the same effect as in that demo WAD Ling linked, considering Iikka also authored MAP31 of Requiem.


No Doom rendering engine is complete without Illusio-Pit (TM) compatibility, though.

Guest

If Direct3D runs faster, a lot faster, than OpenGL, then it's all Intel's fault for screwing up OpenGL performance.

I want to see Direct3D running the stupidly high-detailed maps!

GooberMan said:

I don't trust any language that uses whitespace to define program flow.

What about JavaScript? At least it doesn't use whitespace in that way ;)


Well, that's a long rant in itself. The tl;dr, though, is that I'm a multi-platform console programmer and performance is a key interest for me. asm.js is better in that department, so I'm glad they're making that push at least. As a language, though, JavaScript really is quite limiting. I was hoping Dart would gain some traction, but every browser vendor is invested in JavaScript, so realistically that one was always a long shot.


I embedded Python in Doom once with a view to rewriting it all, routine by routine. The embedding part, at least, was easy.


Ling's BSP hacks are changing the way I'm approaching this now. Previously, I was going to roll in a newer form of culling that would have attempted to render more detailed maps quicker, by completely ignoring the generated nodes and creating its own culling structures. That immediately ruled out those hacks working with that setup. However, as they clearly work in Vanilla Doom, ignoring that they exist would go against the goal of this project.

Thus, I'll start off using the Vanilla nodes for rendering. Optimisation for larger maps can come later. It's more work, but it's the Right Way(TM).
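
For reference, the traversal being kept is the classic front-to-back walk over the map's nodes; a rough sketch, loosely adapted from R_RenderBSPNode in r_bsp.c, with illustrative stand-in types (vanilla additionally culls the far child against the view via R_CheckBBox, elided here):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr std::uint16_t NF_SUBSECTOR = 0x8000; // high bit marks a leaf

struct Node {
    int x, y, dx, dy;          // partition line: origin plus direction
    std::uint16_t children[2]; // [0] = front of partition, [1] = back
};

std::vector<Node> g_nodes;     // loaded from the map's NODES lump

// Which side of the partition line is the point on? 0 = front, 1 = back.
// Sign of the cross product decides; vanilla's fixed-point math elided.
int PointOnSide(const Node& n, int px, int py) {
    const long long cross =
        (long long)(py - n.y) * n.dx - (long long)(px - n.x) * n.dy;
    return cross >= 0 ? 1 : 0;
}

void RenderSubsector(std::uint16_t num) {
    std::printf("emit segs for subsector %u\n", num); // stand-in for real work
}

// Near child first, then far child: geometry arrives front-to-back, which
// is what makes the vanilla nodes usable without extra culling structures.
void RenderBSPNode(std::uint16_t nodenum, int viewx, int viewy) {
    if (nodenum & NF_SUBSECTOR) {
        RenderSubsector(nodenum & ~NF_SUBSECTOR);
        return;
    }
    const Node& node = g_nodes[nodenum];
    const int side = PointOnSide(node, viewx, viewy);
    RenderBSPNode(node.children[side], viewx, viewy);
    RenderBSPNode(node.children[side ^ 1], viewx, viewy);
}
```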

Still, work will be slow. I'm heading into the last few months of development on Quantum Break, and I have no shortage of work to do there. The amount of time I can devote to this in between overtime and general unwinding (and it's still at a stage where I need to concentrate for a few hours at a time to get things working) isn't where I'd like it to be.

