Maes

Progress on Java Doom port.

I never stopped working on it all this time (some people who contacted me via PM know what I'm talking about) but I wanted to have some things straightened up before I delved deeper into the code of Doom (pun intended).

So far I have the defines, most structs, fixed-point arithmetic, WAD loading and part of the lump caching pinned down (essentially, most of the d_ and w_ functionality, and also most of r_).

WAD loading and lump searching work. As a workaround for C's raw struct reads, I implemented a "DoomReadableObject" interface, whereby every struct that can be read from disk implements its own read(DoomFile f) method, which takes care of reading each field separately from a "DoomFile".
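A minimal sketch of that self-reading struct idea (the interface and field names here are illustrative, not the port's actual code; a DataInputStream stands in for "DoomFile", and the field layout is Doom's on-disk THINGS entry: five little-endian 16-bit values):

```java
import java.io.*;

// Illustrative version of the "DoomReadableObject" idea from the post.
interface DoomReadable {
    void read(DataInputStream f) throws IOException;
}

// Doom's on-disk THINGS entry: five little-endian 16-bit fields.
class MapThing implements DoomReadable {
    short x, y, angle, type, options;

    // DataInputStream reads big-endian, so each field is byte-swapped.
    public void read(DataInputStream f) throws IOException {
        x = Short.reverseBytes(f.readShort());
        y = Short.reverseBytes(f.readShort());
        angle = Short.reverseBytes(f.readShort());
        type = Short.reverseBytes(f.readShort());
        options = Short.reverseBytes(f.readShort());
    }
}
```

The payoff of this pattern is that the lump-loading code never needs to know a struct's layout; it just hands the stream to each object and lets it read itself.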

DoomFile itself is a modification of Jake2's "QuakeFile", which used a similar approach. I modified it to handle certain cases like mass reads of array "structs", and to provide little-endian/big-endian conversion (for some reason, Jake2 didn't have to mess around with that). OK, so far I only have a working WAD loader... yeah, but one that follows Doom's code more closely in Java than ever before!

Lump caching is performed with ByteBuffer objects, from which raw structs can be read similarly to "DoomFiles". Most of these mechanisms didn't exist in older versions of the Sun Java SDK (prior to 1.5).
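The caching scheme could be sketched roughly like this (class and method names are my own illustration, not the port's actual code): cached lumps are raw little-endian ByteBuffers, and structs are read out of them field by field just as with DoomFile.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.HashMap;
import java.util.Map;

// Illustrative ByteBuffer-based lump cache, assuming the approach
// described in the post: cache the raw bytes, parse structs on demand.
class LumpCache {
    private final Map<String, ByteBuffer> cache = new HashMap<>();

    void put(String name, byte[] raw) {
        ByteBuffer b = ByteBuffer.wrap(raw);
        b.order(ByteOrder.LITTLE_ENDIAN); // WAD data is little-endian
        cache.put(name, b);
    }

    ByteBuffer get(String name) {
        ByteBuffer b = cache.get(name);
        if (b != null) b.rewind(); // re-read from the start each time
        return b;
    }
}
```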

When this thing has achieved enough critical mass, I will set up a SourceForge CVS and maybe a wiki. Who knows, this might be the next Jake2...

Scet

I'm surprised you added the R_ code so quickly, I thought that would be one of the hardest parts to translate considering all the pointer magic in the renderer.

Judging by your post it seems you're not going to go OO right away? Just make everything a struct and do tons of marshaling? I guess it works, but that's still kind of hackish.

Maes

I mostly pinned down the structs in r_; I will start working on the methods pretty soon.

I'm using a mixed approach so far: e.g. the wad.c and wad.h stuff became WadLoader.java, which I didn't make static, and the W_-prefixed functions became instance methods, so e.g. you do

WadLoader W = new WadLoader();
W.InitMultipleFiles("DOOM.WAD");
System.out.println(W.GetNumForName("E1M1"));
etc.

I went for strict type safety and an OO approach even for fixed_t: I prefer cluttering up the arithmetic syntax rather than making everything an int. Also, whenever I catch quadruples of fixed_t they become a new type (bbox etc.). Once again, in general there are lots of things that translate very well to pure OO, and some things, like pointer magic, which are a clusterfuck that I ultimately prefer ironing out (a lot of pointer magic was already present in the WAD handling code, which forced me to watch carefully for "pointers to arrays", pointer arithmetic, mallocs, struct reads etc.).
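A sketch of what a type-safe fixed_t wrapper might look like (the class itself is my own illustration, but the formulas match Doom's original FixedMul/FixedDiv: fixed_t is a 16.16 fixed-point number):

```java
// Illustrative type-safe fixed_t wrapper. Doom's fixed_t is 16.16
// fixed-point; FRACUNIT is 1.0 in that representation.
class FixedT {
    static final int FRACBITS = 16;
    static final int FRACUNIT = 1 << FRACBITS;

    final int val; // raw 16.16 bits

    FixedT(int raw) { val = raw; }

    // (a * b) >> 16, widened to long to avoid overflow, like FixedMul.
    FixedT mul(FixedT o) {
        return new FixedT((int) (((long) val * o.val) >> FRACBITS));
    }

    // (a << 16) / b, like FixedDiv's core case.
    FixedT div(FixedT o) {
        return new FixedT((int) (((long) val << FRACBITS) / o.val));
    }
}
```

This is exactly the "cluttered syntax" trade-off mentioned above: `a.mul(b)` instead of `(a * b) >> FRACBITS` on bare ints, in exchange for the compiler refusing to mix fixed_t with plain integers.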

Luckily, the Jake2 codebase has proven useful so far, as they had to address at least some of the same problems, and solved them. I already borrowed the "QuakeFile" (an extended RandomAccessFile, which I'll probably extend to ByteBuffers as well) and the self-loading data structs/objects. So yeah, there is some marshalling going on, but mostly when loading (and, in theory, saving) from/to disk or bytebuffered structures. Once they are turned into Java objects, the code almost writes itself :-p

As for the renderer, I intend to make it work on a normal byte array, so porting should be pretty straightforward (I already have experience in number-crunching and 1D-array manipulation in Java, so I don't expect it to be that hard).
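The byte-array framebuffer idea can be sketched like this (names are illustrative; the column loop is roughly what Doom's R_DrawColumn does, minus texture sampling): pixel (x, y) lives at y * WIDTH + x in one flat paletted array.

```java
// Illustrative flat-byte-array framebuffer, as described in the post.
class Screen {
    static final int WIDTH = 320, HEIGHT = 200;
    final byte[] buf = new byte[WIDTH * HEIGHT];

    // Fill a vertical span of one column with a palette index.
    // Stepping by WIDTH moves one row down, as in R_DrawColumn.
    void drawColumn(int x, int yTop, int yBottom, byte color) {
        int dest = yTop * WIDTH + x;
        for (int y = yTop; y <= yBottom; y++) {
            buf[dest] = color;
            dest += WIDTH;
        }
    }
}
```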

I don't plan on adding any platform-dependent OpenGL extensions for now, just use a good old plain Java2D canvas rendering and implement the classic renderer, as a proof of concept. The reason I stalled it is that so much stuff was dependent on i_, w_ and z_ that I needed to have at least some of their functionality in place.

For what regards caching: since it would be wasteful of memory to keep both a binary lump and a deserialized object around at the same time, I will read from the binary ByteBuffer lumps but drop the references to them afterwards, and let the gc do its magic. This is roughly equivalent to marking them as "used" by keeping a pointer to them in the original C code, without wasting 2x the necessary memory.

Maes

Scet said:

Judging by your post it seems you're not going to go OO right away? Just make everything a struct and do tons of marshaling? I guess it works, but that's still kind of hackish.


OK, just to clarify: making every struct an object and marshalling from/to the binary on-disk structures IS consistent with going OO, since there is a binary blob/Java object impedance mismatch to overcome. These are the "low-level objects". Sure, larger stuff like maps will have its own hierarchy of internal objects etc., the methods that formerly took an instance parameter will become instance methods, and local status variables will of course become instance-specific state. Luckily, Eclipse helps spot such potential almost in real time.

But it's not like I read everything into a binary lump and have getters/setters that work on raw binary blobs every time during runtime. I only do that during loading and de-caching, because I pretty much have to, unless I produce an XML/Java Objects version of Doom's resources and save that on disk ;-) Heh, can you imagine an XML-ized version of Doom's level format? :-D Perversely, with JAXB it should be trivial...

<DoomMap>
<lumpname>E1M1</lumpname>
<things>
 <thing x="100" y="-100" ... />
..
</things>
...
<sectors>
....
</sectors>
</DoomMap>
...and Doom just got more "enterprisey" :-p Anyone else dreaming of Base64-encoded textures? ;-)

I also try to avoid static classes (except for global defines and some special functions, just like Jake2) and rather have separate instances of e.g. the renderer, the wad loader, the map loader, the main engine etc. in the unlikely case somebody would like to have two renderers or two separate engines using the same resources.

Blzut3

Maes said:

Heh can you imagine an XML-ized version of Doom's level format?

Sure it'd be just like UDMF only harder to parse. :P

Maes said:

in the unlikely case somebody would like to have two renderers or two separate engines using the same resources.

Unlikely that we would want two renderers? What about split screen?

Maes

Blzut3 said:

Unlikely that we would want two renderers? What about split screen?


Or a renderer inside the renderer? ;-)

Maes said:

Or a renderer inside the renderer? ;-)

REAL Super Turbo Turkey Puncher arcade cabinets!!!

Graf Zahl

Maes said:

Or a renderer inside the renderer? ;-)



Like ZDoom's camera textures? It doesn't even need 2 active renderers for that but uses the same one twice in a row.

Maes

Graf Zahl said:

Like ZDoom's camera textures? It doesn't even need 2 active renderers for that but uses the same one twice in a row.


But what about taking advantage of multithreading? :-p

Graf Zahl


What would it help if the main renderer has to wait for the camera texture to be finished?

(Of course the code is far too bad for this anyway with its shitload of global variables and self modifying assembly code.)

Maes

Graf Zahl said:

What would it help if the main renderer has to wait for the camera texture to be finished?


Thread barriers could be set only at those points where camera textures are used, so you'd have worst and best case scenarios:

Best case: all camera textures are visible and "on top" of other stuff, so they can be all processed in parallel with the "main" renderer and drawn last, with minimal blocking delay and nearly 2x or more speedup, if there are multiple independent cam textures each with its own renderer and under the same conditions. The assumption here is that rendering each cam texture independently would take as much CPU time as the main renderer itself, and that you are running this on a multicore/multi-CPU/hyper-threaded architecture.

Worst case: camera textures are at least partially occluded and overlapping, with "main" stuff drawn in front of them. If you take this to the extreme (large numbers of partially visible cam textures) then yeah, you have no choice but to block the main renderer, render the cam texture, draw it, let the main renderer partially occlude it and move on etc. so at worst you have them running almost serially (depending on where the blocking is).

But even then you would have a good speedup with multiple cameras, assuming they can all be rendered independently of each other. Even if they have to be rendered at the far back of a scene, they could still all be computed in parallel with each other (this depends on whether you allow mutual/self camera feedback, or specify priorities and recursion rules between multiple cameras, e.g. camera 1 sees camera 2 but doesn't have to wait for it to be rendered, while 2 has to wait for 1).

Of course, all this is at a theoretical level.
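The best-case scheme above could be sketched with plain java.util.concurrent primitives (everything here is illustrative; nothing like this exists in the port): camera textures are kicked off first, the main view renders concurrently, and the only "barrier" is the Future.get() at the point where each finished texture is composited.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative parallel camera-texture scheme: block only where
// a texture is actually needed, not for the whole frame.
class ParallelCameras {
    static List<int[]> renderFrame(int cameras) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(cameras);
        List<Future<int[]>> pending = new ArrayList<>();

        // Kick off every independent camera texture first...
        for (int i = 0; i < cameras; i++) {
            final int cam = i;
            // A dummy int[] stands in for a rendered camera texture.
            pending.add(pool.submit(() -> new int[] { cam }));
        }

        // ...the main view would render here, concurrently with the cameras...

        // ...then block only at the composite points (the "thread barriers").
        List<int[]> textures = new ArrayList<>();
        for (Future<int[]> f : pending) textures.add(f.get());
        pool.shutdown();
        return textures;
    }
}
```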
