About beetlejoose

  • Rank
    Green Marine


  1. My depth-buffered sprites were an attempt to keep things scanline oriented - you turned it into full-fledged voxel sprites, Maes! What's wrong with discussing alternative methods for drawing the monsters if nobody has yet come up with a feasible way to do it in a scanline-oriented software renderer?
  2. The original designers of Doom based their decisions on the real constraints of the time. Those constraints no longer apply, thanks to advances in hardware. Obviously we are not talking about rewriting the software according to today's constraints, or we'd be using completely different methods! So we are talking about applying some artificial constraints, and we have to decide what they are. Does everything have to be done in software, with the monsters drawn as sprites? Can we relax the sprite limitation and use some other method, such as polygons, for drawing the monsters in software? Can we relax having to do everything in software and use some hardware acceleration? Or can we start again completely and write a new engine that can load the old maps, etc.?
These questions further depend on your motives. If it's all just for fun, then the questions above actually mean something. If you want to create software that's relevant today, then of course choosing to write a software renderer in the first place is not the best option, to say the least. Bear in mind that using modern methods doesn't mean you can't make it look retro. You could in principle completely change the algorithms used in Doom and use as much hardware acceleration as you want, yet still make it look like the original. The question of whether or not to use OpenGL then doesn't matter: OpenGL is a low-level graphics library and you can do almost whatever you want with it. If you want it to reproduce what Doom does, you can!
  3. With the constraint that you have to use 2D sprites, you have to manually create extra frames for the above and below views plus some number of intermediate angles. Nobody wants to draw enough images by hand to cover all of that for every monster. So it boils down to manually designing the smallest number of angles that are either acceptable on their own or can feed some sort of tweening method. Whatever method is used, it will take hundreds or thousands of man-hours for a result that falls far short of the quality attainable with 3D models.
  4. Yep, I can imagine that looking very half-assed indeed! Although I did imagine you would jump to the next rotation frame before artifacts like that became too apparent. Note that I don't think this would really be a good direction to go in - it's just lateral thinking. Sometimes considering the ridiculous can give you a spark of inspiration.
  5. Just an off-the-wall thought - and I don't know if this has been talked about before, and I expect there would be a lot of problems with it - but couldn't you create depth maps for the existing sprites? They could be used to smooth the transition between a very coarse sequence of vertical-to-horizontal sprites. That would be a lot of hard work and hard to optimise, though, and I can't imagine how you would manually decide on the depth value for each pixel either!
Just as a side note, Spleen: you don't have to use lighting in OpenGL - it can simply be turned off. There needn't be any effect enabled that changes the colours you tell it to draw, and if you want you can draw textures pixelated just like Doom's.
Also, purely for informational value: a GPU implements more of a 'long' processing pipeline than a 'wide but slow' parallel architecture. Imagine a row of factory workers on a production line, each passing their result to the next worker; the parallelism comes from each worker doing their own job independently. This architecture is easy to think about when you start with the geometry and end up with pixels. But if you start with the pixels and work backwards to the geometry - like a ray tracer - then a 'wide' parallel architecture makes more sense. It would be nice to have a GPU like that - far cleaner and more intuitive.
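The depth-map idea above can be sketched in miniature: give each sprite pixel a depth, then for a small rotation back-project each pixel into camera space, rotate it, and re-project. This is only a Python sketch under assumptions of mine (the per-pixel depths and the focal length are hypothetical inputs); the gaps and overlaps it leaves unresolved are exactly the hard part alluded to in the post.

```python
import math

def reproject_sprite(pixels, depths, angle, width, focal=64.0):
    """Reproject one sprite row using per-pixel depth (hypothetical data).

    pixels/depths: equal-length lists; angle: small yaw in radians.
    Returns {new_x: colour} with each pixel shifted by the parallax its
    depth implies. Gaps and overlaps are deliberately left unhandled.
    """
    out = {}
    cx = width / 2.0
    for x, (colour, z) in enumerate(zip(pixels, depths)):
        # Back-project screen x to camera space, rotate, re-project.
        wx = (x - cx) * z / focal
        rx = wx * math.cos(angle) - z * math.sin(angle)
        rz = wx * math.sin(angle) + z * math.cos(angle)
        if rz <= 0:
            continue  # rotated behind the viewer
        nx = int(round(cx + rx * focal / rz))
        if 0 <= nx < width:
            out[nx] = colour  # later pixels overwrite; no z-test here
    return out
```

With angle 0 every pixel maps back to its own column, which is a handy sanity check before trying real rotations.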
  6. One of the reasons Doom uses vertical rendering is that the path traced through a given texture is then also vertical and straight, provided there are no slopes in the walls. If you use scanlines, the path through the texture becomes curved. There are a few ways to approximate the curve, though. The simplest is to approximate it with a straight line, but that gives unnatural distortion. The next best way is to fit a quadratic curve to 3 points properly generated in the texture. These methods all add CPU time. Another reason Doom renders vertically is that it greatly simplifies clipping the vertical edges of the 2D sprites. You would lose most if not all of the optimizations possible with vertical rendering - but I'm not saying it's impossible - maybe it's viable on today's faster hardware?
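The quadratic approximation mentioned above can be sketched as follows: sample the true (perspective-correct) texture coordinate at three columns of the scanline, fit y = ax² + bx + c through them, then evaluate the quadratic cheaply for the columns in between. A minimal Python sketch (function names are mine):

```python
def quad_fit(x0, y0, x1, y1, x2, y2):
    """Return (a, b, c) of y = a*x^2 + b*x + c through three points."""
    # Lagrange interpolation collected into monomial coefficients.
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

def approx_texture_u(x_start, x_mid, x_end, u_start, u_mid, u_end):
    """Quadratic stand-in for the curved texture path along one scanline:
    three exact samples in, a cheap per-column evaluator out."""
    a, b, c = quad_fit(x_start, u_start, x_mid, u_mid, x_end, u_end)
    return lambda x: a * x * x + b * x + c
```

In a real renderer the evaluation would of course be done incrementally with forward differences rather than a multiply per pixel.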
  7. The reason I'm thinking up a series of seemingly nonsensical things to do - like using OpenGL when I already have a software renderer - is that I am talking specifically about add-on features for Doom95. The Doom95 engine cannot be altered directly. You would already have Doom95 installed, but then you could run a patch that corrects a load of problems with it (I've already fixed the mouse) and can also optionally give you extras - such as vertical rotation - while keeping the original engine unchanged with its retro feel. Doom95 on steroids, you might say. The whole point is how good I can make Doom95 look without actually reverse engineering the original software. I was already resigned to using OpenGL as the final rendering stage anyway. With hardware acceleration, a relatively simple sphere will have negligible impact on speed. I want to implement the best-quality vertical rotation I can without compromising Doom95's already pixelated output too much, and a sphere fits the bill perfectly. Once I have a sphere and the correct texture coordinates compiled on the graphics hardware, all that's left is to get Doom95's rendered output into the sphere texture as fast as possible. If I can make Doom95 render directly into the texture without any intermediate copies, that will be great. There's also the question of how the final output should be presented on screen. By narrowing the vertical field of view you'll be able to have more rotation - in fact you would need the rotation to be able to see everything. So I'm thinking that making it look like you're wearing a helmet might be good, for example. I am hoping I can get all this to work by hooking and overriding the 'swap buffers' function in DDRAW.dll. It will also need to cooperate with the mouse patch to get its control input, etc.
I'm sorry, Spleen, for hijacking your thread with my own ideas. It's just that this thread inspired me and I was already trying to think of modifications for Doom95. I know you wouldn't need a sphere if you could modify the original software.
  8. I'll answer based on what I think you mean, but if I'm wrong, please give some more details. Do you mean to render the output onto a series of large flat surfaces? Simple, but unfortunately that won't produce the desired effect. As you rotate the view, the image on each surface becomes compressed; imagine you have rotated so far that your viewpoint lies in the plane of a surface - then you won't see anything at all. A sphere produces the correct projection. Even a sphere rendered on a graphics card is an approximation built from small flat surfaces - triangles. Each triangle has the same distortion as above, but because they can be made small and numerous, a better approximation is achieved and the distortion minimized.
  9. I've thought of another way to do this! You don't need to create an intermediate spherical texture at all! See, I was thinking in terms of using a predefined sphere primitive in OpenGL or DirectX that requires the texture to be in spherical format. But this problem is really about how the texture coordinates are generated for the sphere in the first place. If I create a new sphere object that generates its own texture coordinates, taking the required spherical distortion into account, then the flat buffer can be used directly. That could all be executed on the GPU, and no large transform arrays would be needed! The quality of the transform can be adjusted by setting the number of slices and segments in the sphere, as normal, to create larger or smaller polygons. It pays to be critical because it makes people think harder - thanks, Maes!
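A sketch of what such a self-texturing sphere might generate, under one assumption of mine: since the flat buffer is a planar perspective projection, a vertex whose view direction is (yaw, pitch) should sample it at tan(yaw)/tan(hfov/2) horizontally and tan(pitch)/tan(vfov/2) vertically, remapped to 0..1. All names here are mine, not from any real sphere primitive:

```python
import math

def sphere_mesh_uvs(slices, segments, hfov, vfov):
    """Vertices for a partial sphere covering hfov x vfov (radians), each
    tagged with a (u, v) that indexes straight into the flat frame buffer.

    The rasterizer interpolates (u, v) linearly across each triangle, so
    more slices/segments means a closer fit to the true distortion -
    exactly the quality knob described above.
    """
    verts = []
    for i in range(segments + 1):
        pitch = -vfov / 2 + vfov * i / segments
        for j in range(slices + 1):
            yaw = -hfov / 2 + hfov * j / slices
            # Gnomonic (planar perspective) projection back into the buffer.
            u = 0.5 + 0.5 * math.tan(yaw) / math.tan(hfov / 2)
            v = 0.5 + 0.5 * math.tan(pitch) / math.tan(vfov / 2)
            verts.append((yaw, pitch, u, v))
    return verts
```

The centre vertex lands at (0.5, 0.5) and the frustum edges at 0 and 1, so the whole flat buffer is consumed with no intermediate copy.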
  10. Admittedly the table will be big. But I still suspect the transformation can be made efficient enough to achieve an acceptable framerate - admittedly not the fastest possible. This is the ONLY method I can think of that might give Doom95 vertical rotation. I'm doing this because it's a challenge, not because it's sensible! I cannot directly control which parts of the table are cached, but by using strong sequential locality when reading, the cache has the option of prefetching the table one or several cache lines at a time, discarding the previous ones because they are never referenced again. The same applies to writing the spherical texture: it will be sequential, and once each written cache line is finished with, it will not be touched again and can be flushed to main memory as the space is needed. The reading of the flat buffer will not be strictly sequential, but there will still be long stretches of data referenced next to each other. My point about array indices versus raw pointers is that they will 'slightly' simplify calculating the reflections in the mapping in my program - but point taken, this is probably a moot point, as the results still boil down to pointers. Correct, the mapping will need to be regenerated if the screen size changes, but I don't think that will happen very often. I am going to try a test case, and it will be interesting to see the results.
  11. Sorry for hijacking your thread, Spleen. Errm, sorry Maes - 'shit slow', 'irrelevant'? Maybe you should take a little extra time to understand somebody else's ideas before shooting your mouth off like that! I don't mind constructive criticism, but what you are saying sounds like you've got the wrong end of the stick. First of all, please understand that I'm not saying spherical rendering is a fast way of accomplishing anything at all. I don't suggest that you or anyone else use it for writing a new software renderer! I am merely saying that, based on what I have tinkered with before, I think it can be done sufficiently well on today's hardware as an add-on to a piece of software that was COMPILED for the machines of 15 years ago. The faster CPUs, memories, caches and buses provide a gap into which I can add a limited vertical rotation specifically for Doom95, by providing a wrapper library that overrides DDRAW.dll... Maybe I should have started a new thread for these ideas - anyway. I don't understand your objection to what I intend to do with DDRAW.dll, so I'll let you 'elaborate' on that a bit more before I respond.
Now a few points about my irrelevant lookup table. First of all, I specifically said this table will be read 'sequentially' once per frame. I said nothing about the table being cached entirely. It's read in a simple, cache-coherent, highly efficient sequence - not random access! It doesn't matter how big the table is; it will not flood the cache. Besides, there are size optimizations applicable to this table if I use array indices rather than pointers. The mapping is symmetrical horizontally and vertically, so the total size can be reduced by a factor of 4. Also, the destination pixel spacing can match the sphere texture, so those 'pointers' will not be needed; that reduces the size by a further factor of 2 at the expense of having to ignore a few pixels outside the mapping. There will be no need for two tables!
If the lookup table is made to match every pixel in the spherical buffer, there will be no gaps closer to the edges: pixels in the source image will be mapped to more than one pixel in the spherical image where there would otherwise be gaps. So I will not need any luck with spherical polygon-filling algorithms! Having a precomputed buffer is MOST relevant: it reduces a whole matrix multiplication per pixel to a simple lookup. I am not stepping on your toes here, Maes. My ideas are irrelevant to your software renderer in Java - but I wasn't talking about what you are doing. The GIS software sounds interesting, though. I should note that these are only ideas that need testing - and that is what I shall do.
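The factor-of-4 symmetry reduction could look like this in miniature: store only the top-left quadrant of the mapping and mirror indices on the fly. A hedged Python sketch (it assumes the mapping really is symmetric about both axes; the real table would hold raw offsets, not tuples):

```python
def build_quadrant_table(width, height, map_fn):
    """Precompute source coordinates for the top-left quadrant only.
    map_fn(x, y) -> source (sx, sy) for destination pixel (x, y);
    assumed symmetric about both axes (hypothetical mapping)."""
    return [[map_fn(x, y) for x in range(width // 2)]
            for y in range(height // 2)]

def lookup(table, width, height, x, y):
    """Resolve any destination pixel by mirroring into the stored quadrant."""
    mx = x if x < width // 2 else width - 1 - x
    my = y if y < height // 2 else height - 1 - y
    sx, sy = table[my][mx]
    # Mirror the source coordinate back out to the requested quadrant.
    if x >= width // 2:
        sx = width - 1 - sx
    if y >= height // 2:
        sy = height - 1 - sy
    return sx, sy
```

The per-pixel cost of the mirroring is a couple of compares and subtractions, which is the trade against storing the table four times over.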
  12. I may have stated what I had in mind a bit too casually - but then again it was just something I dreamed up in the two minutes spent reading the post. It sounds like you've been thinking about this for quite some time, Maes. It seems like an interesting, challenging problem - one I'd like to think about some more. At the moment, when I think of Doom I automatically think of Doom95 - compiled, with no source code. Granted, that may seem pointless, but it's what I was introduced to a few months ago and now I use it to exercise my brain in my spare time... So we're talking about different problems - at least as an implementation issue - but I still think it's relevant. Anyway, I realize there are hard problems here and I don't propose to use some 'naive' method. I have programmed the mapping of photographs onto spherical textures and rendered them on spheres in realtime before - this is similar, and at a much lower resolution. Also, it only has to deal with one particular case, which means all the slow stuff can be done beforehand. The transposition would happen in a new layer after Doom had done all of its rendering, by hooking and overriding the DirectDraw library. The only time the slow mapping using matrices is required is at startup, when the precomputed array is generated. The data in the frame buffer will already be laid out in scanlines - Doom has already done the column-to-row conversion. The per-frame mapping from the frame buffer to the sphere texture requires no slow arithmetic: the precomputed buffer is a list of pointer pairs into the framebuffer source and the sphere-texture destination, read through in sequence once per frame. The pairs can be sorted to minimize cache misses on the source and destination arrays - although a certain amount will have to be tolerated. The mapping will not reach all the way to the poles of the sphere, so the distortion will be less.
The destination texture simply receives the colour pointed to in the source buffer. The final step is to draw the textured sphere (OpenGL or DirectX...). I think using hardware acceleration in the final step shouldn't matter from a purist point of view, since this is already quite a departure...
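The per-frame step described here - a precomputed list of source/destination pairs, sorted for locality and read straight through once per frame - might look like this in miniature (a Python sketch with array indices standing in for the raw pointer pairs of the real C implementation):

```python
def build_pair_list(mapping):
    """mapping[dst_index] = src_index, precomputed once at startup.
    Sorting by source index keeps reads through the flat buffer close
    to sequential, minimizing cache misses as described above."""
    pairs = [(src, dst) for dst, src in enumerate(mapping)]
    pairs.sort()
    return pairs

def remap_frame(flat, pairs, sphere):
    """Per-frame work: one sequential pass over the pair list,
    a plain copy per pixel, no arithmetic at all."""
    for src, dst in pairs:
        sphere[dst] = flat[src]
```

Startup pays for the sort once; every frame afterwards is just the copy loop.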
  13. If you don't care about efficiency - because any computer nowadays is going to be fast enough - you could in principle achieve real vertical rotation without converting the engine to scanline rendering, probably to about ±45 degrees. Use a frame buffer larger than will fit on the screen and render to it as normal. Next, map the result onto an equirectangular texture using a buffer of precomputed spherical coordinates. This result can be rendered onto the inside of a sphere, with the viewpoint positioned at the sphere's origin. Rotating the sphere 'up and down' gives you free mouselook, though only over a limited vertical angle. It could be implemented as a wrapper library with no changes to the original engine itself. Maybe a bit overkill - but a different way to think about it...
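As a sketch of the precomputed-coordinates step under stated assumptions (a gnomonic/planar perspective projection for the flat render; every name is mine): build, once at startup, a table giving for each equirectangular-texture pixel the index of the flat-buffer pixel it samples.

```python
import math

def build_equirect_map(tex_w, tex_h, buf_w, buf_h, hfov, vfov):
    """For each pixel of an equirectangular texture covering hfov x vfov
    (radians), precompute the linear index of the flat frame-buffer pixel
    it samples. Done once at startup; per frame it is lookups only."""
    table = []
    for ty in range(tex_h):
        pitch = (ty / (tex_h - 1) - 0.5) * vfov
        for tx in range(tex_w):
            yaw = (tx / (tex_w - 1) - 0.5) * hfov
            # Project the direction back into the planar render.
            u = 0.5 + 0.5 * math.tan(yaw) / math.tan(hfov / 2)
            v = 0.5 + 0.5 * math.tan(pitch) / math.tan(vfov / 2)
            bx = min(buf_w - 1, int(u * (buf_w - 1) + 0.5))
            by = min(buf_h - 1, int(v * (buf_h - 1) + 0.5))
            table.append(by * buf_w + bx)
    return table
```

Nearest-neighbour rounding keeps the retro pixelated look; since the table covers every destination pixel, the edge-gap problem from the later posts never arises.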
  14. beetlejoose

    Doom95 mouse patch for XP available - now

    Fair comment. It's just a convenience that was trivial for me to put in. If you are already using the mouse patch, you don't need to figure anything else out. I know a few people who don't even know what shortcuts are, let alone how to modify them with parameters, etc. If you don't do anything with it, it still 'appears to fix' the invisibility quirk - albeit with a bit of a kludge... I didn't expect anyone at your level to find this truly ground-breaking!
  15. beetlejoose

    Doom95 mouse patch for XP available - now

    Well, what I have is a text box that has -emulate in it by default. Any parameter can be put in there. I've included a simple help form that also has a hyperlink to the Doom wiki page about Doom's parameters. I think that should be OK: it can be ignored, and the invisibility fix works without the user having to learn about parameters, yet it's flexible enough that someone in the know can put in whatever they want.