CodeImp

Reject map building - How is it done?


My question is basically: what is the theory (or what are the best working theories) for determining sector-to-sector possible visibility, i.e. reject map building?

I am only really interested in the basics, where solid lines are the only things considered to block sight. I am not interested in any special actions which could be taken into account for extra optimization.

I have tried to look at the doombsp source code to see how Id did it, but that didn't have any useful comments (surprise!) and the RMB source code was overly complicated with extra options and extra optimizations I don't care about...

Can anyone give me a rundown of how this is done?
Thanks


I guess most of the info you might find is flat-out wrong. I don't think that aside from RMB there's any other reject builder that did it properly, and some sources even stated that the best course of action would be to do a distance check (1024 map units!) and flag everything further away as 'not visible'. Needless to say, it's better not to bother building a reject at all than to resort to such crippling nonsense.

So, if you can't make sense out of RMB's source I don't think there's much to be done. Where did you get that anyway? I can't remember that it was ever released.

Whether it's even useful to build reject is another matter. All the tests I ever did showed that you need pathological test cases to get any meaningful performance improvement on any system not older than 10 years - especially on ports that are locked to 35 fps.


I think Carmack's reject builder worked like this:

The outer loop checks every pair of sectors; for each pair, it checks every pair of linedefs (one from each sector) with a visibility check, and only marks the sector pair as blocked if all linedef pairs are blocked.

To check a linedef pair: look for a chain of one-sided linedefs which crosses the two imaginary lines which form the "sides" of the linedef pair (making an imaginary 4-sided shape, with the two linedefs at opposite sides). When a chain of one-sided linedefs crosses both of these imaginary sides, visibility must be blocked.
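
A minimal sketch of that structure (the helper names here are my own, not doombsp's; assume LinePairBlocked() implements the quad/chain test described above):

/* Hypothetical helpers, NOT doombsp's real interface: */
int  NumSectorLines (int sector);               /* linedefs bordering a sector   */
int  SectorLine (int sector, int n);            /* n-th such linedef             */
int  LinePairBlocked (int line1, int line2);    /* the quad/chain test above     */
void MarkReject (int s1, int s2);               /* set the "not visible" bit     */

/* Sketch only: mark a sector pair in the reject table when every
   linedef pair between the two sectors is blocked. */
void BuildReject (int numsectors)
{
    int s1, s2, i, j;

    for (s1 = 0; s1 < numsectors; s1++)
    {
        for (s2 = s1 + 1; s2 < numsectors; s2++)
        {
            int blocked = 1;    /* assume blocked until a clear pair is found */

            for (i = 0; blocked && i < NumSectorLines (s1); i++)
                for (j = 0; blocked && j < NumSectorLines (s2); j++)
                    if (!LinePairBlocked (SectorLine (s1, i), SectorLine (s2, j)))
                        blocked = 0;

            if (blocked)
            {
                MarkReject (s1, s2);    /* one bit per ordered pair, so set both */
                MarkReject (s2, s1);
            }
        }
    }
}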

Graf Zahl said:

Where did you get that anyway? I can't remember that it was ever released.

heh you're right, I didn't actually see the RMB source code. I saw RMB and noticed all its options and tweaks and decided that it wouldn't help me understand the basics. Hence I moved on to something more basic: the original doombsp code.

I know that a reject doesn't gain much performance in Doom, but this is not for Doom. I intend to use it for a game I am working on. It does use UDMF for maps (so the maps can be made with DB), but the game is technically a bit more complicated. It has slopes and 3D floors, and even 3D floors that can freely move around (not restricted to their parent sector). On top of that, I need a lot of hitscan traces across the map, and currently the game runs at a fixed 100 FPS (I may drop this to 50, not sure yet if I really need 100). Long traces across the map are sometimes necessary, so simply saying anything beyond a distance of 1024 won't be visible is not a good idea IMO. A reject technique such as Doom uses would help me here.

andrewj: That does help me a bit in understanding what doombsp does, thanks. I am still wondering how they define a "chain" though... How reliable is this technique? I am asking because I read that the original Doom maps had some reject "bugs" in them.

CodeImp said:

I am still wondering how they define a "chain" though...

Maybe a number of lines whose combined length is just over the length of the longer line of the pair?
Like, if you're checking two lines that are parallel to each other and are 128 units long, a chain of lines parallel to this pair has to be at least 128 units long, too, to block the view.

[edit] No, that won't work. Probably more like a chain of lines that crosses the outer lines of the 4-sided shape.

Graf Zahl said:

Whether it's even useful to build reject is another matter. All the tests I ever did showed that you need pathological test cases to get any meaningful performance improvement on any system not older than 10 years - especially on ports that are locked to 35 fps.

Not even in huge maps with high monster population and rich detail?


Huge maps with rich detail mean your REJECT lump will weigh a hefty gigabyte. :p


If I'm not mistaken, you'd need about 90000 sectors to get a reject table of 1 GB. Even something like Vela Pax map01, which most people would probably call super massive (with over 9000 sectors), only has a reject table of less than 10 MB.
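
For reference, the table is just one bit per ordered sector pair, so the size works out like this (a back-of-the-envelope sketch):

#include <stddef.h>

/* One bit per ordered sector pair, rounded up to whole bytes.
   90,000 sectors -> about 1.01 GB; 9,000 sectors -> about 10 MB. */
size_t RejectSize (size_t numsectors)
{
    return (numsectors * numsectors + 7) / 8;
}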


If you are sure this optimization is worth it, then rather than mess about with REJECT, I suggest looking into a much more robust PVS algorithm (e.g., http://www.vavoom-engine.com/glvis.php).

Once you have a complete PVS dataset for a BSP (I'm assuming you're using a BSP) you can easily query the final visibility state for a Sector (in the DOOM sense) by checking each of the leafs and combining the results.
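
A sketch of that combination step, assuming a glVIS-style per-leaf PVS stored as a bit array (the types and helper here are illustrative, not glVIS's actual API):

typedef struct
{
    int            sector;   /* sector this leaf (subsector) belongs to   */
    unsigned char *pvs;      /* one bit per leaf: potentially visible set */
} leaf_t;

static int LeafInPVS (const leaf_t *from, int to_leaf)
{
    return (from->pvs[to_leaf >> 3] >> (to_leaf & 7)) & 1;
}

/* Sector A can "see" sector B if any leaf of A has any leaf of B in its PVS. */
int SectorCanSeeSector (const leaf_t *leafs, int numleafs, int secA, int secB)
{
    int i, j;

    for (i = 0; i < numleafs; i++)
    {
        if (leafs[i].sector != secA)
            continue;
        for (j = 0; j < numleafs; j++)
            if (leafs[j].sector == secB && LeafInPVS (&leafs[i], j))
                return 1;
    }
    return 0;   /* no leaf of A sees any leaf of B: safe to set the reject bit */
}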

printz said:

Not even in huge maps with high monster population and rich detail?



REJECT only makes sense if the map is segmented in such a way that it's easy to exclude large portions for each position.

A lot also depends on the visibility checking algorithm.
There are two different ones: Doom 1.2 used one based on the blockmap, which is significantly faster than Doom 1.9's BSP-based one. Unfortunately the old algorithm, despite being better, was dumped because of a trivial bug which id was apparently unable to find and fix.

So if you are concerned about performance, the first step should always be to use the blockmap algorithm - in ZDoom's fixed version, of course. That alone can speed up sight checking by more than 50% on large maps.

Rich detail is pretty much irrelevant in sight checks with the blockmap algorithm. With the BSP algorithm it can be a major factor though.

Graf Zahl said:

A lot also depends on the visibility checking algorithm.
There are two different ones: Doom 1.2 used one based on the blockmap, which is significantly faster than Doom 1.9's BSP-based one.

That's quite interesting. Did you find the old (pseudo)code somewhere, or did you deduce it from reverse-engineering?

Or rather -- shall I assume Heretic and Hexen still have it?


Graf Zahl said:
I don't think that aside from RMB there's any other reject builder that did it properly

Apart from RMB, I believe ZenNode can build useful reject maps. I'm sure you know of some edge case its algorithm mishandles that makes it worthless, but it has worked for me in the past.

Gez said:
Huge maps with rich detail mean your REJECT lump will weigh a hefty gigabyte. :p

I think you meant megabyte. Realistically, a map with 3000 sectors has a 1MB reject. I would rather have a 4MB map bloated by a 1MB reject than a 4MB map with 20MB of mp3s or oggs or high-res textures that only work in OpenGL mode, or whatever else. ;)

Graf Zahl said:
REJECT only makes sense if the map is segmented in such a way that it's easy to exclude large portions for each position.

Indeed, but most maps over a certain size are sufficiently disjoint. Deus Vult for example; even in Vela Pax there are few places where monsters could see more than a quarter of the map at once.

Highly-detailed maps with large crowds in monster teleporter holding areas are the best candidates for having a reject built. Monsters in holding areas know they can't see the player until after they've teleported out. Slaughtermap designers take note :)

printz said:

That's quite interesting. Did you find the old (pseudo)code somewhere, or did you deduce it from reverse-engineering?

Or rather -- shall I assume Heretic and Hexen still have it?


Randy's work, long ago:

rh-log said:

September 11, 2001
- Fixed FDecalLib::GetDecalByNum() and ::GetDecalByName() so that they do not crash if the decal is not found.
- Tried putting back in the BSP-based sight checking code that was in the Linux Doom 1.10 source, and the result was quite surprising. Using the BSP to check sight was actually slower than using the blockmap, sometimes significantly so--Vrack 2 spent between 4 and 6 times as much time checking sight using the BSP as it did without. In the best case, the BSP routines were about as fast as the blockmap ones, so I took the BSP routines back out, because they offer no benefit over the Hexen code I had been using for some time now.
- Fixed: Maps with starts for players 5-8 would trample over memory when saving the starts for those players.
- Added blood splats for instant hit weapons.


Also:

rh-log said:

November 14, 2003
<snip long list of other stuff>
- Brought back the blockmap-based P_CheckSight and fixed the iterator in it and its equivalent in p_maputl.cpp so that it properly handles the case where a trace crosses the corner of a block.


I once did some statistical analysis on the IWADs' REJECT tables, to see whether it would be more advantageous to have them in a "sparse" format, which would decrease the memory footprint or even improve speed for large maps with many mutually non-visible sectors.

Turns out that most id maps have a "weight factor" of 0.68 or above (the weight factor is defined as the average ratio of sectors visible from one particular sector versus the total number of sectors). That means that, on average, any random sector taken from an IWAD map has a direct visual with 68% of the other sectors (not necessarily mutual).
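
For what it's worth, that weight factor can be read straight off an existing table with something like this (my own throwaway sketch; a cleared bit means "possibly visible"):

/* Fraction of ordered sector pairs whose reject bit is clear (i.e. visible). */
double WeightFactor (const unsigned char *reject, int numsectors)
{
    long visible = 0;
    int  s1, s2;

    for (s1 = 0; s1 < numsectors; s1++)
        for (s2 = 0; s2 < numsectors; s2++)
        {
            int pnum = s1 * numsectors + s2;
            if (!(reject[pnum >> 3] & (1 << (pnum & 7))))
                visible++;
        }
    return (double)visible / ((double)numsectors * numsectors);
}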

It would pay to have a "sparse" reject table if this ratio were lower, i.e. if there were indeed many mutually NOT visible sectors.

As for precalculating (mutual) visibility, that's another rathole. I assume most tools first check for trivial non-visibility, such as no direct LOS between the CENTERS of sectors, and then maybe try some different positions too. But the format of the REJECT table allows only for a very black & white view of the world: e.g. imagine a large sector A which cannot "see" a smaller, distant sector B hidden behind a wall from certain positions, but if you move around the larger sector you eventually find several spots with a visual.

So, how do you best describe the situation? "A can't see B in any case, EVER"? "A can see B all of the time"? "A can see B some of the time"? Unless you can be 100% certain that A can't see B, you should mark those two sectors as "potentially visible" and incur an in-game LOS check.

Remember, the purpose of the REJECT table is to indicate when a sector A is definitively NOT visible from another sector B, no matter how much you fiddle around with different positions and heights. If there is even a slight chance of a single grid point from which visibility is possible, then you must incur a LOS check: it would be a mistake to flag two sectors as "not mutually visible" if there is even a single point on their surfaces where that doesn't hold true.
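
For reference, this is roughly how the engine consults the table (modeled on the reject test in Doom's P_CheckSight): a set bit means "definitely not visible", so the expensive LOS check is skipped.

int RejectSaysBlocked (const unsigned char *rejectmatrix, int numsectors,
                       int s1, int s2)
{
    int pnum = s1 * numsectors + s2;          /* one bit per ordered pair */
    return (rejectmatrix[pnum >> 3] & (1 << (pnum & 7))) != 0;
}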

Therefore, a good REJECT builder should only flag two sectors as absolutely non-visible (and therefore not worth a LOS check) if NONE of the enclosed grid points of the former can see ANY of the enclosed grid points of the latter, and of course that takes a LOT of time to compute accurately.

I had even considered a "dynamic" REJECT table (one that is populated dynamically at runtime as LOS checks are performed, and then used as a cache), but it would have to be probability-based and subject to a lot of uncertainty before being considered reliable enough to skip LOS checks.

Maes said:

Therefore, a good REJECT builder should only flag two sectors as absolutely non-visible (and therefore not worth a LOS check) if NONE of the enclosed grid points of the former can see ANY of the enclosed grid points of the latter, and of course that takes a LOT of time to compute accurately.

Out of boredom I wrote a DB2 plugin to compute such a REJECT table. It worked pretty well when it wasn't perfectly accurate. Now I have cranked up the accuracy and I'm actually not sure if it's locked up :D It's been processing E1M1 for an hour or so now.


Thanks gentlemen, but I have another solution in mind, so a reject table isn't really necessary.

Still, it is interesting food for thought how to build this table accurately (somehow I love to spend a lot of time thinking about these sorts of problems to see if I can find a clever solution...). I don't think what Maes described (testing more points at a greater resolution for better accuracy) is the way to approach this problem, because indeed it would take a lot of time...

Id's approach in doombsp sounds interesting. I'd still like to hear more from andrewj about what he figured out (how are those chains defined?).


RMB had a -perfect parameter or some such, which probably took a brute-force, painstaking approach at the expense of a ridiculous amount of time. There were of course other methods based on faster, but not guaranteed-perfect, heuristics and shortcuts.

Of course, since visibility in a 2D plane enriched with pseudo-3D constraints is primarily a geometrical and topological problem, there are many possible more or less clever heuristic solutions, kinda like other NP-complete problems.

The brute-force results should only be used as a reference for other methods, as it's useful to know that a theoretically "perfect" solution is at least possible, and to have a goal to work towards with heuristics. E.g. it may very well be that taking 16 specially distributed points per sector instead of ALL of them is enough to build a near-perfect reject (with, say, 99% of the accuracy of a "perfect" one, but taking only a fraction of the time to build).

To be honest, there aren't that many REJECT builders out there: AFAIK there's only RMB (whose source code, in Pascal, was never released and is probably lost forever, unless a decompiler could produce a somewhat close C equivalent and some poor soul tries to reverse engineer it), and ZenNode, which I once started analyzing but never got very far with due to other concerns. So if you or someone else has a fresh/novel approach to what is practically a classical (and NP-complete) problem...

Gez said:

An imperfect reject builder was good enough for id software.


If the goal is making an "id-compatible" REJECT builder, or "a REJECT builder the way id did it", then yeah, imperfection is part of the game. The catch is that it has to be just the right kind of imperfection ;-)


With the REJECT I have to say there is no such thing as "the right kind of imperfection". This structure is intended as an optimization which short-circuits a logic decision that would otherwise be costly to make dynamically (I'm considering the general problem, not its application to sector<>sector LOS in DOOM, which, given a small enough dataset, might well be within a tolerable overhead range). A REJECT which contains invalid data is of no use to anyone (even worse than a so-called "empty" REJECT - filled with "yes, LOS is possible" - which is nothing but bloat).

Whether or not the REJECT is also used to apply secondary domain optimizations (e.g., considering plane heights in 3D) or map authoring tricks (deny visibility) is largely unrelated to the integrity of the dataset.


A way of building near-perfect REJECT tables would be to use a specially modified source port that gradually builds them via Monte Carlo approximation: start with a totally blank REJECT table and perform full LOS checks in every case, only registering sector pairs that pass a LOS check. By exclusion, and thanks to the law of large numbers, averaged over several hundred different games certain sector pairs will never get a LOS confirmation, and might thus be assumed to be "rejectable" with a high degree of confidence.

In other words, the philosophy is: "at first, we do LOS checks for everything and keep track of those that succeed, saving the partial REJECT data between sessions and reusing it. Over the course of many games, we reuse previous data and correct it by adding new LOS confirmations. Eventually, no new LOS confirmations will be generated, at which point the REJECT table will be near-perfect".

This means that once two sectors have been LOS-confirmed at least once, even if just once in a million games, they can't be "un-LOS-ed". However, some sector pairs will never pass LOS checks in a reasonable number of games, and they are the ones we're actually after.

Of course, some mechanism to retain partial REJECT data between games and/or crowdsource it would be required. If enough people had a go at it, this approach could approach the quality of even a "perfect" RMB build.
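
The bookkeeping for that would be tiny; a sketch (all names invented, not any port's actual code): start with every bit set to "never seen", clear a bit whenever an in-game LOS check succeeds, and persist the array between sessions. Whatever bits are still set after enough games are the candidate reject entries.

#include <stdlib.h>
#include <string.h>

typedef struct
{
    int            numsectors;
    unsigned char *never_seen;   /* bit set = no LOS confirmed yet */
} losstats_t;

void InitStats (losstats_t *st, int numsectors)
{
    size_t bytes = ((size_t)numsectors * numsectors + 7) / 8;

    st->numsectors = numsectors;
    st->never_seen = malloc (bytes);
    memset (st->never_seen, 0xFF, bytes);    /* assume "never seen" at first */
}

/* Call whenever an actual in-game sight check from s1 to s2 succeeds. */
void RecordLOSSuccess (losstats_t *st, int s1, int s2)
{
    int pnum = s1 * st->numsectors + s2;
    st->never_seen[pnum >> 3] &= (unsigned char)~(1 << (pnum & 7));
}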


I think you are barking up the wrong tree there, Maes. There is no need to approach the problem by firing rays when it can be decomposed into a general geometric problem and thus solved by making use of the inherent locality within the dataset's Cartesian coordinate space. This isn't about solving a general connectivity problem. It is this locality which simplifies the problem (in an NP sense).


Such a mechanism may appear overkill, but if you think about it, it's not entirely different from what actual REJECT builders already need to do to a certain degree: they all need to recreate the Doom engine's map rules as closely as possible and investigate a subset of possible visibility cases, no matter how coarsely or finely. Sounds like reinventing the wheel, no?

And surely the Doom engine itself can work out visibility scenarios much better than any external tool that was developed using only very general guidelines, without knowledge of the engine's quirks (RMB, for instance, predated the source ports, and so could only have been written against the logical rules, which fail to account for all the glitches and hacks present during actual gameplay).


My definition of the REJECT table is as a solution to the "simple" 2D geometric problem of whether two polygons in a 2D Cartesian coordinate space can see each other from any point within them. All other concerns, such as quirks in id's original algorithm or mod authoring applications (tricks), are entirely separate from this primary purpose.

This underlying geometric problem can be solved unequivocally without the need for ray casting (and its inherent problems).

As this is a classic geometry problem, the Doom engine itself is no better placed than an external tool when it comes to solving it.


Sure, if you deem that a solution based on the simplifications you just mentioned is functional enough for your purposes, then you have no reason to go any further - at least as long as you don't consider it totally equivalent to an engine-aware REJECT builder. That would be quite a strong assumption.


I agree with your diagnosis given your own criteria, i.e., that there is a benefit to an engine-aware REJECT builder.

However, that is going above and beyond the actual definition of this data construct as used by every Doom port.


The REJECT construct itself is pretty clear and one-sided in its function and interpretation as you said: it simply tells whether a direct LOS from sector A to sector B is possible or not.

However, just like other constructs such as MP3 or ZIP files, while there is a single way to decode/interpret them correctly once they are built, actually encoding/creating them is an "open-ended" procedure with regard to the algorithms used. Furthermore, it can also be "lossy" to varying degrees, as there's no standard set in stone for how they should be built or what they should look like. (Well, in the case of ZIP files there's the requirement that decoders be able to read any compliant ZIP file, no matter how it was generated, and recreate bit-exact copies of the uncompressed data, while for REJECT there's the -perfect option in RMB, or the IWAD results, setting up an arbitrary standard by which to calibrate methods.)


@CodeImp, you can find Carmack's code in the idbsp10.zip file on idgames, in the file "savecnct.c"

The first thing I notice is that it doesn't check sectors completely properly: it essentially converts each sector into an axis-aligned bounding box and checks the sides of those -- which is obviously a lot faster, but less accurate too.

The second thing is that it makes chains of one-sided linedefs by just finding the next unused linedef which touches the end point of the last one added to the chain. The code would not handle forks properly (it would end up with multiple chains), but I guess forks of one-sided linedefs are rare and so he did not worry about that case.
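
If I read that right, the chain building amounts to something like this greedy walk (my own sketch with invented names, not the savecnct.c code; it assumes consistently oriented lines, i.e. it only matches a line's start vertex against the chain's end vertex):

typedef struct { int v1, v2, used; } solidline_t;

/* Grow one chain starting from lines[start]; returns its length. */
int BuildChain (solidline_t *lines, int numlines, int start, int *chain, int maxlen)
{
    int len = 0;
    int end = lines[start].v2;

    lines[start].used = 1;
    chain[len++] = start;

    for (;;)
    {
        int i, found = -1;

        for (i = 0; i < numlines; i++)
            if (!lines[i].used && lines[i].v1 == end)
            {
                found = i;
                break;
            }
        if (found < 0 || len == maxlen)
            break;                      /* ran out of connecting lines */

        lines[found].used = 1;
        chain[len++] = found;
        end = lines[found].v2;
    }
    return len;
}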

Also:

boolean DoesChainBlock (bchain_t *chain)
{
/*

if a solid line can be walked from one side to the other without going out
an end, the path is blocked

*/

.....
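
For what it's worth, here is one plausible reading of that comment as code (my own sketch, NOT the actual savecnct.c implementation): a chain blocks the linedef pair if, walked from end to end, it crosses both of the imaginary "side" segments of the corridor.

typedef struct { double x, y; } pt_t;

/* 1 if segments a-b and c-d properly cross each other. */
static int SegsCross (pt_t a, pt_t b, pt_t c, pt_t d)
{
    double d1 = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    double d2 = (b.x - a.x) * (d.y - a.y) - (b.y - a.y) * (d.x - a.x);
    double d3 = (d.x - c.x) * (a.y - c.y) - (d.y - c.y) * (a.x - c.x);
    double d4 = (d.x - c.x) * (b.y - c.y) - (d.y - c.y) * (b.x - c.x);

    return d1 * d2 < 0.0 && d3 * d4 < 0.0;
}

/* chain[0..numpoints-1] are the vertices along the solid-line chain;
   sideA0-sideA1 and sideB0-sideB1 are the two imaginary sides of the corridor. */
int ChainBlocks (const pt_t *chain, int numpoints,
                 pt_t sideA0, pt_t sideA1, pt_t sideB0, pt_t sideB1)
{
    int i, crossesA = 0, crossesB = 0;

    for (i = 0; i < numpoints - 1; i++)
    {
        if (SegsCross (chain[i], chain[i + 1], sideA0, sideA1)) crossesA = 1;
        if (SegsCross (chain[i], chain[i + 1], sideB0, sideB1)) crossesB = 1;
    }
    return crossesA && crossesB;   /* spans the corridor from one side to the other */
}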

