fraggle

Vanilla mappers: How much of a problem is the BLOCKMAP limit?

For those of you who map for Vanilla Doom, how often do you run into the BLOCKMAP limit? I did a search of the forums and saw conflicting opinions.

I had a crazy idea last night to write a BLOCKMAP generator that would support large maps within the Vanilla format, but I don't know if it's worth the effort to implement.


It could always come in handy in case somebody decides to do a large vanilla map; it's easy to break the BLOCKMAP limit if the map is wide enough.


As I understand it, the BLOCKMAP lump stores linedef references. Doesn't it go overboard when the map is too detailed?


There are several maps in Back to Saturn X episodes 2 and 3 where squeezing everything into the blockmap limit has caused major headaches. I believe ZenNode already does a form of blockmap compression, which we're using, but if there's anything further that can be done we'd be very grateful for it!


One problem with implementing such a generator is that there are at least two conflicting algorithms for generating the BLOCKMAP. The algorithm used internally by Boom is not compatible with vanilla (different data is generated), for example.

Presently Doomsday always generates a new BLOCKMAP using a vanilla-compatible algorithm and ignores this data if present in the map. (This will change at some point, however, for the sake of supporting map hacks.)

fraggle said:

I had a crazy idea last night to write a BLOCKMAP generator that would support large maps within the Vanilla format, but I don't know if it's worth the effort to implement.


You mean like the extended (512 x 512) blockmap support already in prBoom+ and Mocha? That one actually fixes inconsistencies in how the vanilla/Boom engine treated blockmaps larger than 256 blocks, rather than supporting arbitrarily large blockmaps (which are constrained by the maximum allowed coordinate dimensions anyway). While prBoom+ included a BLOCKMAP builder, the vanilla code prevented it from taking full advantage of 512x512 blockmaps until the fix was introduced (check e.g. EUROPE.WAD).

This is actually a trivial hack to implement once you understand how it works (there are relevant threads about it), and it's easy to turn on/off at will, so in theory it won't (well, SHOULDN'T) hurt compatibility. Gory details here.

However, you DO need to include a dynamic blockmap generator in order to take advantage of it because, as of now, no one has defined an extended ON-DISK format for blockmaps, and so any blockmaps present in PWADs will have to abide by the 128 KB/64K chains (correct?) limit, which is also dictated by the 16-bit length of chain pointers. Most large blockmaps are actually simply "cut short" on disk and left incomplete. Worse yet, they suffer from pointer aliasing... *shudder*

Of course, if you want to discuss a standard... IMO, the only way to make it "backwards compatible" is to somehow append extended blockmap data beyond the point where vanilla Doom will cease reading the normal blockmap, maybe in an entirely new lump.

So, in order to store a 512x512 blockmap on disk and not run into the same hare-brained limitations of vanilla, we need a format which uses 32-bit pointers for chains, and allows -almost- arbitrary blockmap lump sizes.


I wholeheartedly second anything you can do to make the BLOCKMAP limit in vanilla less of a restriction! One of the maps in BTSX E2 where the BLOCKMAP was a big problem was mine :P. Granted, the map was pretty big and I did have issues with it during original production, but it turned into more of a problem when I revisited the map to make updates and edits.

esselfortium said:

I believe ZenNode already does a form of blockmap compression,

How exactly is vanilla BLOCKMAP compressed? Do blocks with equal linedef references (such as all empty blocks) occupy one common index?


To be honest I can't really see much benefit to introducing a new format for BLOCKMAP data on disk because an entirely new blockmap can be generated dynamically for even the largest of maps in mere seconds (using either algorithm).

However, I do see some benefit to introducing a new standard dictating how the Blockmap should be dynamically generated.

Doomsday for example has none of the vanilla BLOCKMAP or physical map size limits -- if a map author wants to make a gigantic map that breaches these limits they are free to do so because this data is generated dynamically.

Obviously a new format can't retrospectively extend what is possible in vanilla, so what's the point?

Maes said:

You mean like the extended (512 x 512) blockmap support already in prBoom+ and Mocha?

Nope. I mean a smarter BLOCKMAP builder that could use some tricks to accommodate larger (i.e. lots of linedefs) maps. But it's just an idea. I don't know yet if it could actually work in practice.

fraggle said:

Nope. I mean a smarter BLOCKMAP builder that could use some tricks to accommodate larger (i.e. lots of linedefs) maps. But it's just an idea. I don't know yet if it could actually work in practice.


Maybe some sort of "blockchain compression"? Because the on-disk format certainly sucks, and is actually LESS capable than what even vanilla Doom can handle: a 128 KB on-disk size limit, of which the blockchain pointers occupy the first positions, so the actual blockchains have to make do with what's left (!) of it.

The only way to save some space, other than blockchain compression (having multiple blockchain pointers point to one and the same blockchain), is to build the blockchains themselves so as to economize space: not including certain linedefs in them, smartly avoiding a blockchain for certain blocks of the map that the player is unlikely to enter or shoot through, etc. But that'd require a lot of manual finetuning in most cases, and one heck of a visual blockmap editing tool.

But in the end, the only way that would produce results 100% identical to the IWADs would be to use the exact same blockmap tool used by id Software themselves, but applied in real time to the level data, and WITHOUT the limitations of the on-disk format. In other words: just rebuild the blockmap with id's blockmap tool (if possible).

Maes said:

But in the end, the only way that would produce results 100% identical to the IWADs would be to use the exact same blockmap tool used by id Software themselves, but applied in real time to the level data, and WITHOUT the limitations of the on-disk format. In other words: just rebuild the blockmap with id's blockmap tool (if possible).

This is effectively what Doomsday does.


Do you want BTSX E2? Because blockmap limits are a big reason that one of, like, the only two things left to do in it isn't done.


Seems this got discussed before.

DoomLegacy handles blockmaps that exceed the max size by synthesizing a 32-bit version of the pointer. Without compression, the blockmap pointers are consecutive increasing numbers, so it is easy to recover the upper bits by watching the value roll over to lower numbers.

Blockmaps that are compressed would not be entirely consecutive increasing numbers, but they are much less likely to overflow the pointer size.
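The rollover trick can be sketched in a few lines. This is a hypothetical reader, not DoomLegacy's actual code, and it assumes the stored offsets are scanned in their storage order:

```python
def widen_offsets(raw_offsets):
    """Synthesize 32-bit blocklist offsets from truncated 16-bit values.

    In an uncompressed blockmap the offsets increase monotonically, so a
    stored value smaller than its predecessor means the 16-bit field
    wrapped around; each wrap contributes another 0x10000 to the upper bits.
    """
    upper = 0      # accumulated upper bits, in multiples of 0x10000
    prev = 0
    widened = []
    for off in raw_offsets:
        if off < prev:          # the 16-bit counter rolled over
            upper += 0x10000
        widened.append(upper + off)
        prev = off
    return widened

# Offsets that wrapped past 65535 are reconstructed correctly:
print(widen_offsets([60000, 65000, 1000, 5000]))
# -> [60000, 65000, 66536, 70536]
```

As the post notes, compressed blockmaps break the strict monotonicity this relies on, though they rarely grow large enough to wrap in the first place.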

wesleyjohnson said:

Blockmaps that are compressed would not be entirely consecutive increasing numbers, but they are much less likely to overflow the pointer size.


And as with any and all lossless compression schemes, there's no "maximum guaranteed final size" or "minimum guaranteed compression percentage".

In some cases it might be possible to fit a working blockmap into 128 KB even if automated builders cannot, if the map's layout allows for it and some manual finetuning takes place. But that would require an extremely specialized editor, one which lets you edit e.g. individual blockchains, add/remove linedefs to cut back a few bytes, and shows the resulting blockmap's length in bytes in real time. When you get it under the "safe" 128 KB limit, you may stop.


To give a rough idea of some of my ideas:

  • Compressed blockmaps have multiple blocks that point to the same block list. I believe blockmap compression so far has focused on blocks with identical block lists, but if a line appears in a block's list but is not actually in that block, I expect it probably doesn't do any harm. Theoretically, then, two blocks with some lines in common could be "merged together" to save some space. Essentially you're trading game performance for smaller space.
  • Along the same lines, block lists have a footer value (-1) that indicates their ending, so there's a per-block list overhead. It might therefore be advantageous to merge small block lists into larger ones to reduce the number of block lists.
  • An alternative to merging blocks would be to generate the blockmap on a coarser grid (eg. 256x256) and then assign the four adjacent blocks the same block list. But I suspect that merging would be more effective.
  • Every time a line crosses a block boundary, it ends up appearing in at least two block lists. But the blockmap format allows the origin point for the grid to be specified. Careful choice of the origin could reduce the number of "splits" like this and therefore reduce the lump size as a result.
  • I'm pretty sure not all lines need to be in a blockmap. For example, if you have a two-sided linedef that joins two sectors with the same floor and ceiling heights, that can (probably?) be omitted. I suspect there are a whole load of potential gotchas with this idea - like raising stairs, for example. If we're really desperate, very small linedefs or tiny steps could potentially also be omitted.
  • By design, the format is limited to 128KB (pointers are 16-bit into an array of words). But the pointer points to the start of a block list and the list itself can overrun past that 128KB size. As a simple example, you could make a dumb blockmap where all blocks point to a single block list containing all linedefs. Such a blockmap would be horribly slow, but it shows that in theory, it is possible to construct a blockmap that works in any level.
It's possible some of this might be old hat. I need to check the ZenNode source to see if it already does anything like this.
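For a rough sense of the numbers behind the first two bullets, here is a back-of-the-envelope sketch. The helper names are made up; the sizes assume the standard lump layout (8-byte header, one 16-bit offset per block, and a 0 header word plus -1 footer word around each block list):

```python
def lump_size(blocklists):
    """Total BLOCKMAP bytes: 8-byte header, 2 bytes per block offset,
    and 2 bytes per word in each list (entries plus 0/-1 delimiters)."""
    return 8 + 2 * len(blocklists) + sum(2 * (len(b) + 2) for b in blocklists)

def lump_size_deduped(blocklists):
    """Same, but identical lists share one stored copy (ZenNode-style)."""
    unique = set(map(tuple, blocklists))
    return 8 + 2 * len(blocklists) + sum(2 * (len(b) + 2) for b in unique)

# A sparse map: 900 empty blocks all collapse onto one shared empty list.
lists = [[]] * 900 + [[1, 2], [2, 3]] * 50
print(lump_size(lists), lump_size_deduped(lists))
# -> 6408 2028
```

Even plain identical-list sharing removes most of the per-list delimiter overhead on sparse maps; merging near-identical lists, as proposed above, would extend the same idea.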


That sounds really promising! This stuff amazes me :) I just hope it's workable in practice. BTW, in vanilla it's actually limited to 64 KB, not 128 KB.


Are you sure? The Doom Wiki article says "the maximum size for a vanilla blockmap is approximately 128KB, not 64KB"

EDIT: Looks like you're right. I checked the source and it uses signed shorts, not unsigned. So it should be a 64KB limit, even though the format supports more than that.
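The signed-vs-unsigned distinction is easy to demonstrate. This is only a sketch of the read, not the engine's actual code:

```python
import struct

def read_offset_as_vanilla(raw):
    # Vanilla reads blockmap words as SIGNED 16-bit shorts ('h')...
    return struct.unpack('<h', raw)[0]

def read_offset_unsigned(raw):
    # ...but treating them as unsigned ('H') would allow the full
    # 128 KB range the on-disk format theoretically supports.
    return struct.unpack('<H', raw)[0]

word = struct.pack('<H', 40000)       # an offset past the 32767-word mark
print(read_offset_as_vanilla(word))   # -> -25536 (a garbage index)
print(read_offset_unsigned(word))     # -> 40000
```

Since offsets count 16-bit words from the start of the lump, 32768 addressable words puts the practical vanilla limit at 64 KB.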


Yeah, it's 64. The ones with more than that start crashing.

fraggle said:

Compressed blockmaps have multiple blocks that point to the same block list. I believe blockmap compression so far has focused on blocks with identical block lists, but if a line appears in a block's list but is not actually in that block, I expect it probably doesn't do any harm. Theoretically, then, two blocks with some lines in common could be "merged together" to save some space. Essentially you're trading game performance for smaller space.


This one is actually brilliant, but there's no easy way of telling whether it will be side-effect free (e.g. tracers will cross more lines than usual, thus inflating the lists of crossed linedefs) or whether it might trigger unwanted effects (if there are no further checks of whether a line really belongs in that block).

fraggle said:

Every time a line crosses a block boundary, it ends up appearing in at least two block lists. But the blockmap format allows the origin point for the grid to be specified. Careful choice of the origin could reduce the number of "splits" like this and therefore reduce the lump size as a result.



Minimizing block extension/grid dimensions through origin centering is something that every decent blockmap tool should be able to do.

But in general the problem with maps that exhaust their blockmap space is that they are too "densely" built: almost every used block has several lines crossing it, and simple tricks like saving a block here and there usually are not enough.

fraggle said:

I'm pretty sure not all lines need to be in a blockmap



Maybe, but again, it would be too complex to write an algorithm able to make this (and other) call(s) correctly all of the time. Maybe a visual editor and hand-optimization by an expert...

fraggle said:

By design, the format is limited to 128KB (pointers are 16-bit into an array of words). But the pointer points to the start of a block list and the list itself can overrun past that 128KB size. As a simple example, you could make a dumb blockmap where all blocks point to a single block list containing all linedefs. Such a blockmap would be horribly slow, but it shows that in theory, it is possible to construct a blockmap that works in any level.


That would work, by forcing the engine to check every line. If implemented, such a "universal collision block" or UCB should be used sparingly, if at all. It could be used e.g. for blocks at the outer limits of the map, decorative features etc., where a player or monster is unlikely to enter or shoot. But again, that's a judgment call better made by an expert mapper than by an algorithm, and once again a visual editor would be required that lets a mapper directly edit the blockmap, or at least leave "hints" for the blockmap builder (e.g. "this is an outer wall far from the playing area, only designed to look pretty, so don't waste a good block on it; map it to the UCB instead"). An extreme version of that would be for the blockmap NOT to include some parts of the map at all, provided this does not create other problems like no-clipping in areas where players may wander.

As for ZenNode, I'm certain it does at least simple 100%-identical-blocks compression.


Sorry for the double post, but I think this deserves a separate mention:

the single biggest inefficiency of the blockmap is this: no matter how carefully you craft the blockchains themselves (which can be "sparse", in a sense), the POINTERS to them are always a dense structure. For a 256x256 blockmap, they alone would use up ALL of the available 128 KB of space, leaving NONE for the blockchains themselves (!). No way to "skip" some, either: it's always a dense grid of pointers.

A REALLY extreme solution would be to stuff some blockchains INSIDE the space reserved for blockchain pointers. The reasoning is this: in large maps the blockchain pointer array itself will be large, but most pointers will be wasted on blocks that are IN THE VOID. Unless a player idclips, he will never visit those regions of the map... so why not place actual blockchain data in them? After all, pointers are defined from the beginning of the blockmap lump (which is another big weakness of the format...) ;-)

Of course, for this to work, you need a blockmap which leaves enough contiguous blockmap cells free (each cell gives you two bytes...so for e.g. a 32x32 or 4x256 region in the void you would get 2 KB), and a builder which is super-smart when building it, keeping track of void blocks and using "pointer space" whenever possible.

Edit: saw the "64 KB" update now. That just makes finding a solution even more pressing....so what do you think about pointer-space reuse?
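The arithmetic behind the dense pointer grid is easy to check (a sketch; the grid dimensions below are just illustrative):

```python
def pointer_array_bytes(cols, rows):
    # One 16-bit offset per block, used or not -- the grid is always dense.
    return 2 * cols * rows

for dim in (128, 181, 256):
    print(f"{dim}x{dim}: {pointer_array_bytes(dim, dim)} bytes of pointers")
# 128x128 burns 32 KB, 181x181 is already ~64 KB (the vanilla limit),
# and 256x256 needs the full 128 KB before a single blockchain is stored.
```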

Maes said:

A REALLY extreme solution would be to stuff some blockchains INSIDE the space reserved for blockchain pointers.

You should be aware that doing something of that nature will cause the lump to fail validators in modern source ports, meaning they'll have to waste time rebuilding the map at level load. Which makes demo sync impossible, btw.

Quasar said:

You should be aware that doing something of that nature will cause the lump to fail validators in modern source ports, meaning they'll have to waste time rebuilding the map at level load. Which makes demo sync impossible, btw.


But it WOULD work in vanilla, isn't that the point? Plus, some ports like ZDoom and prBoom+ always rebuild the blockmap (or even the nodes).

Maybe there should be a compatibility switch forcing reading/assimilation of such blockmaps even if they fail validation (though there might be an extra step required in order to force interpreting "pointer" data as blockchains).

Of course, I understand the problem to be distinguishing between a "weird" blockmap which however works as intended/is deliberately that way, and one which is simply borked up. My answer to that would be: give the user a choice, make it a compatibility setting.

Maes said:

Of course, I understand the problem to be distinguishing between a "weird" blockmap which however works as intended/is deliberately that way, and one which is simply borked up. My answer to that would be: give the user a choice, make it a compatibility setting.

As a rule, I try to avoid settings which mean "crash or don't crash" when it's possible; having the need for them deliberately intensified by new tools would be irritating.


Yeah, this is strictly a "can it be done in Vanilla" thing. If source ports have to rebuild, they can rebuild.

Maes said:

This one is actually brilliant, but there's no easy way of telling whether it will be side-effect free (e.g. tracers will cross more lines than usual, thus inflating the lists of crossed linedefs) or whether it might trigger unwanted effects (if there are no further checks of whether a line really belongs in that block).

I'm pretty sure this isn't a problem. Remember that the blockmap is only a very coarse list of "lines you should check" when checking for collisions and interactions, not a list of "lines that were crossed". The engine already does the line checks itself: the blockmap is just there to make the process more efficient.

Maes said:

Minimizing block extension/grid dimensions through origin centering is something that every decent blockmap tool should be able to do.

But in general the problem with maps that exhaust their blockmap space is that they are too "densely" built: almost every used block has several lines crossing it, and simple tricks like saving a block here and there usually are not enough.

It completely depends on the structure of the level, but quite a lot of levels are built around regular layouts, even if they aren't entirely grid-based. If you limit yourself to 16-pixel offsets from the normal (0,0)-aligned grid, that's 64 different offsets. It's probably not too CPU-intensive to check each of them and find out how many block boundary crossings there are in each case. No matter the level, some must be "better" choices than others.
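A brute-force origin search along those lines might look like this. The helpers are made up for illustration, and the score is a cheap bounding-box estimate rather than an exact trace of each line:

```python
def blocks_touched(x1, y1, x2, y2, ox, oy, block=128):
    """Upper bound on blocks a line lands in, using its bounding box on a
    grid with origin (ox, oy).  A real builder traces the line itself."""
    bx1, bx2 = (min(x1, x2) - ox) // block, (max(x1, x2) - ox) // block
    by1, by2 = (min(y1, y2) - oy) // block, (max(y1, y2) - oy) // block
    return (bx2 - bx1 + 1) * (by2 - by1 + 1)

def best_origin(lines, block=128, step=16):
    """Try every origin offset on a 16-unit lattice (8x8 = 64 candidates)
    and keep the one minimizing the total number of blocklist entries."""
    candidates = [(ox, oy) for ox in range(0, block, step)
                           for oy in range(0, block, step)]
    return min(candidates, key=lambda o: sum(
        blocks_touched(*line, *o, block=block) for line in lines))

# Hypothetical grid-aligned walls; the default (0, 0) origin splits none
# of them by this estimate, so it wins.
walls = [(0, 0, 64, 0), (64, 0, 64, 64), (0, 128, 64, 128)]
print(best_origin(walls))
# -> (0, 0)
```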


Maes said:

Maybe, but again, it would be too complex to write an algorithm able to make this (and other) call(s) correctly all of the time. Maybe a visual editor and hand-optimization by an expert...

I'm not entirely convinced it's workable either. Moving floors/ceilings are the big problem here. With sufficiently clever logic it might be able to work out which lines are safe, but it depends if it's worth it. I'd need to figure out some statistics about the number of candidates in typical levels to find out.


Maes said:

That would work, by forcing the engine to check every line. If implemented, such a "universal collision block" or UCB should be used sparingly, if at all. It could be used e.g. for blocks at the outer limits of the map, decorative features etc., where a player or monster is unlikely to enter or shoot. But again, that's a judgment call better made by an expert mapper than by an algorithm, and once again a visual editor would be required that lets a mapper directly edit the blockmap, or at least leave "hints" for the blockmap builder (e.g. "this is an outer wall far from the playing area, only designed to look pretty, so don't waste a good block on it; map it to the UCB instead"). An extreme version of that would be for the blockmap NOT to include some parts of the map at all, provided this does not create other problems like no-clipping in areas where players may wander.

I really want to avoid any kind of manual interaction if at all possible, because I'm sure mappers probably don't want to know about this stuff at all if they can avoid it.

I think the "huge trailing blocklist" idea ought to be a final, last resort if every other technique fails. Probably the best thing to do in that situation is to restructure the level or give up on Vanilla compatibility entirely (and the theoretical blockmap builder I haven't written yet ought to say as much). But in terms of generating a valid blockmap, I would think that the best course of action would be to combine all the largest blocklists - they're already going to be slow anyway.
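That last-resort merge can be sketched as follows (a hypothetical helper; block lists here are just lists of linedef numbers):

```python
def merge_largest(blocklists, n):
    """Replace the n largest block lists with one shared union list.

    Safe because extra lines in a block's list only add collision
    checks for the engine to reject; they never remove a needed one.
    The shared copy is stored once, so the big lists stop costing space.
    """
    largest = sorted(range(len(blocklists)),
                     key=lambda i: len(blocklists[i]), reverse=True)[:n]
    merged = sorted(set().union(*(blocklists[i] for i in largest)))
    return [merged if i in largest else blocklists[i]
            for i in range(len(blocklists))]

print(merge_largest([[1, 2, 3, 4], [3, 4, 5, 6], [7], [8, 9]], 2))
# -> [[1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6], [7], [8, 9]]
```

The cost is exactly the trade-off described above: blocks sharing the merged list make the engine check lines that are nowhere near them.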

Maes said:

As for ZenNode, I'm certain it does at least simple 100%-identical-blocks compression.

I checked the source, and it looks like you are correct.

Maes said:

the single biggest inefficiency of the blockmap is this: no matter how carefully you craft the blockchains themselves (which can be "sparse", in a sense), the POINTERS to them are always a dense structure. For a 256x256 blockmap, they alone would use up ALL of the available 128 KB of space, leaving NONE for the blockchains themselves (!). No way to "skip" some, either: it's always a dense grid of pointers.

Maybe this is a laughably naive question to be asking, but 256x256 blocks is a 32768x32768 sized map. Do any such maps even exist?


A REALLY extreme solution would be to stuff some blockchains INSIDE the space reserved for blockchain pointers. The reasoning is this: in large maps the blockchain pointer array itself will be large, but most pointers will be wasted on blocks that are IN THE VOID. Unless a player idclips, he will never visit those regions of the map... so why not place actual blockchain data in them? After all, pointers are defined from the beginning of the blockmap lump (which is another big weakness of the format...) ;-)

Of course, for this to work, you need a blockmap which leaves enough contiguous blockmap cells free (each cell gives you two bytes...so for e.g. a 32x32 or 4x256 region in the void you would get 2 KB), and a builder which is super-smart when building it, keeping track of void blocks and using "pointer space" whenever possible.

Edit: saw the "64 KB" update now. That just makes finding a solution even more pressing....so what do you think about pointer-space reuse?

It's an amusing idea, and theoretically you might be able to do this (put everything in a single block list that overlaps with the blockchain) but I think at this point we've crossed into the realm of complete insanity :)

fraggle said:

Maybe this is a laughably naive question to be asking, but 256x256 blocks is a 32768x32768 sized map. Do any such maps even exist?


It gets even better if you consider that coords can go from -32K to +32K; then you need a 512x512 blockmap.

Now, there are some maps that certainly exceed at least ONE of the two dimensions (e.g. EUROPE.WAD), which is what sparked all that "extended blockmap" development.

And I'm pretty sure there are some ZDoom map "mashups" (like the one that puts all E1 maps in a single map....) that must stretch the standard map format to its limits.

However, you don't even need to go to that extreme to start causing problems: if you reach 32K blocks by ANY combination of width and height, you will have "burned" all available space already.

Even "cutting some slack" and going to, say, 16K blocks, that still burns a good half of the blockmap's size...and at most you can fit the blockmap terminators for each block O_o

With 8K blocks, you may barely fit ONE line per block etc.

It's a broken disk format, period ><


For another gross hack that's only compatible with vanilla / vanilla-emulating ports: you can remove the block list header, which saves 2 bytes per block list and, ironically, fixes the blockmap bug while breaking source ports that have implemented fixes for it.
